
Bad Rabbit: Ten things you need to know about the latest ransomware outbreak

A new ransomware campaign has hit a number of high profile targets in Russia and Eastern Europe.

Dubbed Bad Rabbit, the ransomware first started infecting systems on Tuesday 24 October, and the way in which organisations appear to have been hit simultaneously immediately drew comparisons to this year’s WannaCry and Petya epidemics.

Following the initial outbreak, there was some confusion about what exactly Bad Rabbit is. Now that the initial panic has died down, however, it's possible to dig into what exactly is going on.

1. The cyber-attack has hit organisations across Russia and Eastern Europe

Organisations across Russia and Ukraine — as well as a small number in Germany and Turkey — have fallen victim to the ransomware. Researchers at Avast say they’ve also detected the malware in Poland and South Korea.

Russian cybersecurity company Group-IB confirmed at least three media organisations in the country have been hit by file-encrypting malware, while at the same time Russian news agency Interfax said its systems have been affected by a “hacker attack” — and were seemingly knocked offline by the incident.

Other organisations in the region including Odessa International Airport and the Kiev Metro also made statements about falling victim to a cyber-attack, while CERT-UA, the Computer Emergency Response Team of Ukraine, also posted that the “possible start of a new wave of cyberattacks to Ukraine’s information resources” had occurred, as reports of Bad Rabbit infections started to come in.

At the time of writing, it’s thought there are almost 200 infected targets, indicating that this isn’t a mass attack like WannaCry or Petya — but it’s still causing problems for infected organisations.

“The total prevalence of known samples is quite low compared to the other ‘common’ strains,” said Jakub Kroustek, malware analyst at Avast.

2. It’s definitely ransomware

Those unfortunate enough to fall victim to the attack quickly realised what had happened because the ransomware isn’t subtle — it presents victims with a ransom note telling them their files are “no longer accessible” and “no one will be able to recover them without our decryption service”.

Bad Rabbit ransom note.

Image: ESET


Victims are directed to a Tor payment page and are presented with a countdown timer. Pay within the first 40 hours or so, they’re told, and the payment for decrypting files is 0.05 bitcoin — around $285. Those who don’t pay the ransom before the timer reaches zero are told the fee will go up and they’ll have to pay more.

Bad Rabbit payment page.

Image: Kaspersky Lab


The encryption uses DiskCryptor, legitimate open-source software used for full disk encryption. Keys are generated using CryptGenRandom and then protected by a hardcoded RSA-2048 public key.
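
The key-handling pattern described here is standard hybrid encryption: generate a random per-victim key, then wrap it with a public key only the attacker can reverse. The Python sketch below is purely an illustration of that scheme — the tiny textbook-RSA parameters and function names are ours, not Bad Rabbit's, which uses CryptGenRandom and a 2048-bit key.

```python
import secrets

# Toy illustration of the hybrid key-wrapping pattern: a random key is
# generated locally, then encrypted under a hardcoded RSA public key so
# only the attacker's private key can recover it. Textbook RSA with
# toy-sized parameters, for illustration only.

N = 3233   # toy modulus (61 * 53); the real malware uses 2048 bits
E = 17     # public exponent, hardcoded in the binary
D = 2753   # private exponent, held only by the attacker

def wrap_key(session_key: int) -> int:
    """Encrypt the symmetric key with the hardcoded public key."""
    return pow(session_key, E, N)

def unwrap_key(wrapped: int) -> int:
    """Only the holder of D can recover the session key."""
    return pow(wrapped, D, N)

session_key = secrets.randbelow(N - 2) + 2   # stand-in for CryptGenRandom output
wrapped = wrap_key(session_key)
assert unwrap_key(wrapped) == session_key
```

The point of the construction is that nothing left on the victim's disk suffices to recover the key: decryption requires the private half, which never leaves the attacker.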

3. It’s based on Petya/NotPetya

If the ransom note looks familiar, that’s because it’s almost identical to the one victims of June’s Petya outbreak saw. The similarities aren’t just cosmetic either — Bad Rabbit shares behind-the-scenes elements with Petya too.

Analysis by researchers at Crowdstrike has found that Bad Rabbit and NotPetya’s DLL (dynamic link library) share 67 percent of the same code, indicating the two ransomware variants are closely related, potentially even the work of the same threat actor.

4. It spreads via a fake Flash update on compromised websites

The main way Bad Rabbit spreads is via drive-by downloads on hacked websites. No exploits are used; rather, visitors to compromised websites — some of which have been compromised since June — are told that they need to install a Flash update. Of course, this is no Flash update, but a dropper for the malicious installer.

A compromised website asking a user to install a fake Flash update which distributes Bad Rabbit.

Image: ESET

Infected websites — mostly based in Russia, Bulgaria, and Turkey — are compromised by having JavaScript injected into their HTML body or into one of their .js files.
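
For site owners, one rough way to spot this class of compromise is to audit pages for script tags loading from unexpected hosts. The sketch below is a hypothetical example using only Python's standard library; the hostnames and the injected URL are made up for illustration, not actual Bad Rabbit indicators.

```python
from html.parser import HTMLParser

# Flag any <script> src whose host isn't on a known-good allowlist.
# ALLOWED_HOSTS and the page content below are hypothetical examples.

ALLOWED_HOSTS = {"cdn.example.com", "www.example.com"}

class ScriptAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src") or ""
        host = src.split("/")[2] if src.startswith("http") else ""
        if src and host not in ALLOWED_HOSTS:
            self.suspicious.append(src)

page = '<html><body><script src="http://evil.example.net/inject.js"></script></body></html>'
auditor = ScriptAuditor()
auditor.feed(page)
print(auditor.suspicious)   # flags the injected script
```

A real deployment would compare against a baseline of the site's own pages rather than a hand-maintained allowlist, but the principle is the same.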

5. It can spread laterally across networks…

Much like Petya, Bad Rabbit comes with a potent trick up its sleeve in that it contains an SMB component which allows it to move laterally across an infected network and propagate without user interaction, say researchers at Cisco Talos.

What aids Bad Rabbit’s ability to spread is a list of simple username and password combinations which it can exploit to brute-force its way across networks. The weak passwords list consists of a number of the usual suspects for weak passwords such as simple number combinations and ‘password’.
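
The dictionary technique itself is trivial, which is exactly why it works — and the same handful of guesses can be used defensively. The hedged Python sketch below turns the idea around and audits stored password hashes against a short weak-password list; the list here is a representative sample of the usual suspects, not the malware's actual embedded credentials.

```python
import hashlib

# Audit a set of users' SHA-256 password hashes against a weak-password
# list of the kind Bad Rabbit brute-forces with. Illustrative sample
# list, not the malware's real one.

WEAK_PASSWORDS = ["password", "123456", "12345678", "admin", "qwerty"]

def audit_hashes(user_hashes):
    """Return the users whose password hash matches a weak password."""
    weak = {hashlib.sha256(p.encode()).hexdigest(): p for p in WEAK_PASSWORDS}
    return [user for user, h in user_hashes.items() if h in weak]

users = {
    "alice": hashlib.sha256(b"correct horse battery staple").hexdigest(),
    "bob": hashlib.sha256(b"password").hexdigest(),
}
print(audit_hashes(users))   # ['bob']
```

Any account that appears in the output would fall to Bad Rabbit's lateral-movement pass in a fraction of a second.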

6. … but it doesn’t use EternalBlue

When Bad Rabbit first appeared, some suggested that, like WannaCry, it exploited the EternalBlue exploit to spread. However, this no longer appears to be the case.

“We currently have no evidence that the EternalBlue exploit is being utilized to spread the infection,” Martin Lee, Technical Lead for Security Research at Talos told ZDNet.

7. It may not be indiscriminate

At the same point following the WannaCry outbreak, hundreds of thousands of systems around the world had fallen victim to that ransomware. However, Bad Rabbit doesn’t appear to be indiscriminately infecting targets; rather, researchers have suggested that it only infects selected targets.

“Our observations suggest that this has been a targeted attack against corporate networks,” said Kaspersky Lab researchers.

Meanwhile, researchers at ESET say instructions in the script injected into infected websites “can determine if the visitor is of interest and then add content to the page” if the target is deemed suitable for infection.

However, at this stage, there’s no obvious reason why media organisations and infrastructure in Russia and Ukraine have been specifically targeted in this attack.

8. It isn’t clear who is behind it

At this time, it’s still unknown who is distributing the ransomware or why, but the similarity to Petya has led some researchers to suggest that Bad Rabbit is the work of the same attack group — although that doesn’t help identify the attacker or the motive either, because the perpetrator of June’s epidemic has never been identified.

What marks this attack out is how it has primarily hit Russia: Eastern European cybercriminal organisations tend to avoid attacking the ‘motherland’, suggesting this is unlikely to be a Russian group.

9. It contains Game of Thrones references

Whoever is behind Bad Rabbit, they appear to be fans of Game of Thrones: the code contains references to Viserion, Drogon, and Rhaegal, the dragons which feature in the television series and the novels it is based on. The authors of the code are therefore not doing much to change the stereotypical image of hackers as geeks and nerds.

References to Game of Thrones dragons in the code.

Image: Kaspersky Lab

10. You can protect yourself against becoming infected by it

At this stage, it’s unknown whether it’s possible to decrypt files locked by Bad Rabbit without giving in and paying the ransom – although researchers say that those who fall victim shouldn’t pay the fee, as it will only encourage the growth of ransomware.

A number of security vendors say their products protect against Bad Rabbit. But for those who want to be sure they don’t fall victim to the attack, Kaspersky Lab says users can block execution of the files ‘c:\windows\infpub.dat’ and ‘c:\windows\cscc.dat’ in order to prevent infection.
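
The widely reported community "vaccine" took a related approach: pre-create those files so the dropper cannot write its payload. The Python sketch below shows the idea against a temporary directory so it is safe to run anywhere; on a real machine the target would be C:\Windows, the script would need Administrator rights, and this is a stopgap, not a substitute for proper endpoint protection.

```python
import os
import stat
import pathlib
import tempfile

# Pre-create the files Bad Rabbit writes (infpub.dat and cscc.dat) and
# strip write permissions so the dropper cannot install its payload.
# A temp directory stands in for C:\Windows to keep the sketch harmless.

def vaccinate(windows_dir):
    created = []
    for name in ("infpub.dat", "cscc.dat"):
        path = pathlib.Path(windows_dir) / name
        path.touch(exist_ok=True)          # empty placeholder file
        os.chmod(path, stat.S_IREAD)       # read-only: deny overwrites
        created.append(str(path))
    return created

demo_dir = tempfile.mkdtemp()
print(vaccinate(demo_dir))
```

Because the malware checks for these filenames before dropping its components, their mere read-only presence is reported to abort the infection.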



NSA bloke used backdoored MS Office key-gen, exposed secret exploits – Kaspersky

Analysis The NSA staffer who took home top-secret US government spyware installed a backdoored key generator for a pirated copy of Microsoft Office on his PC – exposing the confidential cyber-weapons on the computer to hackers.

That’s according to Kaspersky Lab, which today published a report detailing, in its view, how miscreants could have easily stolen powerful and highly confidential software exploits from the NSA employee’s bedroom Windows PC.

Earlier this month, it was alleged Russian intelligence services were able to search computers running Moscow-based Kaspersky’s antivirus tools, allowing the snoops to seek out foreign intelligence workers and steal secrets from their hard drives.

The NSA employee’s home PC was one of those tens of millions of machines running Kaspersky antivirus. Kaspersky was, therefore, accused of detecting the American cyber-weapons on the PC via its tools, tipping off Kremlin spies, and effectively helping them hack the machine to siphon off the valuable vulnerability exploits.

Well, not quite, says Kaspersky.

According to the Russian security giant, the staffer temporarily switched off the antivirus protection on the PC, and infected his personal computer with malware from a product key generator while trying to use a bootleg copy of Office.

Later, once reactivated, Kaspersky’s software searched the machine as usual, removed the trojanized key-gen tool, found the secret NSA code during the scan, and uploaded it to Kaspersky’s cloud for further study by staff. Kaspersky’s technology is always on the lookout for the NSA’s secretive surveillance tools in the wild – such as the hard drive firmware spyware it revealed in 2015 – so it’s no surprise the archive of source code and other files was detected and copied for analysis.

Users can configure Kaspersky’s software to not send suspicious samples back to Mother Russia for scrutiny, however, in this case, the NSA staffer didn’t take that option, allowing the highly sensitive files to escape.

Once in the hands of a reverse-engineer, it became clear this was leaked NSA software. CEO Eugene Kaspersky was alerted, copies of the data were deleted, and “the archive was not shared with any third parties,” we’re told.

Kaspersky’s argument is that anyone could have abused the backdoored key generator to remotely log into the machine and steal the secrets the NSA employee foolishly took home, rather than state spies abusing its antivirus to snoop on people.

Kaspersky does share intelligence of upcoming cyber-security threats, such as new strains of spyware and other software nasties, with its big customers and governments. However, in this case, it is claimed, the American tools went no further, the argument being that if the Russians got hold of the leaked exploits, it wasn’t via Kaspersky Lab.

That the biz deleted the archive almost immediately raised eyebrows in the infosec world.

Here’s a summary of what Kaspersky said happened:


On September 11, 2014, Kaspersky’s software detected the Win32.GrayFish.gen trojan on the NSA staffer’s PC. Some time after that, the employee disabled the antivirus to run an activation-key generator designed to unlock pirated copies of Microsoft Office 2013. The malicious executable was downloaded along with an ISO file of Office 2013.

As is so often the case with rogue key-gens, the software came with malware included, which was why the employee had to disable his AV. Fast forward to October 4, and Kaspersky’s software was allowed to run again, and the fake key-gen tool’s bundled malware, Win32.Mokes.hvl, which has been on the security shop’s naughty list since 2013, was clocked by the defense software.

“To install and run this keygen, the user appears to have disabled the Kaspersky products on his machine,” Kaspersky Lab said in its report.

“Our telemetry does not allow us to say when the antivirus was disabled, however, the fact that the keygen malware was later detected as running in the system suggests the antivirus had been disabled or was not running when the keygen was run. Executing the keygen would not have been possible with the antivirus enabled.”

The user was warned his computer was infected, so he told the toolkit to scan and remove all threats. The antivirus duly deleted the Mokes malware, but also found several new types of NSA code – which appeared to be similar to the agency’s Equation Group weapons that Kaspersky was already familiar with – which were pinged back to Russian servers for analysis.

According to the security firm’s account, one of its researchers recognized that they had received some highly advanced malware, and reported the discovery to Kaspersky’s CEO Eugene:

One of the files detected by the product as new variants of Equation APT malware was a 7zip archive.

The archive itself was detected as malicious and submitted to Kaspersky Lab for analysis, where it was processed by one of the analysts. Upon processing, the archive was found to contain multiple malware samples and source code for what appeared to be Equation malware.

After discovering the suspected Equation malware source code, the analyst reported the incident to the CEO. Following a request from the CEO, the archive was deleted from all our systems. The archive was not shared with any third parties.

Kaspersky said it never received any more malware samples from that particular user, and went public with its Equation Group findings in February 2015. It says that after that disclosure, it began to find more Equation Group malware samples in the same IP range as the original discovery – honeypots to snare whoever may have stolen copies of the cyber-weapons, presumably.

“These seem to have been configured as ‘honeypots’, each computer being loaded with various Equation-related samples,” Kaspersky Lab said. “No unusual (non-executable) samples have been detected and submitted from these ‘honeypots’ and detections have not been processed in any special way.”


Kali Linux Revealed – Free Book

Kali Linux Revealed is available to buy on – or it can be downloaded free from this link:


For anyone interested in pentesting, Kali Linux is a must-have operating system.

KRACK – Key Reinstallation Attack – WPA2 cracked

We discovered serious weaknesses in WPA2, a protocol that secures all modern protected Wi-Fi networks. An attacker within range of a victim can exploit these weaknesses using key reinstallation attacks (KRACKs). Concretely, attackers can use this novel attack technique to read information that was previously assumed to be safely encrypted. This can be abused to steal sensitive information such as credit card numbers, passwords, chat messages, emails, photos, and so on. The attack works against all modern protected Wi-Fi networks. Depending on the network configuration, it is also possible to inject and manipulate data. For example, an attacker might be able to inject ransomware or other malware into websites.

The weaknesses are in the Wi-Fi standard itself, and not in individual products or implementations. Therefore, any correct implementation of WPA2 is likely affected. To prevent the attack, users must update affected products as soon as security updates become available. Note that if your device supports Wi-Fi, it is most likely affected. During our initial research, we discovered ourselves that Android, Linux, Apple, Windows, OpenBSD, MediaTek, Linksys, and others, are all affected by some variant of the attacks. For more information about specific products, consult the database of CERT/CC, or contact your vendor.

The research behind the attack will be presented at the Computer and Communications Security (CCS) conference, and at the Black Hat Europe conference. Our detailed research paper can already be downloaded.


As a proof-of-concept we executed a key reinstallation attack against an Android smartphone. In this demonstration, the attacker is able to decrypt all data that the victim transmits. For an attacker this is easy to accomplish, because our key reinstallation attack is exceptionally devastating against Linux and Android 6.0 or higher. This is because Android and Linux can be tricked into (re)installing an all-zero encryption key (see below for more info). When attacking other devices, it is harder to decrypt all packets, although a large number of packets can nevertheless be decrypted. In any case, the following demonstration highlights the type of information that an attacker can obtain when performing key reinstallation attacks against protected Wi-Fi networks:

The research [PDF], titled Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2, has been published by Mathy Vanhoef of KU Leuven and Frank Piessens of imec-DistriNet, Nitesh Saxena and Maliheh Shirvanian of the University of Alabama at Birmingham, Yong Li of Huawei Technologies, and Sven Schäge of Ruhr-Universität Bochum.
The team has successfully executed the key reinstallation attack against an Android smartphone, showing how an attacker can decrypt all data that the victim transmits over a protected Wi-Fi network.

“Decryption of packets is possible because a key reinstallation attack causes the transmit nonces (sometimes also called packet numbers or initialization vectors) to be reset to zero. As a result, the same encryption key is used with nonce values that have already been used in the past,” the researchers say.

The researchers say their key reinstallation attack could be exceptionally devastating against Linux and Android 6.0 or higher, because “Android and Linux can be tricked into (re)installing an all-zero encryption key (see below for more info).”
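
Why resetting nonces is so devastating can be shown in a few lines of Python. The toy stream cipher below stands in for CCMP (it uses a hash-based keystream, not the real cipher): because the keystream depends only on the key and nonce, two packets encrypted under a reused nonce XOR together to cancel the keystream entirely, and a known plaintext then reveals the other packet.

```python
import hashlib

# Toy demonstration of nonce reuse in a stream cipher. Not WPA2's actual
# CCMP: the keystream here is hash-derived, but the failure mode is the
# same, since keystream depends only on (key, nonce).

def keystream(key, nonce, length):
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(
            key + nonce.to_bytes(8, "big") + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return out[:length]

def encrypt(key, nonce, plaintext):
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"session-key"
p1 = b"GET /login HTTP/1.1"
p2 = b"password=hunter2..."            # same length, same (reused) nonce
c1 = encrypt(key, 0, p1)
c2 = encrypt(key, 0, p2)

# The attacker never learns the key, yet c1 XOR c2 == p1 XOR p2:
xor = bytes(a ^ b for a, b in zip(c1, c2))
recovered = bytes(x ^ p for x, p in zip(xor, p1))   # known plaintext reveals p2
assert recovered == p2
```

The all-zero-key bug on Android and Linux is worse still: there the attacker does not even need a known plaintext, since the keystream itself becomes predictable.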

WPA2 Vulnerabilities and their Brief Details

The key management vulnerabilities in the WPA2 protocol discovered by the researchers have been tracked as:

  • CVE-2017-13077: Reinstallation of the pairwise encryption key (PTK-TK) in the four-way handshake.
  • CVE-2017-13078: Reinstallation of the group key (GTK) in the four-way handshake.
  • CVE-2017-13079: Reinstallation of the integrity group key (IGTK) in the four-way handshake.
  • CVE-2017-13080: Reinstallation of the group key (GTK) in the group key handshake.
  • CVE-2017-13081: Reinstallation of the integrity group key (IGTK) in the group key handshake.
  • CVE-2017-13082: Accepting a retransmitted Fast BSS Transition (FT) Reassociation Request and reinstalling the pairwise encryption key (PTK-TK) while processing it.
  • CVE-2017-13084: Reinstallation of the STK key in the PeerKey handshake.
  • CVE-2017-13086: Reinstallation of the Tunneled Direct-Link Setup (TDLS) PeerKey (TPK) key in the TDLS handshake.
  • CVE-2017-13087: Reinstallation of the group key (GTK) while processing a Wireless Network Management (WNM) Sleep Mode Response frame.
  • CVE-2017-13088: Reinstallation of the integrity group key (IGTK) while processing a Wireless Network Management (WNM) Sleep Mode Response frame.

The researchers discovered the vulnerabilities last year, but sent out notifications to several vendors on July 14, along with the United States Computer Emergency Readiness Team (US-CERT), which sent out a broad warning to hundreds of vendors on 28 August 2017.

“The impact of exploiting these vulnerabilities includes decryption, packet replay, TCP connection hijacking, HTTP content injection, and others,” the US-CERT warned. “Note that as protocol-level issues, most or all correct implementations of the standard will be affected.”


AES Encryption – Should it have been the winning algorithm?

Generally, in the AES competition you would assume the safest ciphers would be selected; however, for AES this was not the case. The two safest ciphers, Serpent and Twofish, did not win.

Here’s a table of the safety factor for each entry:

[Image: table of safety factors for each AES finalist]



There are three algorithms that are weak in terms of safety factor: Rijndael, RC6 and MARS.

At this point Rijndael should have been deselected, and RC6 with it.

If we look at speed, there is little difference between RC6, Rijndael and Twofish, but Twofish has a huge advantage in offering a safety factor of 2.67.

Serpent is the clear winner in terms of safety factor, but it is the slowest.

Twofish on paper appears to be the best overall candidate, as it achieves a safety factor of greater than 2 with fast speeds.
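
The safety factor being compared here is simply a cipher's full round count divided by the number of rounds reached by the best public attack. The small Python sketch below reproduces Twofish's 2.67 from its 16 rounds against 6-round attacks; the attacked-round counts for the other finalists are circa-2000 estimates and should be treated as illustrative.

```python
# Safety factor = total rounds / rounds reached by the best public
# cryptanalysis. Attacked-round figures below are period estimates,
# given for illustration of the ranking discussed above.

finalists = {
    # name: (total rounds, rounds reached by best public attack, c. 2000)
    "Serpent":  (32, 9),
    "Twofish":  (16, 6),
    "Rijndael": (10, 7),
}

def safety_factor(total, attacked):
    return total / attacked

for name, (total, attacked) in sorted(
        finalists.items(), key=lambda kv: -safety_factor(*kv[1])):
    print(f"{name:8s} {safety_factor(total, attacked):.2f}")
```

The output ranking matches the argument above: Serpent on top, Twofish comfortably above 2, and Rijndael at the bottom of the three.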

Instead we got the weakest cipher, Rijndael – and, sarcasm aside, you have to wonder why the NSA/NIST would select the weakest.

If NIST wanted speed, then even RC6 would have combined faster speeds with a slightly better safety factor.  However, the AES winner was the weakest entrant, now why is that?

TLS 1.3 – Discussion on Issues with Session Tickets for TLS 1.2

More specifically, TLS 1.2 Session Tickets.

Session Tickets, specified in RFC 5077, are a technique to resume TLS sessions by storing key material encrypted on the clients. In TLS 1.2 they speed up the handshake from two to one round-trips.

Unfortunately, a combination of deployment realities and three design flaws makes them the weakest link in modern TLS, potentially turning limited key compromise into passive decryption of large amounts of traffic.

How Session Tickets work

A modern TLS 1.2 connection starts like this:

  • The client sends the supported parameters;
  • the server chooses the parameters and sends the certificate along with the first half of the Diffie-Hellman key exchange;
  • the client sends the second half of the Diffie-Hellman exchange, computes the session keys and switches to encrypted communication;
  • the server computes the session keys and switches to encrypted communication.

This involves two round-trips between client and server before the connection is ready for application data.

[Diagram: a full TLS 1.2 handshake, taking two round-trips]

The Diffie-Hellman key exchange is what provides Forward Secrecy: even if the attacker obtains the certificate key and a connection transcript after the connection ended they can’t decrypt the data, because they don’t have the ephemeral session key.

Forward Secrecy also translates into security against a passive attacker. An attacker that can wiretap but not modify the traffic has the same capabilities of an attacker that obtains a transcript of the connection after it’s over. Preventing passive attacks is important because they can be carried out at scale with little risk of detection.
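
The Forward Secrecy property comes directly from the ephemeral Diffie-Hellman exchange, which is easy to sketch with Python's standard library. The group below is toy-sized (a Mersenne prime) purely for illustration; real TLS uses 2048-bit-plus groups or elliptic curves.

```python
import secrets

# Toy finite-field Diffie-Hellman. The session key depends only on
# ephemeral per-connection secrets that both sides discard afterwards,
# so later theft of the server's long-term certificate key reveals
# nothing about recorded traffic. Toy-sized prime, illustration only.

P = 2**127 - 1   # a Mersenne prime
G = 3

client_secret = secrets.randbelow(P - 2) + 1   # ephemeral, never stored
server_secret = secrets.randbelow(P - 2) + 1   # ephemeral, never stored

client_share = pow(G, client_secret, P)   # first half of the exchange
server_share = pow(G, server_secret, P)   # second half of the exchange

# Both sides derive the same session key; a wiretapper sees only the shares.
client_key = pow(server_share, client_secret, P)
server_key = pow(client_share, server_secret, P)
assert client_key == server_key
```

Once both secrets are erased at connection teardown, nothing recorded on the wire suffices to recompute the key — which is exactly the property Session Tickets end up undermining.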

Session Tickets reduce the overhead of the handshake. When a client supports Session Tickets, the server will encrypt the session key with a key only the server has, the Session Ticket Encryption Key (STEK), and send it to the client. The client holds on to that encrypted session key, called a ticket, and to the corresponding session key. The server forgets about the client, allowing stateless deployments.

The next time the client wants to connect to that server it sends the ticket along with the initial parameters. If the server still has the STEK it will decrypt the ticket, extract the session key, and start using it. This establishes a resumed connection and saves a round-trip by skipping the key negotiation. Otherwise, client and server fallback to a normal handshake.

[Diagram: a resumed TLS 1.2 handshake, taking one round-trip]

For a recap you can also watch the first part of my 33c3 talk.
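
The issue-and-resume flow above can be sketched with Python's standard library. RFC 5077 actually recommends AES-CBC plus HMAC for the ticket itself; to stay dependency-free this sketch substitutes an HMAC-derived keystream, so treat it as an illustration of the flow rather than a production ticket format.

```python
import hmac
import hashlib
import secrets

# Sketch of the RFC 5077 ticket flow. The STEK wraps the session key and
# authenticates the ticket; the cipher choice here is a stand-in for the
# AES-CBC + HMAC construction the RFC recommends.

STEK = secrets.token_bytes(32)   # Session Ticket Encryption Key, server-side only

def make_ticket(session_key):
    """Wrap the session key under the STEK and hand it to the client."""
    nonce = secrets.token_bytes(16)
    pad = hmac.new(STEK, b"enc" + nonce, hashlib.sha256).digest()[:len(session_key)]
    body = nonce + bytes(a ^ b for a, b in zip(session_key, pad))
    return body + hmac.new(STEK, b"mac" + body, hashlib.sha256).digest()

def open_ticket(ticket):
    """On resumption, verify and unwrap; None means fall back to a full handshake."""
    body, tag = ticket[:-32], ticket[-32:]
    if not hmac.compare_digest(tag, hmac.new(STEK, b"mac" + body, hashlib.sha256).digest()):
        return None
    nonce, wrapped = body[:16], body[16:]
    pad = hmac.new(STEK, b"enc" + nonce, hashlib.sha256).digest()[:len(wrapped)]
    return bytes(a ^ b for a, b in zip(wrapped, pad))

session_key = secrets.token_bytes(32)
ticket = make_ticket(session_key)          # stored client-side; server stays stateless
assert open_ticket(ticket) == session_key  # server recovers the key on resumption
```

Note how the server needs nothing but the STEK: that statelessness is the feature, and, as the three flaws below show, also the problem.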

Fatal flaw #1

The first problem with 1.2 Session Tickets is that resumed connections don’t perform any Diffie-Hellman exchange, so they don’t offer Forward Secrecy against the compromise of the STEK. That is, an attacker that obtains a transcript of a resumed connection and the STEK can decrypt the whole conversation.

The specification addresses this by stating that STEKs must be rotated and destroyed periodically. I now believe this to be extremely unrealistic.

Session Tickets were expressly designed for stateless server deployments, implying scenarios where there are multiple servers serving the same site without shared state. These servers must also share STEKs, or resumption wouldn’t work across them.

As soon as a key requires distribution it’s exposed to an array of possible attacks that an ephemeral key in memory doesn’t face. It has to be generated somewhere, and transmitted somehow between the machines, and that transmission might be recorded or persisted. Twitter wrote about how they faced and approached exactly this problem.

Moreover, an attacker that compromises a single machine can now decrypt traffic flowing through other machines, potentially violating security assumptions.

Finally, if a key is not properly rotated it allows an attacker to decrypt past traffic upon compromise.

TLS 1.3 solves this by supporting Diffie-Hellman along with Session Tickets, but TLS 1.2 was not yet structured to support one round trip Diffie-Hellman (because of the legacy static RSA structure).

These observations are not new, Adam Langley wrote about them in 2013 and TLS 1.3 was indeed built to address them.

Fatal flaw #2

Session Tickets contain the session keys of the original connection, so a compromised Session Ticket lets the attacker decrypt not only the resumed connection, but also the original connection.

This potentially degrades the Forward Secrecy of non-resumed connections, too.

The problem is exacerbated when a session is regularly resumed, and the same session keys keep getting re-wrapped into new Session Tickets (a resumed connection can in turn generate a Session Ticket), possibly with different STEKs over time. The same session key can stay in use for weeks or even months, weakening Forward Secrecy.

TLS 1.3 addresses this by effectively hashing (a one-way function) the current keys to obtain the keys for the resumed connection. While hashing is a pretty obvious solution, in TLS 1.2 there was no structured key schedule, so there was no easy agnostic way to specify how keys should be derived for each different cipher suite.
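
The one-way derivation TLS 1.3 uses can be illustrated in two lines. The label and construction below are ours for illustration — the real key schedule uses HKDF — but the property is the same: stepping from the current key to the next is trivial, while going backwards requires inverting the hash.

```python
import hashlib

# Illustrative one-way resumption-key derivation. The actual TLS 1.3
# key schedule uses HKDF; hashing with a fixed label shown here gives
# the same forward-only property.

def next_resumption_key(current_key):
    return hashlib.sha256(b"resumption" + current_key).digest()

original = hashlib.sha256(b"handshake secret").digest()
resumed = next_resumption_key(original)

assert resumed != original   # forward step is trivial...
# ...but recovering `original` from `resumed` means inverting SHA-256.
```

So even if a resumed connection's key leaks, the original connection's key, and everything encrypted under it, stays out of reach.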

Fatal flaw #3

The NewSessionTicket message containing the Session Ticket is sent from the server to the client just before the ChangeCipherSpec message.

Client                                               Server

ClientHello                  -------->
                             <--------      ServerHelloDone
Finished                     -------->
                                           NewSessionTicket
                                         [ChangeCipherSpec]
                             <--------             Finished
Application Data             <------->     Application Data

The ChangeCipherSpec message enables encryption with the session keys and the negotiated cipher, so everything exchanged during the handshake before that message is sent in plaintext.

This means that Session Tickets are sent in the clear at the beginning of the original connection.

(╯°□°)╯︵ ┻━┻

An attacker with the STEK doesn’t need to wait until session resumption is attempted. Session Tickets containing the current session keys are sent at the beginning of every connection that merely supports Session Tickets. In plaintext on the wire, ready to be decrypted with the STEK, fully bypassing Diffie-Hellman.

TLS 1.3 solves this by… not sending them in plaintext. There is no strong reason I can find for why TLS 1.2 wouldn’t wait until after the ChangeCipherSpec to send the NewSessionTicket. The two messages are sent back to back in the same flight. Someone suggested it might be not to complicate implementations that do not expect encrypted handshake messages (except Finished).

1 + 2 + 3 = dragnet surveillance

The unfortunate combination of these three well known flaws is that an attacker that obtains the Session Ticket Encryption Key can passively decrypt all connections that support Session Tickets, resumed and not.

It’s grimly similar to a key escrow system: just before switching to encrypted communication, the session keys are sent on the wire encrypted with a (somewhat) fixed key.

Passive attacks are the enablers of dragnet surveillance, what HTTPS aims to prevent, and the same actors that are known to engage in dragnet surveillance have specialized in surgical key extraction attacks.

There is no proof that these attacks are currently performed and the aim of this post is not to spread FUD about TLS, which is still the most impactful security measure on the Internet today despite all its defects. However, war-gaming the most effective attacks is a valuable exercise to ensure we focus on improving the important parts, and Session Tickets are often the single weakest link in TLS, far ahead of the CA system that receives so much more attention.

Session Tickets in the real world

The likelihood and impact of the described attacks change depending on how Session Tickets are deployed.

Drew Springall et al. made a good survey in “Measuring the Security Harm of TLS Crypto Shortcuts”, revealing how many networks neglect to regularly rotate STEKs. Tim Taubert wrote about what popular software stacks do regarding key rotation. The landscape is bleak.

In some cases, the same STEK can be used across national borders, putting it under multiple jurisdictional threats. A single compromised machine then enables an attacker to decrypt traffic passively across the whole world by simply exfiltrating a short key every rotation period.

Mitigating this by using different STEKs across geographical locations involves a trade-off, since it disables session resumption for clients roaming across them. It does however increase the cost for what appears to be the easiest dragnet surveillance avenue at this time, which is always a good result.

In conclusion, I can’t wait for TLS 1.3.


Cheat sheet: How to become a cybersecurity pro

Executive summary

  • Why is there an increased demand for cybersecurity professionals? Cybercrime has exploded in the past couple of years, with major ransomware attacks such as WannaCry and Petya putting enterprises’ data at risk. To protect their information and that of their clients, companies across all industries are seeking cyber professionals to secure their networks.
  • What are some of the cybersecurity job roles? A career in cybersecurity can take the form of various roles, including penetration tester, chief information security officer (CISO), security engineer, incident responder, security software developer, security auditor, or security consultant.
  • What skills are required to work in cybersecurity? The skills required to work in cybersecurity vary depending on the position and company, but generally may include penetration testing, risk analysis, and security assessment. Certifications, including Certified in Risk and Information Systems Control (CRISC), Certified Information Security Manager (CISM), and Certified Information Systems Security Professional (CISSP) are also in demand, and can net you a higher salary in the field.
  • Where are the hottest markets for cybersecurity jobs? Top companies including Apple, Lockheed Martin, General Motors, Capital One, and Cisco are all hiring cybersecurity professionals. Industries such as healthcare, education, and government are most likely to suffer a cyberattack, which will probably lead to an increase in the number of IT security jobs in these sectors.
  • What is the average salary of a cybersecurity professional? The average salary for a cybersecurity professional depends on the position. For example, information security analysts earn a median salary of $92,600 per year, according to the US Bureau of Labor Statistics, while CISOs earn a median salary of $212,462. Salaries are significantly higher in certain cities, such as San Francisco and New York.
  • What are typical interview questions for a career in cybersecurity? Questions can vary depending on the position and what the specific company is looking for, according to Forrester analyst Jeff Pollard. For entry and early career roles, more technical questions should be expected. As you move up the ranks, the questions may become more about leadership, running a program, conflict resolution, and budgeting.
  • Where can I find resources for a career in cybersecurity? ISACA, (ISC)², ISSA, and The SANS Institute are national and international organizations where you can seek out information about the profession as well as certification and training options. A number of universities and online courses also offer cybersecurity-related degrees, certifications, and prep programs.


Why is there an increased demand for cybersecurity professionals?

Cybercrime has exploded in the past couple of years, with major ransomware attacks such as WannaCry and Petya putting enterprises’ data at risk. The rise of the Internet of Things (IoT) has also opened up new threat vectors. To protect their information and that of their clients, companies across all industries are seeking cyber professionals to secure their networks.

However, many enterprises face difficulties filling these positions: 55% of US organizations reported that open cyber positions take at least three months to fill, while 32% said they take six months or more, according to an ISACA report. And 27% of companies said they are unable to fill cybersecurity positions at all.

Cybersecurity remains a relatively new field compared to other computer sciences, so a lack of awareness is part of the reason for the talent shortage, according to Lauren Heyndrickx, who is now CISO at JCPenney. Misconceptions about what a cybersecurity job actually entails are common, and might be part of the reason few women and minorities go into the field, she added. However, enrollment in computer science programs has increased tremendously in the past couple of years, and many schools are adding cybersecurity majors and concentrations, said Rachel Greenstadt, associate professor of computer science at Drexel University.


What are some of the cybersecurity job roles?

Cybersecurity jobs span a number of different roles with a variety of job functions, depending on the title as well as an individual company’s needs.

In-demand roles include penetration testers, who go into a system or network, find vulnerabilities, and either report them to the organization or patch them themselves. Cybersecurity engineers, who often come from a technical background within development, dive into code to determine flaws and how to strengthen an organization’s security posture. Security software developers integrate security into applications software during the design and development process.

Computer forensics experts conduct security incident investigations, accessing and analyzing evidence from computers, networks, and data storage devices. Security consultants act as advisors, designing and implementing the strongest possible security solutions based on the needs and threats facing an individual company.

At the top of the chain, CISOs helm a company’s cybersecurity strategy, and must continuously adapt to battle the latest threats.

What skills are required to work in cybersecurity?

The skills required to work in cybersecurity vary depending on what position you enter and what company you work for. Generally, cybersecurity workers are responsible for tasks such as penetration testing (the practice of testing a computer system, network, or web application to find vulnerabilities that an attacker could exploit), risk analysis (the process of defining and analyzing the cyber threats to a business, and aligning tech-related objectives to business objectives), and security assessment (a process that identifies the current security posture of an information system or organization, and offers recommendations for improvement).
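As a concrete taste of the first of those skills, a port check is one of the most basic building blocks of a penetration test: is a service listening where it shouldn't be? The sketch below is purely illustrative (the `port_open` helper and the self-hosted listener are our own inventions, not part of any named tool), and it tests itself against a listener it opens locally so it stays self-contained:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Demonstrate against a listener we control: open a socket on an
# ephemeral port, then confirm the checker sees it as open.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]

print(port_open("127.0.0.1", port))   # True: the port is listening
listener.close()
```

Real assessments layer service fingerprinting, vulnerability matching, and reporting on top of primitives like this.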

SEE: How to build a successful career in cybersecurity (free PDF) (TechRepublic)

Certifications in cybersecurity teach these and other valuable job skills, and often lead to higher salaries in the field. Those such as Certified in Risk and Information Systems Control (CRISC), Certified Information Security Manager (CISM), and Certified Information Systems Security Professional (CISSP) are currently in high demand.

Cybersecurity jobs don’t necessarily require developer skills or a degree, Pollard said. “You don’t need a bachelor’s degree in a specific field to be great at security; in fact, you don’t necessarily need [a degree] at all,” he said. “Recognize that cybersecurity is a skill, and teach people the profession of enterprise security. That means treating it like an apprenticeship or training program.”

Cybersecurity is an interdisciplinary field that requires knowledge in tech, human behavior, finance, risk, law, and regulation. Many people in the cybersecurity workforce enter the field from other careers that tap these skills, and translate them to cyber.

“If you have security skills, there are plenty of opportunities available for you,” Pollard said. “If you have an interest in security and perhaps have a nontraditional background but are willing to learn, opportunities are certainly open from that perspective as well.”


CCleaner hacked to distribute malware

If you downloaded or updated the CCleaner application on your computer between August 15 and September 12 of this year from its official website, then pay attention—your computer has been compromised.

Avast and Piriform have both confirmed that the Windows 32-bit version of CCleaner v5.33.6162 and CCleaner Cloud v1.07.3191 were affected by the malware.

Detected on 13 September, the malicious version of CCleaner contains a multi-stage malware payload that steals data from infected computers and sends it to the attackers’ remote command-and-control servers.


Moreover, the unknown hackers signed the malicious installation executable (v5.33) using a valid digital signature issued to Piriform by Symantec and used a Domain Generation Algorithm (DGA), so that if the attackers’ server went down, the DGA could generate new domains to receive and send stolen information.
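The article doesn’t describe CCleaner’s actual algorithm, but the general idea behind a DGA can be sketched in a few lines: malware and operator both derive the same candidate domains from a shared seed plus the date, so seizing one domain never cuts the channel. Everything below (the seed, the domain format, the use of SHA-256) is a hypothetical illustration, not the malware’s real code:

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5) -> list:
    """Derive a deterministic list of rendezvous domains from a shared
    seed and the current date. Both the malware and the operator can
    compute the same list, so a fresh domain is always available."""
    domains = []
    material = f"{seed}-{day.isoformat()}".encode()
    for i in range(count):
        digest = hashlib.sha256(material + bytes([i])).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

# Same seed + same date -> same domains, on both ends of the channel.
print(generate_domains("example-seed", date(2017, 9, 13)))
```

Defenders exploit the same determinism: if the seed is recovered, future domains can be pre-registered or sinkholed.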

“All of the collected information was encrypted and encoded by base64 with a custom alphabet,” says Paul Yung, V.P. of Products at Piriform. “The encoded information was subsequently submitted to an external IP address 216.126.x.x (this address was hardcoded in the payload, and we have intentionally masked its last two octets here) via a HTTPS POST request.”
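The general trick Yung describes, standard base64 remapped through a custom alphabet, can be sketched as follows. The shuffled alphabet and sample payload here are made up for illustration (the malware’s real alphabet is not public); the point is that the output decodes to garbage with a stock base64 decoder but round-trips perfectly if you know the mapping:

```python
import base64

STANDARD = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
# A hypothetical shuffled alphabet (rotation of the standard one).
CUSTOM   = "QRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/ABCDEFGHIJKLMNOP"

def encode(data: bytes) -> str:
    """Standard base64, then remap each character into the custom alphabet."""
    return base64.b64encode(data).decode().translate(str.maketrans(STANDARD, CUSTOM))

def decode(text: str) -> bytes:
    """Reverse the remapping, then standard base64-decode."""
    return base64.b64decode(text.translate(str.maketrans(CUSTOM, STANDARD)))

payload = b"hostname=WIN-PC;procs=chrome.exe,svchost.exe"
blob = encode(payload)
assert decode(blob) == payload                      # round-trips with the key
assert blob != base64.b64encode(payload).decode()   # but isn't standard base64
```

This is obfuscation, not encryption: anyone who recovers the alphabet (as the Piriform analysts did) can decode the traffic.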

The malicious software was programmed to collect a large amount of user data, including:

  • Computer name
  • List of installed software, including Windows updates
  • List of all running processes
  • IP and MAC addresses
  • Additional information like whether the process is running with admin privileges and whether it is a 64-bit system.


According to the Talos researchers, around 5 million people download CCleaner (or Crap Cleaner) each week, which indicates that more than 20 million people could have been infected with the malicious version of the app.

“The impact of this attack could be severe given the extremely high number of systems possibly affected. CCleaner claims to have over 2 billion downloads worldwide as of November 2016 and is reportedly adding new users at a rate of 5 million a week,” Talos said.

However, Piriform estimated that up to 3 percent of its users (up to 2.27 million people) were affected by the malicious installation.

Affected users are strongly recommended to update their CCleaner software to version 5.34 or higher in order to protect their computers from being compromised.


Windows for Linux Nerds

You can learn a lot more about this from the Windows Subsystem for Linux Overview. I will go over some of the parts I found to be the most interesting.

The Windows NT kernel was designed from the beginning to support running POSIX, OS/2, and other subsystems. In the early days, these were just user-mode programs that would interact with ntdll to perform system calls. Since the Windows NT kernel supported POSIX, there was already a fork system call implemented in the kernel. However, the Windows NT call for fork, NtCreateProcess, is not directly compatible with the Linux syscall, so it has some special handling you can read more about under System Calls.

There are both user and kernel mode parts to WSL. Below is a diagram showing the basic Windows kernel and user modes alongside the WSL user and kernel modes.


The blue boxes represent kernel components and the green boxes are Pico Processes. The LX Session Manager Service handles the life cycle of Linux instances. LXCore and LXSS, lxcore.sys and lxss.sys respectively, translate the Linux syscalls into NT APIs.

Pico Processes

As you can see in the diagram above, init and /bin/bash are Pico processes. Pico processes work by having system calls and user mode exceptions dispatched to a paired driver. Pico processes and drivers allow Windows Subsystem for Linux to load executable ELF binaries into a Pico process’ address space and execute them on top of a Linux-compatible layer of system calls.

You can read even more in depth on this from the MSDN Pico Processes post.

System Calls

One of the first things I did in WSL was run a syscall fuzzer. I knew it would break, but it was interesting for the purposes of figuring out which syscalls had been implemented without looking at the source. This was how I realized PID and mount namespaces were already implemented in clone and unshare!


The WSL kernel drivers, lxss.sys and lxcore.sys, handle the Linux system call requests and translate them to the Windows NT kernel. None of this code came from the Linux kernel; it was all re-implemented by Windows engineers. This is truly mind blowing.

When a syscall is made from a Linux executable, it gets passed to lxcore.sys, which will translate it into the equivalent Windows NT call: for example, open to NtOpenFile and kill to NtTerminateProcess. If there is no mapping, then the Windows kernel mode driver will handle the request directly. This was the case for fork, which has lxcore.sys prepare the process to be copied and then call the appropriate Windows NT kernel APIs to create and copy the process.
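Conceptually, this is table-driven dispatch: look the syscall up in a mapping, call the NT equivalent if one exists, otherwise fall back to an emulation routine. A toy model follows; the real drivers are kernel-mode C, and every handler body below is a made-up stand-in, not the actual NT call:

```python
# Conceptual model of the lxcore.sys dispatch. Handler bodies are
# placeholders that only illustrate the shape of the translation.

def nt_open_file(path):            # stands in for NtOpenFile
    return f"NT handle for {path}"

def nt_terminate_process(pid):     # stands in for NtTerminateProcess
    return f"terminated {pid}"

def emulate_fork(state):           # no direct NT mapping: driver emulates it
    return dict(state)             # copy the "process state"

SYSCALL_TABLE = {
    "open": nt_open_file,
    "kill": nt_terminate_process,
    "fork": emulate_fork,
}

def dispatch(name, arg):
    handler = SYSCALL_TABLE.get(name)
    if handler is None:
        # Unimplemented syscalls are exactly what a fuzzer like the one
        # mentioned above would discover.
        raise NotImplementedError(f"syscall {name} not implemented")
    return handler(arg)

print(dispatch("open", "/etc/hosts"))   # → NT handle for /etc/hosts
```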

You can learn more from the MSDN System Calls post.

Launching Windows Executables

Since WSL runs Linux binaries natively (without a VM), it allows for some really fun interactions.

You can actually spawn Windows binaries from WSL. Linux ELF binaries get handled by lxcore.sys and lxss.sys as described above and Windows binaries go through the typical Windows userspace.


You can even launch Windows GUI apps this way! Imagine a Linux setup where you can launch PowerPoint without a VM… well, this is it!

Launching X Applications

You can also run X applications in WSL. You just need an X server. I used vcxsrv to try it out. I run i3 on all my Linux machines and tried it out in WSL like my awesome coworker Brian Ketelsen did in his blog post.


The hidpi is a little gross but if you play with the settings for the X server you can get it to a tolerable place. While I think this is neat for running whatever X applications you love, personally I am going to stick to using tmux as my entrypoint for WSL and using the Windows GUI apps I need vs. Linux X applications. This just feels less heavy (remember, I love minimal) and I haven’t come across an X application I can not live without for the time being. It’s nice to know X applications can work when I do need something though. 🙂

Pain Points

There are still quite a few pain points with using Windows Subsystem for Linux, but it’s important to remember it is still in its early days. So that you all have an idea of what to expect, I will list them here and we can watch how they improve in future builds. Each item links to its respective GitHub issue.

Keep in mind, I am using the default Windows console for everything. It has improved significantly since I played with it 2 years ago while we were working on porting the Docker client and daemon to Windows. 🙂

  • Copy/Paste: I am used to using ctrl-shift-v and ctrl-shift-c for copy/paste in a terminal, and of course those don’t work. From what I can tell, enter is copy… supa weird… and ctrl-v says it’s paste. Of course it doesn’t work for me. I can get paste to work by two-finger clicking in the term, but that does not work in vim and it’s a pretty weird interaction.
  • Scroll: This might just be a huge pet peeve of mine but the scroll should not be able to scroll down to nothing. This happens all the time by accident for me with the mouse and I have no idea why the terminal is rendering more space down there. Also typing after I have scrolled should return me back to the console place where I am typing. It unfortunately does not.
  • Files Slow: Saving a lot of files to disk is super slow. This applies for example to git clones, unpacking tarballs and more. Windows is not used to applications that save a lot of files so this is being worked on to be more performant. Obviously the unix way of “everything is a file” does not scale well when saving a lot of small files is super slow.
  • Sharing Files between Windows and WSL: Right now, like I pointed out, your Windows filesystem is mounted as /mnt/c in WSL. But you can’t quite yet have a git repo cloned in WSL and then also edit it from Windows. The VolFS file system (all file paths that don’t begin with /mnt, such as /home) is much closer to Linux standards. If you need to access files in VolFS, you can use bash.exe to copy them somewhere under /mnt/c, use Windows to do whatever on them, then use bash.exe to copy them back when you are done. You can also call Visual Studio Code on the file from WSL and that will work. 🙂

Setting Up a Windows Machine in a Reproducible Way

This was super important to me since I am used to Linux, where everything is scriptable and I have scripts for going from a blank machine to my exact perfect setup. A few people mentioned I should check out ways of making this possible on Windows.

Turns out it works super well! My gist for my machine lives on GitHub. There is another PowerShell script there for uninstalling a few programs. I love all things minimal, so I like to uninstall applications I will never use. I also learned some cool PowerShell commands for listing all your installed applications.

#--- List all installed programs --#
Get-ItemProperty HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* |
    Select-Object DisplayName, DisplayVersion, Publisher, InstallDate |
    Format-Table -AutoSize

#--- List all store-installed programs --#
Get-AppxPackage | Select-Object Name, PackageFullName, Version | Format-Table

I am going to be scripting more of this out in the future with regard to pinning applications to the taskbar in powershell and a bunch of other settings. Stay tuned.


Mastercard Internet Gateway Service: Hashing Design Flaw

Last year I found a design error in the MD5 version of the hashing method used by Mastercard Internet Gateway Service. The flaw allows modification of the transaction amount. They awarded me a bounty for reporting it. This year, they have switched to HMAC-SHA256, but this one also has a flaw (and no response from MasterCard).

If you just want to know what the bug is, just skip to the Flaw part.

What is MIGS?

When you pay on a website, the website owner usually just connects their system to an intermediate payment gateway (you will be forwarded to another website). This payment gateway then connects to several payment systems available in a country. For credit card payment, many gateways will connect to another gateway (one of them is MIGS) which works with many banks to provide the 3DSecure service.

How does it work?

The payment flow is usually like this if you use MIGS:

  1. You select items from an online store (merchant)
  2. You enter your credit card number on the website
  3. The card number, amount, etc. are then signed and returned to the browser, which will auto POST to the intermediate payment gateway
  4. The intermediate payment gateway will convert the format to the one requested by MIGS, sign it (with the MIGS key), and return it to the browser. Again this will auto POST, this time to the MIGS server.
  5. If 3D Secure is not requested, go to step 6. If 3D Secure is requested, MIGS will redirect the request to the bank that issued the card, the bank will ask for an OTP, and then it will generate HTML that will auto POST data to MIGS
  6. MIGS will return signed data to the browser, which will auto POST the data back to the intermediate gateway
  7. The intermediate gateway will check whether the data is valid based on the signature. If it is not valid, an error page will be generated
  8. Based on the MIGS response, the payment gateway will forward the status to the merchant

Notice that instead of communicating directly between servers, communications are done via the user’s browser, but everything is signed. In theory, if the signing and verification processes are correct, then everything will be fine. Unfortunately, this is not always the case.
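The “everything is signed” idea can be sketched as follows: the gateway signs the fields before handing them to the browser, and verifies the signature on whatever comes back, so tampering in transit is detectable. The helper names, the secret, and the use of HMAC-SHA256 here are illustrative assumptions, not MIGS’s actual scheme:

```python
import hashlib
import hmac

SECRET = b"gateway-shared-secret"   # hypothetical key shared by gateway and MIGS

def sign(fields: dict) -> str:
    """Canonicalize the fields and MAC them."""
    payload = "&".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()

def verify(fields: dict, signature: str) -> bool:
    return hmac.compare_digest(sign(fields), signature)

# Gateway signs the transaction before the browser auto-POSTs it onward.
txn = {"vpc_Amount": "10000", "vpc_MerchTxnRef": "order-42"}
sig = sign(txn)

# On the way back, any tampering in transit breaks verification.
assert verify(txn, sig)
tampered = dict(txn, vpc_Amount="1")
assert not verify(tampered, sig)
```

The security of the whole flow rests on the canonicalization step being unambiguous, which is exactly where the flaws below live.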

Flaw in the MIGS MD5 Hashing

This bug is extremely simple. The hashing method used is:

MD5(Secret + Data)

But it was not vulnerable to a hash length extension attack (some checks were done to prevent this). The data is created like this: take every query parameter that starts with vpc_, sort them by name, then concatenate the values only, without any delimiter. For example, if we have this data:

Name: Joe
Amount: 10000
Card: 1234567890123456


Sort it:

Amount: 10000
Card: 1234567890123456
Name: Joe

Get the values, and concatenate it:

100001234567890123456Joe

Note that if I change the parameters (injecting a parameter, here called B, whose name sorts between Amount and Card):

Name: Joe
Amount: 100
B: 00
Card: 1234567890123456

Sort it:

Amount: 100
B: 00
Card: 1234567890123456
Name: Joe

Get the values, and concatenate it:

100001234567890123456Joe
The MD5 value is still the same. So basically, when the data is sent to MIGS, we can just insert an additional parameter after the amount to eat the last digits, or in front of it to eat the first digits; the amount will be slashed, and you can pay for a 2,000 USD MacBook with 2 USD.
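The whole attack fits in a few lines. A minimal sketch, with a made-up secret and the simplified parameter names from the example above (real MIGS parameters carry the vpc_ prefix):

```python
import hashlib

SECRET = b"merchant-secret"   # placeholder for the real signing secret

def migs_md5(params: dict) -> str:
    """MD5(secret + values of the sorted params, concatenated with no delimiter)."""
    data = "".join(v for _, v in sorted(params.items()))
    return hashlib.md5(SECRET + data.encode()).hexdigest()

original = {"Amount": "10000", "Card": "1234567890123456", "Name": "Joe"}
# Inject a parameter sorting between Amount and Card to absorb two digits:
tampered = {"Amount": "100", "B": "00", "Card": "1234567890123456", "Name": "Joe"}

assert migs_md5(original) == migs_md5(tampered)   # same hash, 100x smaller amount
```

Because the values are concatenated with no delimiter, the hash cannot tell where one value ends and the next begins; the signature is over a string, not over the fields.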

Intermediate gateways and merchants can work around this bug by always checking that the amount returned by MIGS is indeed the same as the amount requested.

MasterCard rewarded me with 8500 USD for this bug.

Flaw in the HMAC-SHA256 Hashing

The new HMAC-SHA256 has a flaw that can be exploited if we can inject invalid values into intermediate payment gateways. I have tested that at least one payment gateway (Fusion Payments) has this bug. I was rewarded 500 USD from Fusion Payments. It may affect other payment gateways that connect to MIGS.

In the new version, they have added delimiters (&) between fields, added field names and not just values, and used HMAC-SHA256. For the same data above, the hashed data is:


We can’t shift anything, everything should be fine. But what happens if a value contains & or = or other special characters?

Reading this documentation, it says that:

Note: The values in all name value pairs should NOT be URL encoded for the purpose of hashing.

The “NOT” is my emphasis. It means that if we have these fields:

Amount: 100
Card: 1234
CVV: 555

It will be hashed as: HMAC(Amount=100&Card=1234&CVV=555)

And if we have this single field (the amount contains the & and =):

Amount: 100&Card=1234&CVV=555

It will be hashed as: HMAC(Amount=100&Card=1234&CVV=555)

The same as before. Still not really a problem at this point.
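The collision is easy to reproduce. A minimal sketch, assuming a made-up secret and the simplified field names from the example above (field order is preserved as given, and, per the documentation, no URL encoding is applied to the values):

```python
import hashlib
import hmac

SECRET = b"merchant-secret"   # placeholder for the real signing secret

def migs_hmac(fields: dict) -> str:
    """HMAC-SHA256 over name=value pairs joined with &, values NOT encoded."""
    payload = "&".join(f"{k}={v}" for k, v in fields.items())
    return hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()

three_fields = {"Amount": "100", "Card": "1234", "CVV": "555"}
one_field    = {"Amount": "100&Card=1234&CVV=555"}

# Both canonicalize to "Amount=100&Card=1234&CVV=555", so the MACs collide:
assert migs_hmac(three_fields) == migs_hmac(one_field)
```

HMAC-SHA256 itself is sound here; the flaw is that the canonicalization is not injective, so two different field sets map to the same signed string.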

Of course, I thought that maybe the documentation was wrong, maybe the values should be encoded. But I checked the behavior of the MIGS server, and the behavior is as documented. Maybe they don’t want to deal with different encodings (such as + instead of %20).

There doesn’t seem to be any problem with that, any invalid values will be checked by MIGS and will cause an error (for example invalid amount above will be rejected).

But I noticed that several payment gateways, instead of validating inputs on their server side, just sign everything and hand it to MIGS. It’s much easier to do only JavaScript checking on the client side, sign the data on the server side, and let MIGS decide whether the card number is correct, whether the CVV should be 3 or 4 digits, whether the expiration date is correct, etc. The logic is: MIGS will recheck the inputs, and will do it better.

On Fusion Payments, I found out that this is exactly what happened: they allow characters of any kind and any length to be sent for the CVV (only checked in JavaScript), and they will sign the request and send it to MIGS.


To exploit this, we need to construct a string that is both a valid request and a valid MIGS server response. We don’t need to contact the MIGS server at all; we are forcing the client to sign valid data for itself.

A basic request looks like this:


and a basic response from the server will look like this:


In the Fusion Payments case, the exploit is done by injecting into vpc_CardSecurityCode (the CVV):


The client/payment gateway will generate the correct hash for this string

Now we can post this data back to the client itself (without ever going to the MIGS server), but change it slightly so that the client will read the correct variables (most clients will only check for vpc_TxnResponseCode and vpc_TransactionNo):


Note that:

  1. This will be hashed the same as the previous data
  2. The client will ignore vpc_AccessCode and the value inside it
  3. The client will process vpc_TxnResponseCode, etc., and assume the transaction is valid
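The round trip above can be sketched as follows. The field values, the secret, and the canonicalization are illustrative assumptions; the point is only that the signature the gateway computes over its own “request” is byte-for-byte valid for the attacker’s reparsed “response”:

```python
import hashlib
import hmac

SECRET = b"gateway-secret"   # placeholder for the gateway's MIGS signing key

def migs_hmac(pairs) -> str:
    """HMAC-SHA256 over name=value pairs joined with &, values NOT encoded."""
    payload = "&".join(f"{k}={v}" for k, v in pairs)
    return hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()

# The gateway signs a "request" whose CVV we chose to contain response fields:
request_pairs = [
    ("vpc_AccessCode", "abc123"),
    ("vpc_CardSecurityCode", "555&vpc_TxnResponseCode=0&vpc_TransactionNo=9999"),
]
sig = migs_hmac(request_pairs)

# The same byte string, reparsed as a flat response (as the client would do):
response_pairs = [
    ("vpc_AccessCode", "abc123&vpc_CardSecurityCode=555"),
    ("vpc_TxnResponseCode", "0"),
    ("vpc_TransactionNo", "9999"),
]

# Identical canonical strings, so the request signature validates the fake response:
assert migs_hmac(request_pairs) == migs_hmac(response_pairs)
```

The client checks vpc_TxnResponseCode (0 means success) and vpc_TransactionNo, both of which the attacker chose, and ignores the mangled vpc_AccessCode.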

It could be said that this is a MIGS client bug, but the hashing method chosen by MasterCard allows this to happen; had the values been encoded, this bug would not be possible.

Response from MIGS

MasterCard did not respond to this bug in the HMAC-SHA256. When reporting it, I CC-ed several people who handled the previous bug. None of the emails bounced, and I did not receive even a “we are checking this” email from them. They also have my Facebook in case they need to contact me (from the interaction about the MD5 bug).

Some people are sneaky and will try to deny that they have received a bug report, so now, when reporting a bug, I put it in a password-protected post (that is why you can see several password-protected posts on this blog). So far there have been at least 3 views from MasterCard IP addresses (3 views that entered the password). They have to type in a password to read the report, so it is impossible for them to accidentally click it without reading it. I have nagged them every week for a reply.

My expectation was that they would try to warn everyone connecting to their system to check and filter for injections.

Flaws In Payment Gateways

As an extra note: even though payment gateways handle money, they are not as secure as people think. During my pentests I found several flaws in the design of the payment protocol on several intermediate gateways. Unfortunately, I can’t go into detail on this one (when I say “pentests”, it means something under NDA).

I also found flaws in the implementations, for example a hash length extension attack, an XML signature verification error, etc. One of the simplest bugs I found was in Fusion Payments. The first bug that I found was: they didn’t even check the signature from MIGS. That means we could just alter the data returned by MIGS and mark the transaction as successful, which only requires changing a single character from F (false) to 0 (success).

So basically we could enter any credit card number, get a failed response from MIGS, change it, and suddenly the payment is successful. This is a 20-million-USD company, and I got 400 USD for this bug. This is not the first payment gateway with this flaw; during my pentests I found this exact bug in another payment gateway. Despite the relatively low bounty, Fusion Payments is currently the only payment gateway I contacted that is very clear about its bug bounty program, and very quick in responding to my emails and fixing their bugs.


Payment gateways are not as secure as you think.

