Key Reinstallation Attack – WPA2 cracked

We discovered serious weaknesses in WPA2, a protocol that secures all modern protected Wi-Fi networks. An attacker within range of a victim can exploit these weaknesses using key reinstallation attacks (KRACKs). Concretely, attackers can use this novel attack technique to read information that was previously assumed to be safely encrypted. This can be abused to steal sensitive information such as credit card numbers, passwords, chat messages, emails, photos, and so on. The attack works against all modern protected Wi-Fi networks. Depending on the network configuration, it is also possible to inject and manipulate data. For example, an attacker might be able to inject ransomware or other malware into websites.

The weaknesses are in the Wi-Fi standard itself, and not in individual products or implementations. Therefore, any correct implementation of WPA2 is likely affected. To prevent the attack, users must update affected products as soon as security updates become available. Note that if your device supports Wi-Fi, it is most likely affected. During our initial research, we ourselves discovered that Android, Linux, Apple, Windows, OpenBSD, MediaTek, Linksys, and others are all affected by some variant of the attacks. For more information about specific products, consult the database of CERT/CC, or contact your vendor.

The research behind the attack will be presented at the Computer and Communications Security (CCS) conference, and at the Black Hat Europe conference. Our detailed research paper can already be downloaded.


As a proof-of-concept we executed a key reinstallation attack against an Android smartphone. In this demonstration, the attacker is able to decrypt all data that the victim transmits. For an attacker this is easy to accomplish, because our key reinstallation attack is exceptionally devastating against Linux and Android 6.0 or higher. This is because Android and Linux can be tricked into (re)installing an all-zero encryption key (see below for more info). When attacking other devices, it is harder to decrypt all packets, although a large number of packets can nevertheless be decrypted. In any case, the following demonstration highlights the type of information that an attacker can obtain when performing key reinstallation attacks against protected Wi-Fi networks:

The research [PDF], titled Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2, has been published by Mathy Vanhoef of KU Leuven and Frank Piessens of imec-DistriNet, Nitesh Saxena and Maliheh Shirvanian of the University of Alabama at Birmingham, Yong Li of Huawei Technologies, and Sven Schäge of Ruhr-Universität Bochum.
The team has successfully executed the key reinstallation attack against an Android smartphone, showing how an attacker can decrypt all data that the victim transmits over a protected Wi-Fi network. You can watch the proof-of-concept (PoC) video demonstration above.

“Decryption of packets is possible because a key reinstallation attack causes the transmit nonces (sometimes also called packet numbers or initialization vectors) to be reset to zero. As a result, the same encryption key is used with nonce values that have already been used in the past,” the researchers say.

The researchers say their key reinstallation attack could be exceptionally devastating against Linux and Android 6.0 or higher, because “Android and Linux can be tricked into (re)installing an all-zero encryption key (see below for more info).”
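The quoted nonce reset is the whole attack surface: reusing a (key, nonce) pair in a stream-style cipher mode makes the XOR of two ciphertexts equal the XOR of the two plaintexts. A minimal Python sketch, using a toy hash-based keystream as a stand-in for WPA2's real AES-CCMP cipher (the keys and packets are invented for illustration):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: fully determined by (key, nonce), like any
    # stream-style cipher mode. NOT real CCMP; illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"pairwise-transient-key"
p1 = b"GET /mail HTTP/1.1"
p2 = b"user=alice&otp=1234"

# A key reinstallation resets the transmit nonce, so two different
# packets end up encrypted under the same (key, nonce) pair:
c1 = encrypt(key, b"\x00", p1)
c2 = encrypt(key, b"\x00", p2)

# The eavesdropper XORs the ciphertexts: the keystream cancels out,
# leaving p1 XOR p2, which known-plaintext techniques can unravel.
xored = bytes(a ^ b for a, b in zip(c1, c2))
assert xored == bytes(a ^ b for a, b in zip(p1, p2))
```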

WPA2 Vulnerabilities and their Brief Details

The key management vulnerabilities in the WPA2 protocol discovered by the researchers have been tracked as:

  • CVE-2017-13077: Reinstallation of the pairwise encryption key (PTK-TK) in the four-way handshake.
  • CVE-2017-13078: Reinstallation of the group key (GTK) in the four-way handshake.
  • CVE-2017-13079: Reinstallation of the integrity group key (IGTK) in the four-way handshake.
  • CVE-2017-13080: Reinstallation of the group key (GTK) in the group key handshake.
  • CVE-2017-13081: Reinstallation of the integrity group key (IGTK) in the group key handshake.
  • CVE-2017-13082: Accepting a retransmitted Fast BSS Transition (FT) Reassociation Request and reinstalling the pairwise encryption key (PTK-TK) while processing it.
  • CVE-2017-13084: Reinstallation of the STK key in the PeerKey handshake.
  • CVE-2017-13086: Reinstallation of the Tunneled Direct-Link Setup (TDLS) PeerKey (TPK) key in the TDLS handshake.
  • CVE-2017-13087: Reinstallation of the group key (GTK) while processing a Wireless Network Management (WNM) Sleep Mode Response frame.
  • CVE-2017-13088: Reinstallation of the integrity group key (IGTK) while processing a Wireless Network Management (WNM) Sleep Mode Response frame.

The researchers discovered the vulnerabilities last year and sent out notifications to several vendors on 14 July, along with the United States Computer Emergency Readiness Team (US-CERT), which sent out a broad warning to hundreds of vendors on 28 August 2017.

“The impact of exploiting these vulnerabilities includes decryption, packet replay, TCP connection hijacking, HTTP content injection, and others,” the US-CERT warned. “Note that as protocol-level issues, most or all correct implementations of the standard will be affected.”



AES Encryption – Should it have been the winning algorithm?

Generally within the AES competition, you would assume the safest ciphers would be selected; however, for AES this was not the case. The two safest ciphers, Serpent and Twofish, did not win.

Here’s a table of the safety factor for each entry:

[Table: safety factor of each AES-competition entry]



There are 3 weak algorithms in terms of safety factor (Rijndael, RC6 and MARS).

Rijndael at this point should have been deselected, and RC6 with it.

If we look at speed, there is little difference between RC6, Rijndael, and Twofish, but Twofish has a huge advantage, offering a safety factor of 2.67.

Serpent is the clear winner in terms of safety factor, but it is the slowest.

Twofish on paper appears to be the best overall candidate, as it achieves a safety factor of greater than 2 with fast speeds.
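The "safety factor" used above is simply the ratio between a cipher's full round count and the deepest round reached by the best known attack. A small sketch of the arithmetic, using the commonly cited competition-era figures for Twofish (16 rounds, with attacks reaching about 6 rounds at the time):

```python
def safety_factor(total_rounds: int, attacked_rounds: int) -> float:
    # Full rounds of the cipher divided by the most rounds that any
    # known attack can break: higher means a larger security margin.
    return total_rounds / attacked_rounds

# Twofish: 16 rounds, best attacks at the time reached ~6 rounds.
print(round(safety_factor(16, 6), 2))  # prints 2.67, the figure above
```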

Instead we got the weakest cipher, Rijndael, and you have to wonder why the NSA/NIST would select the weakest entrant.

If NIST wanted speed, then even RC6 would have offered fast speeds combined with a slightly better safety factor than Rijndael. However, the AES winner was the weakest entrant; now why is that?

TLS 1.3 – Discussion on Issues with Session Tickets for TLS 1.2

More specifically, TLS 1.2 Session Tickets.

Session Tickets, specified in RFC 5077, are a technique to resume TLS sessions by storing key material encrypted on the clients. In TLS 1.2 they speed up the handshake from two to one round-trips.

Unfortunately, a combination of deployment realities and three design flaws makes them the weakest link in modern TLS, potentially turning limited key compromise into passive decryption of large amounts of traffic.

How Session Tickets work

A modern TLS 1.2 connection starts like this:

  • The client sends the supported parameters;
  • the server chooses the parameters and sends the certificate along with the first half of the Diffie-Hellman key exchange;
  • the client sends the second half of the Diffie-Hellman exchange, computes the session keys and switches to encrypted communication;
  • the server computes the session keys and switches to encrypted communication.

This involves two round-trips between client and server before the connection is ready for application data.

[Diagram: a full TLS 1.2 handshake, taking two round-trips]

The Diffie-Hellman key exchange is what provides Forward Secrecy: even if the attacker obtains the certificate key and a connection transcript after the connection ended they can’t decrypt the data, because they don’t have the ephemeral session key.

Forward Secrecy also translates into security against a passive attacker. An attacker that can wiretap but not modify the traffic has the same capabilities of an attacker that obtains a transcript of the connection after it’s over. Preventing passive attacks is important because they can be carried out at scale with little risk of detection.
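The forward-secrecy argument can be made concrete with a toy Diffie-Hellman exchange. The tiny Mersenne prime and generator here are purely for readability, not a real TLS group:

```python
import secrets

# Toy finite-field Diffie-Hellman. An eavesdropper records only A and B;
# the shared key requires one of the ephemeral secrets a or b, which
# both sides erase after the connection.
p = 2**127 - 1   # small demo prime, NOT a standardized DH group
g = 5

a = secrets.randbelow(p - 2) + 1      # client's ephemeral secret
b = secrets.randbelow(p - 2) + 1      # server's ephemeral secret
A, B = pow(g, a, p), pow(g, b, p)     # the only values on the wire

# Both sides derive the same session key; once a and b are destroyed,
# the transcript (A, B) alone no longer yields it.
assert pow(B, a, p) == pow(A, b, p)
```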

Session Tickets reduce the overhead of the handshake. When a client supports Session Tickets, the server will encrypt the session key with a key only the server has, the Session Ticket Encryption Key (STEK), and send it to the client. The client holds on to that encrypted session key, called a ticket, and to the corresponding session key. The server forgets about the client, allowing stateless deployments.

The next time the client wants to connect to that server it sends the ticket along with the initial parameters. If the server still has the STEK it will decrypt the ticket, extract the session key, and start using it. This establishes a resumed connection and saves a round-trip by skipping the key negotiation. Otherwise, client and server fallback to a normal handshake.
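The stateless round-trip described above can be sketched in a few lines. This is a toy model of RFC 5077-style ticketing, assuming a hash-based stream and HMAC in place of whatever real AEAD a deployment would use; the field sizes and layout are invented for illustration:

```python
import hashlib, hmac, os

# The long-lived Session Ticket Encryption Key, shared by all servers
# in the deployment. This is exactly the key whose compromise the
# following sections worry about.
STEK = os.urandom(32)

def _stream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy keystream generator (stand-in for a real cipher).
    out, i = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + i.to_bytes(4, "big")).digest()
        i += 1
    return out[:n]

def make_ticket(session_key: bytes) -> bytes:
    # Wrap the session key under the STEK; the server keeps no state.
    nonce = os.urandom(16)
    body = bytes(a ^ b for a, b in zip(session_key, _stream(STEK, nonce, len(session_key))))
    tag = hmac.new(STEK, nonce + body, hashlib.sha256).digest()
    return nonce + body + tag

def open_ticket(ticket: bytes) -> bytes:
    # On resumption: verify and unwrap, recovering the session key.
    nonce, body, tag = ticket[:16], ticket[16:-32], ticket[-32:]
    assert hmac.compare_digest(tag, hmac.new(STEK, nonce + body, hashlib.sha256).digest())
    return bytes(a ^ b for a, b in zip(body, _stream(STEK, nonce, len(body))))

sk = os.urandom(32)
ticket = make_ticket(sk)           # sent to the client, server forgets it
assert open_ticket(ticket) == sk   # the STEK alone recovers the key
```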

[Diagram: a resumed TLS 1.2 handshake, taking one round-trip]

For a recap you can also watch the first part of my 33c3 talk.

Fatal flaw #1

The first problem with 1.2 Session Tickets is that resumed connections don’t perform any Diffie-Hellman exchange, so they don’t offer Forward Secrecy against the compromise of the STEK. That is, an attacker that obtains a transcript of a resumed connection and the STEK can decrypt the whole conversation.

The specification solves this by stating that STEKs must be rotated and destroyed periodically. I now believe this to be extremely unrealistic.

Session Tickets were expressly designed for stateless server deployments, implying scenarios where there are multiple servers serving the same site without shared state. These servers must also share STEKs or resumption wouldn’t work across them.

As soon as a key requires distribution it’s exposed to an array of possible attacks that an ephemeral key in memory doesn’t face. It has to be generated somewhere, and transmitted somehow between the machines, and that transmission might be recorded or persisted. Twitter wrote about how they faced and approached exactly this problem.

Moreover, an attacker that compromises a single machine can now decrypt traffic flowing through other machines, potentially violating security assumptions.

Finally, if a key is not properly rotated it allows an attacker to decrypt past traffic upon compromise.

TLS 1.3 solves this by supporting Diffie-Hellman along with Session Tickets, but TLS 1.2 was not yet structured to support one round trip Diffie-Hellman (because of the legacy static RSA structure).

These observations are not new, Adam Langley wrote about them in 2013 and TLS 1.3 was indeed built to address them.

Fatal flaw #2

Session Tickets contain the session keys of the original connection, so a compromised Session Ticket lets the attacker decrypt not only the resumed connection, but also the original connection.

This potentially degrades the Forward Secrecy of non-resumed connections, too.

The problem is exacerbated when a session is regularly resumed, and the same session keys keep getting re-wrapped into new Session Tickets (a resumed connection can in turn generate a Session Ticket), possibly with different STEKs over time. The same session key can stay in use for weeks or even months, weakening Forward Secrecy.

TLS 1.3 addresses this by effectively hashing (a one-way function) the current keys to obtain the keys for the resumed connection. While hashing is a pretty obvious solution, in TLS 1.2 there was no structured key schedule, so there was no easy agnostic way to specify how keys should be derived for each different cipher suite.
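In spirit, the TLS 1.3 fix is a one-way ratchet: derive the resumption keys by hashing the current secret, so a leaked ticket cannot be walked backwards to the original connection's keys. A hedged sketch (the label and hash are illustrative, not the actual TLS 1.3 key schedule):

```python
import hashlib

def next_secret(current: bytes) -> bytes:
    # One-way step: easy to compute forwards, infeasible to invert.
    return hashlib.sha256(b"resumption" + current).digest()

s0 = b"\x01" * 32          # secret protecting the original connection
s1 = next_secret(s0)       # secret wrapped into the ticket
assert s1 != s0            # compromising s1 does not reveal s0
```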

Fatal flaw #3

The NewSessionTicket message containing the Session Ticket is sent from the server to the client just before the ChangeCipherSpec message.

Client                                               Server

ClientHello                  -------->
                                                 ServerHello
                                                 Certificate
                                           ServerKeyExchange
                             <--------       ServerHelloDone
ClientKeyExchange
[ChangeCipherSpec]
Finished                     -------->
                                            NewSessionTicket
                                          [ChangeCipherSpec]
                             <--------              Finished
Application Data             <------->      Application Data

The ChangeCipherSpec message enables encryption with the session keys and the negotiated cipher, so everything exchanged during the handshake before that message is sent in plaintext.

This means that Session Tickets are sent in the clear at the beginning of the original connection.

(╯°□°)╯︵ ┻━┻

An attacker with the STEK doesn’t need to wait until session resumption is attempted. Session Tickets containing the current session keys are sent at the beginning of every connection that merely supports Session Tickets. In plaintext on the wire, ready to be decrypted with the STEK, fully bypassing Diffie-Hellman.

TLS 1.3 solves this by… not sending them in plaintext. There is no strong reason I can find for why TLS 1.2 wouldn’t wait until after the ChangeCipherSpec to send the NewSessionTicket. The two messages are sent back to back in the same flight. Someone suggested it might be not to complicate implementations that do not expect encrypted handshake messages (except Finished).

1 + 2 + 3 = dragnet surveillance

The unfortunate combination of these three well known flaws is that an attacker that obtains the Session Ticket Encryption Key can passively decrypt all connections that support Session Tickets, resumed and not.

It’s grimly similar to a key escrow system: just before switching to encrypted communication, the session keys are sent on the wire encrypted with a (somewhat) fixed key.

Passive attacks are the enablers of dragnet surveillance, what HTTPS aims to prevent, and the same actors that are known to engage in dragnet surveillance have specialized in surgical key extraction attacks.

There is no proof that these attacks are currently performed and the aim of this post is not to spread FUD about TLS, which is still the most impactful security measure on the Internet today despite all its defects. However, war-gaming the most effective attacks is a valuable exercise to ensure we focus on improving the important parts, and Session Tickets are often the single weakest link in TLS, far ahead of the CA system that receives so much more attention.

Session Tickets in the real world

The likelihood and impact of the described attacks change depending on how Session Tickets are deployed.

Drew Springall et al. made a good survey in “Measuring the Security Harm of TLS Crypto Shortcuts”, revealing how many networks neglect to regularly rotate STEKs. Tim Taubert wrote about what popular software stacks do regarding key rotation. The landscape is bleak.

In some cases, the same STEK can be used across national borders, putting it under multiple jurisdictional threats. A single compromised machine then enables an attacker to decrypt traffic passively across the whole world by simply exfiltrating a short key every rotation period.

Mitigating this by using different STEKs across geographical locations involves a trade-off, since it disables session resumption for clients roaming across them. It does however increase the cost for what appears to be the easiest dragnet surveillance avenue at this time, which is always a good result.

In conclusion, I can’t wait for TLS 1.3.


Cheat sheet: How to become a cybersecurity pro

Executive summary

  • Why is there an increased demand for cybersecurity professionals? Cybercrime has exploded in the past couple of years, with major ransomware attacks such as WannaCry and Petya putting enterprises’ data at risk. To protect their information and that of their clients, companies across all industries are seeking cyber professionals to secure their networks.
  • What are some of the cybersecurity job roles? A career in cybersecurity can take the form of various roles, including penetration tester, chief information security officer (CISO), security engineer, incident responder, security software developer, security auditor, or security consultant.
  • What skills are required to work in cybersecurity? The skills required to work in cybersecurity vary depending on the position and company, but generally may include penetration testing, risk analysis, and security assessment. Certifications, including Certified in Risk and Information Systems Control (CRISC), Certified Information Security Manager (CISM), and Certified Information Systems Security Professional (CISSP) are also in demand, and can net you a higher salary in the field.
  • Where are the hottest markets for cybersecurity jobs? Top companies including Apple, Lockheed Martin, General Motors, Capital One, and Cisco are all hiring cybersecurity professionals. Industries such as healthcare, education, and government are most likely to suffer a cyberattack, which will probably lead to an increase in the number of IT security jobs in these sectors.
  • What is the average salary of a cybersecurity professional? The average salary for a cybersecurity professional depends on the position. For example, information security analysts earn a median salary of $92,600 per year, according to the US Bureau of Labor Statistics. Meanwhile, CISOs earn a median salary of $212,462. Salaries are significantly higher in certain cities, such as San Francisco and New York.
  • What are typical interview questions for a career in cybersecurity? Questions can vary depending on the position and what the specific company is looking for, according to Forrester analyst Jeff Pollard. For entry and early career roles, more technical questions should be expected. As you move up the ranks, the questions may become more about leadership, running a program, conflict resolution, and budgeting.
  • Where can I find resources for a career in cybersecurity? ISACA, (ISC)², ISSA, and The SANS Institute are national and international organizations where you can seek out information about the profession as well as certification and training options. A number of universities and online courses also offer cybersecurity-related degrees, certifications, and prep programs.


Why is there an increased demand for cybersecurity professionals?

Cybercrime has exploded in the past couple of years, with major ransomware attacks such as WannaCry and Petya putting enterprises’ data at risk. The rise of the Internet of Things (IoT) has also opened up new threat vectors. To protect their information and that of their clients, companies across all industries are seeking cyber professionals to secure their networks.

However, many enterprises face difficulties filling these positions: 55% of US organizations reported that open cyber positions take at least three months to fill, while 32% said they take six months or more, according to an ISACA report. And 27% of companies said they are unable to fill cybersecurity positions at all.

Cybersecurity remains a relatively new field compared to other computer sciences, so a lack of awareness is part of the reason for the talent shortage, according to Lauren Heyndrickx, who is now CISO at JCPenney. Misconceptions about what a cybersecurity job actually entails are common, and might be part of the reason few women and minorities go into the field, she added. However, enrollment in computer science programs has increased tremendously in the past couple years, and many schools are adding cybersecurity majors and concentrations, said Rachel Greenstadt, associate professor of computer science at Drexel University.


What are some of the cybersecurity job roles?

Cybersecurity jobs span a number of different roles with a variety of job functions, depending on their title as well as an individual company’s needs.

In-demand roles include penetration testers, who go into a system or network, find vulnerabilities, and either report them to the organization or patch them themselves. Cybersecurity engineers, who often come from a technical background within development, dive into code to determine flaws and how to strengthen an organization’s security posture. Security software developers integrate security into applications software during the design and development process.

Computer forensics experts conduct security incident investigations, accessing and analyzing evidence from computers, networks, and data storage devices. Security consultants act as advisors, designing and implementing the strongest possible security solutions based on the needs and threats facing an individual company.

At the top of the chain, CISOs helm a company’s cybersecurity strategy, and must continuously adapt to battle the latest threats.

What skills are required to work in cybersecurity?

The skills required to work in cybersecurity vary depending on what position you enter and what company you work for. Generally, cybersecurity workers are responsible for tasks such as penetration testing (the practice of testing a computer system, network, or web application to find vulnerabilities that an attacker could exploit), risk analysis (the process of defining and analyzing the cyber threats to a business, and aligning tech-related objectives to business objectives), and security assessment (a process that identifies the current security posture of an information system or organization, and offers recommendations for improvement).

SEE: How to build a successful career in cybersecurity (free PDF) (TechRepublic)

Certifications in cybersecurity teach these and other valuable job skills, and often lead to higher salaries in the field. Those such as Certified in Risk and Information Systems Control (CRISC), Certified Information Security Manager (CISM), and Certified Information Systems Security Professional (CISSP) are currently in high demand.

Cybersecurity jobs don’t necessarily require developer skills or a degree, Pollard said. “You don’t need a bachelor’s degree in a specific field to be great at security; in fact, you don’t necessarily need [a degree] at all,” according to Pollard. “Recognize that cybersecurity is a skill, and teach people the profession of enterprise security. That means treating it like an apprenticeship or training program.”

Cybersecurity is an interdisciplinary field that requires knowledge in tech, human behavior, finance, risk, law, and regulation. Many people in the cybersecurity workforce enter the field from other careers that tap these skills, and translate them to cyber.

“If you have security skills, there are plenty of opportunities available for you,” according to Pollard. “If you have an interest in security and perhaps have a nontraditional background but are willing to learn, opportunities are certainly open from that perspective as well.”


CCleaner hacked to distribute Malware

If you have downloaded or updated the CCleaner application on your computer between August 15 and September 12 of this year from its official website, then pay attention—your computer has been compromised.

Avast and Piriform have both confirmed that the Windows 32-bit version of CCleaner v5.33.6162 and CCleaner Cloud v1.07.3191 were affected by the malware.

Detected on 13 September, the malicious version of CCleaner contains a multi-stage malware payload that steals data from infected computers and sends it to the attacker’s remote command-and-control servers.


Moreover, the unknown hackers signed the malicious installation executable (v5.33) using a valid digital signature issued to Piriform by Symantec, and used a Domain Generation Algorithm (DGA) so that if the attackers’ server went down, the DGA could generate new domains to receive and send stolen information.

“All of the collected information was encrypted and encoded by base64 with a custom alphabet,” says Paul Yung, V.P. of Products at Piriform. “The encoded information was subsequently submitted to an external IP address 216.126.x.x (this address was hardcoded in the payload, and we have intentionally masked its last two octets here) via a HTTPS POST request.”
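Base64 with a custom alphabet, as described in the quote, is typically decoded by translating the custom alphabet back to the standard one first. A small Python sketch; the custom alphabet below is a made-up placeholder, not the one the CCleaner payload actually used:

```python
import base64

# Standard base64 alphabet and a hypothetical custom one (here simply
# the letter cases swapped) -- NOT the real payload's alphabet.
STD    = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
CUSTOM = b"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789+/"

def decode_custom_b64(data: bytes) -> bytes:
    # Map each custom-alphabet character back to its standard
    # counterpart, then decode as ordinary base64.
    return base64.b64decode(data.translate(bytes.maketrans(CUSTOM, STD)))

# Round-trip check: encode normally, re-map into the custom alphabet,
# then decode with the helper above.
encoded = base64.b64encode(b"stolen-host-info").translate(bytes.maketrans(STD, CUSTOM))
assert decode_custom_b64(encoded) == b"stolen-host-info"
```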

The malicious software was programmed to collect a large number of user data, including:

  • Computer name
  • List of installed software, including Windows updates
  • List of all running processes
  • IP and MAC addresses
  • Additional information like whether the process is running with admin privileges and whether it is a 64-bit system.

How to Remove Malware From Your PC

According to the Talos researchers, around 5 million people download CCleaner (or Crap Cleaner) each week, which indicates that more than 20 million people could have been infected with the malicious version of the app.

“The impact of this attack could be severe given the extremely high number of systems possibly affected. CCleaner claims to have over 2 billion downloads worldwide as of November 2016 and is reportedly adding new users at a rate of 5 million a week,” Talos said.

However, Piriform estimated that up to 3 percent of its users (up to 2.27 million people) were affected by the malicious installation.

Affected users are strongly recommended to update their CCleaner software to version 5.34 or higher, in order to protect their computers from being compromised. The latest version is available for download here.


Windows for Linux Nerds

You can learn a lot more about this from the Windows Subsystem for Linux Overview. I will go over some of the parts I found to be the most interesting.

The Windows NT kernel was designed from the beginning to support running POSIX, OS/2, and other subsystems. In the early days, these were just user-mode programs that would interact with ntdll to perform system calls. Since the Windows NT kernel supported POSIX there was already a fork system call implemented in the kernel. However, the Windows NT call for fork, NtCreateProcess, is not directly compatible with the Linux syscall, so it has some special handling you can read more about under System Calls.

There are both user and kernel mode parts to WSL. Below is a diagram showing the basic Windows kernel and user modes alongside the WSL user and kernel modes.


The blue boxes represent kernel components and the green boxes are Pico Processes. The LX Session Manager Service handles the life cycle of Linux instances. LXCore and LXSS, lxcore.sys and lxss.sys respectively, translate the Linux syscalls into NT APIs.

Pico Processes

As you can see in the diagram above, init and /bin/bash are Pico processes. Pico processes work by having system calls and user mode exceptions dispatched to a paired driver. Pico processes and drivers allow Windows Subsystem for Linux to load executable ELF binaries into a Pico process’ address space and execute them on top of a Linux-compatible layer of system calls.

You can read even more in depth on this from the MSDN Pico Processes post.

System Calls

One of the first things I did in WSL was run a syscall fuzzer. I knew it would break but it was interesting for the purposes of figuring out which syscalls had been implemented without looking at the source. This was how I realized PID and mount namespaces were already implemented in clone and unshare!


The WSL kernel drivers, lxss.sys and lxcore.sys, handle the Linux system call requests and translate them to the Windows NT kernel. None of this code came from the Linux kernel, it was all re-implemented by Windows engineers. This is truly mind blowing.

When a syscall is made from a Linux executable it gets passed to lxcore.sys, which will translate it into the equivalent Windows NT call. For example, open to NtOpenFile and kill to NtTerminateProcess. If there is no mapping then the Windows kernel mode driver will handle the request directly. This was the case for fork, which has lxcore.sys prepare the process to be copied and then call the appropriate Windows NT kernel APIs to create and copy the process.
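The open/kill mappings above can be pictured as a dispatch table. This sketch is purely illustrative of the idea, not actual lxcore.sys internals:

```python
# Illustrative translation table in the spirit of what lxcore.sys
# implements: each Linux syscall maps to the closest NT kernel API,
# or is handled inside the driver when no mapping exists (e.g. fork).
LINUX_TO_NT = {
    "open": "NtOpenFile",
    "kill": "NtTerminateProcess",
    "fork": None,  # no direct NT equivalent; emulated by the driver
}

def dispatch(syscall: str) -> str:
    nt_call = LINUX_TO_NT.get(syscall)
    if nt_call is None:
        return f"handled directly by lxcore.sys ({syscall})"
    return f"translated to {nt_call}"

assert dispatch("open") == "translated to NtOpenFile"
```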

You can learn more from the MSDN System Calls post.

Launching Windows Executables

Since WSL allows for running Linux binaries natively (without a VM), this allows for some really fun interactions.

You can actually spawn Windows binaries from WSL. Linux ELF binaries get handled by lxcore.sys and lxss.sys as described above and Windows binaries go through the typical Windows userspace.


You can even launch Windows GUI apps as well this way! Imagine a Linux setup where you can launch PowerPoint without a VM…. well this is it!!

Launching X Applications

You can also run X applications in WSL. You just need an X server. I used vcxsrv to try it out. I run i3 on all my Linux machines and tried it out in WSL like my awesome coworker Brian Ketelsen did in his blog post.


The hidpi is a little gross but if you play with the settings for the X server you can get it to a tolerable place. While I think this is neat for running whatever X applications you love, personally I am going to stick to using tmux as my entrypoint for WSL and using the Windows GUI apps I need vs. Linux X applications. This just feels less heavy (remember, I love minimal) and I haven’t come across an X application I can not live without for the time being. It’s nice to know X applications can work when I do need something though. 🙂

Pain Points

There are still quite a few pain points with using Windows Subsystem for Linux, but it’s important to remember it is still in its beginnings. So that you all have an idea of what to expect, I will list them here and we can watch how they improve in future builds. Each item links to its respective GitHub issue.

Keep in mind, I am using the default Windows console for everything. It has improved significantly since I played with it 2 years ago while we were working on porting the Docker client and daemon to Windows. 🙂

  • Copy/Paste: I am used to using ctrl-shift-v and ctrl-shift-c for copy paste in a terminal and of course those don’t work. From what I can tell, enter is copy… supa weird… and ctrl-v says it’s paste. Of course it doesn’t work for me. I can get paste to work by two-finger clicking in the term, but that does not work in vim and it’s a pretty weird interaction.
  • Scroll: This might just be a huge pet peeve of mine but the scroll should not be able to scroll down to nothing. This happens all the time by accident for me with the mouse and I have no idea why the terminal is rendering more space down there. Also typing after I have scrolled should return me back to the console place where I am typing. It unfortunately does not.
  • Files Slow: Saving a lot of files to disk is super slow. This applies for example to git clones, unpacking tarballs and more. Windows is not used to applications that save a lot of files so this is being worked on to be more performant. Obviously the unix way of “everything is a file” does not scale well when saving a lot of small files is super slow.
  • Sharing Files between Windows and WSL: Right now, like I pointed out, your Windows filesystem is mounted as /mnt/c in WSL. But you can’t quite yet have a git repo cloned in WSL and then also edit it from Windows. The VolFS file system (all file paths that don’t begin with /mnt, such as /home) is much closer to Linux standards. If you need to access files in VolFS, you can use bash.exe to copy them somewhere under /mnt/c, use Windows to do whatever on it, then use bash.exe to copy them back when you are done. You can also call Visual Studio Code on the file from WSL and that will work. 🙂

Setting Up a Windows Machine in a Reproducible Way

This was super important to me since I am used to Linux, where everything is scriptable and I have scripts for starting from a blank machine to my exact perfect setup. A few people mentioned tools I should check out for making this possible on Windows.

Turns out it works super well! My gist for my machine lives on GitHub. There is another powershell script there for uninstalling a few programs. I love all things minimal so I like to uninstall applications I will never use. I also learned some cool powershell commands for listing all your installed applications.

#--- List all installed programs --#
Get-ItemProperty HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* |
    Select-Object DisplayName, DisplayVersion, Publisher, InstallDate |
    Format-Table -AutoSize

#--- List all store-installed programs --#
Get-AppxPackage | Select-Object Name, PackageFullName, Version | Format-Table

I am going to be scripting more of this out in the future with regard to pinning applications to the taskbar in powershell and a bunch of other settings. Stay tuned.


Mastercard Internet Gateway Service: Hashing Design Flaw

Last year I found a design error in the MD5 version of the hashing method used by the Mastercard Internet Gateway Service. The flaw allows modification of the transaction amount. They awarded me a bounty for reporting it. This year, they have switched to HMAC-SHA256, but that version also has a flaw (and no response from MasterCard).

If you just want to know what the bug is, just skip to the Flaw part.

What is MIGS?

When you pay on a website, the website owner usually just connects their system to an intermediate payment gateway (you will be forwarded to another website). This payment gateway then connects to the various payment systems available in a country. For credit card payments, many gateways will connect to another gateway (one of them is MIGS) which works with many banks to provide the 3DSecure service.

How does it work?

The payment flow is usually like this if you use MIGS:

  1. You select items from an online store (merchant)
  2. You enter your credit card number on the website
  3. The card number, amount, etc. are then signed and returned to the browser, which will auto-POST to the intermediate payment gateway
  4. The intermediate payment gateway will convert the format to the one requested by MIGS, sign it (with the MIGS key), and return it to the browser. Again this will auto-POST, this time to the MIGS server.
  5. If 3DSecure is not requested, go to step 6. If 3DSecure is requested, MIGS will redirect the request to the bank that issued the card, the bank will ask for an OTP, and then it will generate HTML that auto-POSTs the data back to MIGS
  6. MIGS will return signed data to the browser, which will auto-POST the data back to the intermediate gateway
  7. The intermediate gateway will check whether the data is valid based on the signature. If it is not valid, an error page will be generated
  8. Based on the MIGS response, the payment gateway will forward the status to the merchant

Notice that instead of communicating directly between servers, all communication is done via the user’s browser, but everything is signed. In theory, if the signing and verification processes are correct, then everything will be fine. Unfortunately, this is not always the case.

Flaw in the MIGS MD5 Hashing

This bug is extremely simple. The hashing method used is:

MD5(Secret + Data)

It is, however, not vulnerable to a hash length extension attack (some checks were done to prevent this). The data is constructed like this: take every query parameter that starts with vpc_, sort them by name, then concatenate the values only, without any delimiter. For example, if we have this data:

Name: Joe
Amount: 10000
Card: 1234567890123456


Sort it:

Amount: 10000
Card: 1234567890123456
Name: Joe

Get the values, and concatenate them:

100001234567890123456Joe

Note that if I change the parameters, for example slashing the amount and adding a padding parameter that sorts right after it:

Amount: 100
AmountPad: 00
Card: 1234567890123456
Name: Joe

Sort it (it is already in sorted order):

Amount: 100
AmountPad: 00
Card: 1234567890123456
Name: Joe

Get the values, and concatenate them:

100001234567890123456Joe
The MD5 value is still the same. So basically, when the data is being sent to MIGS, we can just insert an additional parameter after the amount to eat the last digits, or in front of it to eat the first digits; the amount will be slashed, and you can pay for a 2000 USD MacBook with 2 USD.
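The whole attack can be sketched in a few lines of Python. The secret key and the exact parameter names below are made up for illustration; only the sort-and-concatenate scheme comes from the description above.

```python
import hashlib

SECRET = b"example-secret"  # placeholder; the real key is known only to the gateway

def migs_md5(params: dict) -> str:
    """MD5(Secret + Data): sorted vpc_ values concatenated with no delimiter."""
    data = "".join(value for _, value in sorted(params.items()))
    return hashlib.md5(SECRET + data.encode()).hexdigest()

legit = {"vpc_Amount": "10000", "vpc_Card": "1234567890123456", "vpc_Name": "Joe"}

# Slash the amount and add a parameter that sorts right after vpc_Amount
# to "eat" the missing digits; the concatenated bytes are identical.
forged = {"vpc_Amount": "100", "vpc_AmountPad": "00",
          "vpc_Card": "1234567890123456", "vpc_Name": "Joe"}

assert migs_md5(legit) == migs_md5(forged)  # same signature, different amount
```

Because nothing delimits the values, the verifier cannot tell where one parameter ends and the next begins, which is exactly what the delimiters in the later HMAC-SHA256 version were meant to fix.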

Intermediate gateways and merchants can work around this bug by always checking that the amount returned by MIGS is indeed the same as the amount requested.

MasterCard rewarded me with 8500 USD for this bug.

Flaw in the HMAC-SHA256 Hashing

The new HMAC-SHA256 has a flaw that can be exploited if we can inject invalid values into intermediate payment gateways. I have verified that at least one payment gateway (Fusion Payments) has this bug. I was rewarded 500 USD by Fusion Payments. It may affect other payment gateways that connect to MIGS.

In the new version, they have added delimiters (&) between fields, added field names and not just values, and used HMAC-SHA256. For the same data above, the hashed data is:

Amount=10000&Card=1234567890123456&Name=Joe
We can’t shift anything, everything should be fine. But what happens if a value contains & or = or other special characters?

Reading this documentation, it says that:

Note: The values in all name value pairs should NOT be URL encoded for the purpose of hashing.

The “NOT” is my emphasis. It means that if we have these fields:

Amount: 100
Card: 1234
CVV: 555

It will be hashed as: HMAC(Amount=100&Card=1234&CVV=555)

And if we have this (the amount contains the & and =):

Amount: 100&Card=1234
CVV: 555

It will be hashed as: HMAC(Amount=100&Card=1234&CVV=555)

The same as before. Still not really a problem at this point.

Of course, I thought that maybe the documentation was wrong, maybe the values should be encoded. But I checked the behavior of the MIGS server, and it behaves as documented. Maybe they don’t want to deal with different encodings (such as + instead of %20).

There doesn’t seem to be any problem with that: invalid values will be checked by MIGS and will cause an error (for example, the invalid amount above will be rejected).

But I noticed that several payment gateways, instead of validating inputs on their server side, just sign everything and hand it to MIGS. It’s much easier to do only JavaScript checking on the client side, sign the data on the server side, and let MIGS decide whether the card number is correct, whether the CVV should be 3 or 4 digits, whether the expiration date is correct, etc. The logic is: MIGS will recheck the inputs, and will do it better.

On Fusion Payments, I found out that this is exactly what happens: they allow characters of any kind and length to be sent as the CVV (checked only in JavaScript), then they sign the request and send it to MIGS.


To exploit this we need to construct a string that is both a valid request and a valid MIGS server response. We don’t need to contact the MIGS server at all; we are forcing the client to sign valid data for itself.

A basic request looks like this (illustrative values):

vpc_AccessCode=abc123&vpc_Amount=10000&vpc_CardSecurityCode=555

and a basic response from the server will look like this:

vpc_Amount=10000&vpc_TransactionNo=123456&vpc_TxnResponseCode=0
In Fusion Payments’ case, the exploit is done by injecting into vpc_CardSecurityCode (the CVV). Entering a CVV of 555&vpc_TransactionNo=123456&vpc_TxnResponseCode=0 makes the string to be signed (illustrative values):

vpc_AccessCode=abc123&vpc_Amount=10000&vpc_CardSecurityCode=555&vpc_TransactionNo=123456&vpc_TxnResponseCode=0

The client/payment gateway will generate the correct hash for this string.

Now we can post this data back to the client itself (without ever going to the MIGS server), but we change it slightly so that the client will read the correct variables (most clients will only check vpc_TxnResponseCode and vpc_TransactionNo):

vpc_AccessCode=abc123%26vpc_Amount%3D10000%26vpc_CardSecurityCode%3D555&vpc_TransactionNo=123456&vpc_TxnResponseCode=0

(here the original request fields have been folded, URL-encoded, into the vpc_AccessCode value, so the unencoded string that gets hashed is unchanged)
Note that:

  1. This will hash to the same value as the previous data
  2. The client will ignore vpc_AccessCode and the value inside it
  3. The client will process vpc_TxnResponseCode, etc., and assume the transaction is valid

It can be said that this is a MIGS client bug, but the hashing method chosen by MasterCard allows this to happen; had the values been URL-encoded before hashing, this bug would not be possible.
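The whole round trip can be sketched as follows. The key, access code, and amounts are made up; only the unencoded name=value hashing scheme and the injected field names come from the write-up above.

```python
import hashlib
import hmac
from urllib.parse import parse_qsl

SECRET = b"example-secret"  # placeholder merchant signing key

def sign(params: dict) -> str:
    # HMAC-SHA256 over sorted name=value pairs; values are NOT URL-encoded
    raw = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    return hmac.new(SECRET, raw.encode(), hashlib.sha256).hexdigest()

# Attacker-supplied CVV (only validated in client-side JavaScript)
cvv = "555&vpc_TransactionNo=123456&vpc_TxnResponseCode=0"
request = {"vpc_AccessCode": "abc123", "vpc_Amount": "10000",
           "vpc_CardSecurityCode": cvv}
signature = sign(request)

# The exact string the gateway hashed:
raw = "&".join(f"{k}={v}" for k, v in sorted(request.items()))

# Posted back to the gateway, the very same bytes parse as a success response
parsed = dict(parse_qsl(raw))
assert parsed["vpc_TxnResponseCode"] == "0"
assert parsed["vpc_TransactionNo"] == "123456"
# and the signature still verifies, because the hashed bytes are unchanged
assert hmac.new(SECRET, raw.encode(), hashlib.sha256).hexdigest() == signature
```

The attacker never talks to MIGS: the gateway signs a "request" that, reparsed, already reads as a successful response.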

Response from MIGS

MasterCard did not respond to this bug in the HMAC-SHA256 version. When reporting it, I CC-ed several people who handled the previous bug. None of the emails bounced. Not even a “we are checking this” reply from them. They also have my Facebook in case they need to contact me (from the interaction about the MD5 bug).

Some people are sneaky and will try to deny that they have received a bug report, so now when reporting a bug I put it in a password-protected post (that is why you can see several password-protected posts on this blog). So far there have been at least 3 views from MasterCard IP addresses (3 views that entered the password). They have to type in a password to read the report, so it is impossible for them to have accidentally clicked it without reading it. I have nagged them every week for a reply.

My expectation was that they would try to warn everyone connecting to their system to check and filter for injections.

Flaws In Payment Gateways

As an extra note: even though payment gateways handle money, they are not as secure as people think. During my pentests I found several flaws in the design of the payment protocol on several intermediate gateways. Unfortunately, I can’t go into detail on those (when I say “pentests”, it means work done under NDA).

I also found flaws in implementations: for example, a hash length extension attack, an XML signature verification error, etc. One of the simplest bugs I found was in Fusion Payments. The first bug I found was that they didn’t even check the signature from MIGS. That means we could just alter the data returned by MIGS and mark the transaction as successful, simply by changing a single character from F (failure) to 0 (success).

So basically we can just enter any credit card number, get a failed response from MIGS, change it, and suddenly the payment is successful. This is a 20-million-USD company, and I got 400 USD for this bug. This is not the first payment gateway with this flaw; during a pentest I found this exact bug in another payment gateway. Despite the relatively low bounty, Fusion Payments is currently the only payment gateway I contacted that is very clear about its bug bounty program, and it was very quick in responding to my emails and fixing the bugs.
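A minimal sketch of why this check matters, with a made-up key (the field names follow the vpc_ convention used above): a client that reads vpc_TxnResponseCode without verifying the signature accepts the flipped byte, while one that verifies it does not.

```python
import hashlib
import hmac

SECRET = b"example-secret"  # placeholder; in reality shared between MIGS and the gateway

def signature(params: dict) -> str:
    raw = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    return hmac.new(SECRET, raw.encode(), hashlib.sha256).hexdigest()

def verify(params: dict, claimed: str) -> bool:
    # constant-time comparison avoids leaking the signature byte by byte
    return hmac.compare_digest(signature(params), claimed)

response = {"vpc_TransactionNo": "123456", "vpc_TxnResponseCode": "F"}  # failed payment
sig = signature(response)

# Attacker flips the failure code to "success" before it reaches the merchant
tampered = dict(response, vpc_TxnResponseCode="0")

assert verify(response, sig)        # the genuine response verifies
assert not verify(tampered, sig)    # a gateway that checks the signature catches this
```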


Payment gateways are not as secure as you think.


Hackers can remotely access syringe infusion pumps to deliver fatal overdoses

Medical devices are increasingly found vulnerable to hacking. Earlier this month, the US Food and Drug Administration (FDA) recalled 465,000 pacemakers after they were found vulnerable to hackers.

Now, it turns out that a syringe infusion pump used in acute care settings could be remotely accessed and manipulated by hackers to impact the intended operation of the device, ICS-CERT warned in an advisory issued on Thursday.

An independent security researcher has discovered not just one or two, but eight security vulnerabilities in the Medfusion 4000 Wireless Syringe Infusion Pump, which is manufactured by Minnesota-based speciality medical device maker Smiths Medical.

The devices are used across the world for delivering small doses of medication in acute critical care, such as neonatal and pediatric intensive care and the operating room.

Some of the vulnerabilities discovered by Scott Gayou are high in severity and can easily be exploited by a remote attacker to “gain unauthorized access and impact the intended operation of the pump.”

According to the ICS-CERT, “Despite the segmented design, it may be possible for an attacker to compromise the communications module and the therapeutic module of the pump.”

The most critical vulnerability (CVE-2017-12725) has been given a CVSS score of 9.8 and is related to the use of hard-coded usernames and passwords to automatically establish a wireless connection if the default configuration is not changed.

The high-severity flaws include:

  • A buffer overflow bug (CVE-2017-12718) that could be exploited for remote code execution on the target device in certain conditions.
  • Lack of authentication (CVE-2017-12720) if the pump is configured to allow FTP connections.
  • Presence of hard-coded credentials (CVE-2017-12724) for the pump’s FTP server.
  • Lack of proper host certificate validation (CVE-2017-12721), leaving the pump vulnerable to man-in-the-middle (MitM) attacks.

The remaining flaws are of medium severity and could be exploited by attackers to crash the communications and operational modules of the device, authenticate to telnet using hard-coded credentials, and obtain passwords from configuration files.

These vulnerabilities impact devices running versions 1.1, 1.5 and 1.6 of the firmware. Smiths Medical plans to release a new product version, 1.6.1, in January 2018 to address the issues.

But in the meantime, healthcare organizations are advised to apply defensive measures, including assigning static IP addresses to pumps, monitoring network activity for malicious servers, installing the pumps on isolated networks, setting strong passwords, and regularly creating backups, until patches are released.



  1. No hard-coded passwords should be used in *any* device, and this includes medical devices.
  2. They are using FTP? Are you joking? The mere presence of an FTP server would fail PCI compliance for a bank. The reason a bank can’t use it is the same reason the medical device arena should not use it.

Equifax Announces Cybersecurity Incident – 143 million US Citizens

No Evidence of Unauthorized Access to Core Consumer or Commercial Credit Reporting Databases

Company to Offer Free Identity Theft Protection and Credit File Monitoring to All U.S. Consumers

September 7, 2017 — Equifax Inc. (NYSE: EFX) today announced a cybersecurity incident potentially impacting approximately 143 million U.S. consumers. Criminals exploited a U.S. website application vulnerability to gain access to certain files. Based on the company’s investigation, the unauthorized access occurred from mid-May through July 2017. The company has found no evidence of unauthorized activity on Equifax’s core consumer or commercial credit reporting databases.

The information accessed primarily includes names, Social Security numbers, birth dates, addresses and, in some instances, driver’s license numbers. In addition, credit card numbers for approximately 209,000 U.S. consumers, and certain dispute documents with personal identifying information for approximately 182,000 U.S. consumers, were accessed. As part of its investigation of this application vulnerability, Equifax also identified unauthorized access to limited personal information for certain UK and Canadian residents. Equifax will work with UK and Canadian regulators to determine appropriate next steps. The company has found no evidence that personal information of consumers in any other country has been impacted.


How three Equifax execs sold $1.8million of stock days after massive data breach – which wasn’t revealed to the public for more than SIX WEEKS

  • Equifax says the three company executives who sold stock days after company discovered major security breach were not aware of the hack at the time 
  • On Thursday, company disclosed a cyberattack that ran from May to July which exposed Social Security numbers and sensitive data of 143 million Americans 
  • The hack itself was discovered by the company on July 29
  • Days later, Chief Financial Officer John Gamble and two other executives, Rodolfo Ploder and Joseph Loughran, sold a combined $1.8million in stock 
  • Insider trading is a federal offense punishable by a maximum prison sentence of 20 years and a $5million fine, according to the SEC



Vivaldi boss: It’d be cool if Google went back to the ‘not evil’ schtick

The founder of the web browser Opera has accused Google of “anti-competitive” practices.

“A monopoly both in search and advertising, Google, unfortunately shows that they are not able to resist the misuse of power,” wrote Jon von Tetzchner, now CEO of Vivaldi, in a blog post on Monday.

He also intimated there may have been a connection between an interview published in tech glad-rag WIRED, in which he criticised data gathering and ad targeting by Google and Facebook, and the suspension of Vivaldi’s Google AdWords accounts two days later.

“Was this just a coincidence?” he wrote. “Or was it deliberate, a way of sending a message?”

The Opera founder said Vivaldi was able to work with Google’s requirements in order to lift the suspension “after almost three months”.

“I am saddened by this makeover of a geeky, positive company into the bully they are in 2017,” he wrote. “It is also fair to say that Google is now in a position where regulation is needed.”

What Von Tetzchner is suggesting, if true, would be remarkably mean-spirited, and we certainly don’t have any insight into whether his allegations are accurate, but Google has form on the antitrust front.

In June, EU regulators fined it €2.4bn for illegally promoting its shopping service ahead of others (it submitted a plan for dealing with that in August). Google has faced several regulation-linked court cases in and outside the European Union.


I agree with this statement. Recall Google’s Appendix A, which states both “do no evil” and that a search engine should not be controlled by a commercial entity.

They argued that commerce would corrupt the nature of the search results, and today we see that their initial thoughts were true.

Google knew they would be corrupted by money, and stated it right when the company was forming. Pay attention to Appendix A… it’s priceless.

