
Life is about to get harder for websites without HTTPS

In case you haven’t noticed, we’re on a rapid march towards a “secure by default” web when it comes to protecting traffic. For example, back in Feb this year, 20% of the Alexa Top 1 Million sites were forcing the secure scheme:

[Chart: HTTPS adoption at 20% of the Alexa Top 1 Million sites]

These figures are from Scott Helme’s biannual report and we’re looking at a 5-month-old number here. I had a quiet chat with him while writing this piece and apparently that number is now at 28% of the Top 1 Million. Even more impressive is the rate at which it’s changing – the chart above shows that it’s up 45% in only 6 months!

Perhaps even more impressive again is the near 60% of web requests Mozilla is seeing that are sent securely:

[Chart: Percentage of Web Pages Loaded by Firefox Using HTTPS]

Now that’s inevitably a lot of requests centred around the big players on the web who are doing HTTPS ubiquitously (think Gmail, Facebook, Twitter), but the trend is clear – HTTPS is being adopted at a fierce rate. Back in Jan I wrote about how we’d already reached the tipping point, in part because of browser measures like this:

The “shaming” of websites serving login or payment forms insecurely began with Chrome in January then Firefox shortly afterwards (occasionally with rather humorous consequences). And it worked too – soon after that tweet, Qantas did indeed properly secure their site. The indignity of visitors being told that a site is insecure inevitably helps force the hand of the site operator and HTTPS follows.

But per the title of this post, life is about to get a whole lot harder for sites that aren’t already doing HTTPS across the board. Here’s what you’re going to see in only a few months’ time:

Let’s dissect what’s going on here: at the time of writing we’re at Chrome 59, which behaves the same as Chrome 58 in the image above, so non-secure sites carry no visual warning (at least not unless they contain a login or payment form). However, once we hit version 62, any website with form fields served over HTTP will show a “Not secure” warning to the user. Think about what that means – even a site with nothing more than a search form will start to show the warning.
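If you want to know in advance whether a site will be caught by this change, a quick check is whether plain-HTTP requests end up redirected to HTTPS and whether an HSTS header is being sent. Here’s a minimal sketch in Python (assuming the third-party requests library; the hostname is just a placeholder):

```python
# Minimal sketch: does a site still answer over plain HTTP, and does it
# redirect to HTTPS / send HSTS? Assumes the third-party 'requests' library;
# "example.com" is a placeholder hostname.
import requests

def https_status(host: str) -> None:
    resp = requests.get(f"http://{host}/", timeout=10, allow_redirects=True)
    hsts = resp.headers.get("Strict-Transport-Security")
    print(f"{host}: final URL = {resp.url}")
    print(f"  redirected to HTTPS: {resp.url.startswith('https://')}")
    print(f"  HSTS header: {hsts or 'absent'}")

https_status("example.com")
```

If the final URL is still http:// and there’s no HSTS, every form field on the site will trip Chrome 62’s warning.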

 

Reference:

https://www.troyhunt.com/life-is-about-to-get-harder-for-websites-without-https/

Master Decryption Key for Petya Ransomware released by creator

The master key for the original version of the Petya ransomware has been released by its creator, allowing Petya-infected victims to recover their encrypted files without paying any ransom money.

But wait, Petya is not NotPetya.

Do not confuse Petya ransomware with the latest destructive NotPetya ransomware (also known as ExPetr and Eternal Petya) attacks that wreaked havoc across the world last month, massively targeting multiple entities in Ukraine and parts of Europe.

The Petya ransomware has three variants that have infected many systems around the world, but now the author of the original malware, who goes by the pseudonym Janus, has made the master key available, releasing it on Wednesday.

According to security researchers, victims infected with previous variants of Petya ransomware – including Red Petya (the first version), Green Petya (the second version) and early versions of the GoldenEye ransomware – can get their encrypted files back using the master key.

The authenticity of the master key has been verified by an independent Polish information security researcher known as Hasherezade.

“Similarly to the authors of TeslaCrypt, he released his private key, allowing all the victims of the previous Petya attacks, to get their files back,” Hasherezade wrote in a post on the Malwarebytes blog on Thursday.

“Thanks to the currently published master key, all the people who have preserved the images of the disks encrypted by the relevant versions of Petya, may get a chance of getting their data back.”

Although the first and second versions of Petya were cracked last year, the private key released by Janus offers the fastest and most reliable way yet for Petya-infected victims to decrypt their files – especially those locked with the previously uncrackable third version.

Meanwhile, Kaspersky Lab research analyst Anton Ivanov also analyzed Janus’ master key and confirmed that it unlocks all versions of Petya ransomware, including GoldenEye.

Janus created the GoldenEye ransomware in 2016 and sold the variants as Ransomware-as-a-Service (RaaS) to other hackers, allowing anyone to launch ransomware attacks with just one click, encrypting systems and demanding a ransom to unlock them.

If the victim pays, Janus gets a cut of the payment. But in December, he went silent.

However, according to the Petya author, his malware has been modified by another threat actor to create NotPetya that targeted computers of critical infrastructure and corporations in Ukraine as well as 64 other countries.

The NotPetya ransomware also makes use of the NSA’s leaked Windows hacking exploits EternalBlue and EternalRomance to rapidly spread within a network, and the WMIC and PsExec tools to remotely execute malware on machines.

Security experts even believe the real intention behind the recent outbreak – thought to be bigger than the WannaCry attack – was to cause disruption, rather than just another ransomware campaign.

According to researchers, NotPetya is in reality wiper malware that destroys systems outright, erasing all records from the targeted machines; the ransom demand was just a way to divert the world’s attention from a state-sponsored attack to an apparent malware outbreak.

The master key is no help to those infected with NotPetya, but it can help people who were attacked by the previous variants of Petya and GoldenEye ransomware.

Security researchers are using the key to build free decryptors for victims who still have crypto-locked hard drives.

Reference:

https://thehackernews.com/2017/07/petya-ransomware-decryption-key.html

New attack can now decrypt satellite phone calls in “real time”

Chinese researchers have discovered a way to rapidly decrypt satellite phone communications — within a fraction of a second in some cases.

The paper, published this week, expands on previous research by German academics in 2012 by rapidly speeding up the attack and showing that the encryption used in popular Inmarsat satellite phones can be cracked in “real time.”

 

Satellite phones are used by those in desolate environments, including at high altitude and at sea, where traditional cell service isn’t available. Modern satellite phones encrypt voice traffic to prevent eavesdropping. It’s that encryption – the GMR-2 cipher – that was the focus of the research, given that it’s used in most satellite phones today.

The researchers tried “to reverse the encryption procedure to deduce the encryption-key from the output keystream directly,” rather than using the German researchers’ method of recovering an encryption key using a known-plaintext attack.

Running their proposed inversion attack thousands of times against the satellite keystream (on a 3.3GHz platform), the researchers were able to reduce the search space for the 64-bit encryption key, effectively making the decryption key far easier to find.

The end result was that encrypted data could be cracked in a fraction of a second.
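To make the idea of “reducing the search space” concrete, here’s a toy sketch in Python. This is emphatically not GMR-2: it assumes a made-up cipher whose i-th keystream byte is simply key[i] XOR key[i+1], purely to show how inverting a keystream relation collapses a brute-force search.

```python
# Toy illustration (NOT GMR-2): inverting a keystream relation shrinks the
# key search. Assumed toy cipher: keystream[i] = key[i] ^ key[i+1].
import os

def toy_keystream(key: bytes) -> bytes:
    return bytes(key[i] ^ key[i + 1] for i in range(len(key) - 1))

secret = os.urandom(4)       # 4-byte key => 2**32 keys by naive brute force
ks = toy_keystream(secret)   # what the attacker observes

# Invert the relation: guessing key[0] determines key[1..3] directly,
# so only 2**8 candidate keys survive instead of 2**32.
candidates = []
for k0 in range(256):
    key = [k0]
    for b in ks:
        key.append(key[-1] ^ b)
    candidates.append(bytes(key))

assert secret in candidates
print(f"search space reduced from 2**32 to {len(candidates)} candidates")
```

The real attack plays the same game against GMR-2’s far more complex keystream equations, which is why it takes thousands of runs rather than one.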

 

“This again demonstrates that there exists serious security flaws in the GMR-2 cipher, and it is crucial for service providers to upgrade the cryptographic modules of the system in order to provide confidential communication,” said the researchers.

An Inmarsat spokesperson said Thursday that the company “immediately took action to address the potential security issue and this was fully addressed” in 2012. “We are entirely confident that the issue… has been completely resolved and that our satellite phones are secure,” the spokesperson said.

Matthew Green, a cryptography professor at Johns Hopkins University, blogged about the German read-collision based technique in 2012. “Satellite telephone security matters,” he said at the time. “In many underdeveloped rural areas, it’s the primary means of communicating with the outside world. Satphone coverage is also important in war zones, where signal privacy is of more than academic interest,” he added.

 

Reference:

http://www.zdnet.com/article/encryption-satellite-phones-unscramble-attack-research/

TLS security: Past, present and future

https://www.helpnetsecurity.com/2017/07/03/tls-security/

The Transport Layer Security (TLS) protocol as it stands today has evolved from the Secure Sockets Layer (SSL) protocol from Netscape Communications and the Private Communication Technology (PCT) protocol from Microsoft that were developed in the 1990s, mainly to secure credit card transactions over the Internet.

It soon became clear that a unified standard was required, and an IETF TLS working group was chartered. As a result, TLS 1.0 was specified in 1999, TLS 1.1 in 2006, TLS 1.2 in 2008, and TLS 1.3 will hopefully be released soon. Each protocol version improved on its predecessor and mitigated specific attacks.

 

As is usually the case in security, there is a “cops and robbers” game going on between the designers and developers of the TLS protocol and the people who try to break it (be it from the hacker community or from academia). Unfortunately, this game is open-ended: it will never end and has no winner.

Since the early days of the SSL/TLS protocols, the security community has been struggling with various attacks that have made many press headlines. Examples include protocol-level attacks, like BEAST, CRIME, TIME, BREACH, Lucky 13, POODLE, FREAK, Logjam, DROWN, SLOTH, Sweet32, and the triple handshake attack, as well as pure implementation bugs, like Apple’s “goto fail” and Heartbleed.

In the evolution of the SSL/TLS protocols, all of these attacks and incidents were taken into account. For example, the weaknesses and vulnerabilities that enabled attacks like BEAST, POODLE, and Lucky 13 led to TLS 1.1. All remaining weaknesses and vulnerabilities have been addressed in the specification of TLS 1.3 (the version number may still change, because the protocol changes are fundamental and substantial).

From a security perspective, TLS 1.3 is a major breakthrough that tries to get rid of all cryptographic techniques and primitives that are known to be weak and exploitable. For example, ciphers are only allowed in TLS 1.3 if they provide authenticated encryption with associated data (AEAD). Most importantly, this disallows all block ciphers operated in cipher block chaining (CBC) mode, which has been the source of many attacks in the past. It also disallows the formerly used technique of first authenticating data (by generating a message authentication code) and then encrypting it; instead, both operations must be performed as a single, integrated operation.
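To make the AEAD requirement concrete, here’s a minimal sketch in Python using the third-party cryptography package. It shows the class of primitive TLS 1.3 mandates (encryption and authentication in one operation, with associated data that is authenticated but not encrypted); it illustrates the building block, not TLS itself.

```python
# Minimal AEAD sketch with AES-GCM (third-party 'cryptography' package).
# Encryption and authentication happen in one call; 'aad' is authenticated
# but transmitted in the clear.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)            # 96-bit nonce; must never repeat per key
aad = b"record header"            # associated data (not encrypted)

ct = aesgcm.encrypt(nonce, b"hello TLS 1.3", aad)
pt = aesgcm.decrypt(nonce, ct, aad)   # raises InvalidTag on any tampering
assert pt == b"hello TLS 1.3"
```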

TLS 1.3 also disallows cryptographic algorithms that are known to be weak, such as stream ciphers like RC4, hash functions like MD5 or SHA-1, and block ciphers like 3DES, as well as all types of export-grade cryptography. Due to attacks like CRIME, TIME, and BREACH, we know today that securely combining compression and encryption is tricky, and TLS 1.3 therefore abandons TLS-level compression entirely. Finally, TLS 1.3 is highly efficient and can therefore get rid of session resumption and renegotiation; these shortcut features have led to distinct attacks in the past, namely the session renegotiation and triple handshake attacks.
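On the client side you can already refuse anything older than TLS 1.3. Here’s a minimal sketch using Python’s standard ssl module (it needs a build linked against OpenSSL 1.1.1 or later; the hostname is a placeholder):

```python
# Minimal sketch: require TLS 1.3 or fail the handshake.
# Needs Python's ssl linked against OpenSSL 1.1.1+.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        # Only AEAD suites (e.g. TLS_AES_256_GCM_SHA384) can appear here.
        print(tls.version(), tls.cipher())
```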

 

The bottom line is that TLS 1.3 represents the state of the art in cryptographic research. Does this mean that the series of attacks, countermeasures, and counterattacks will eventually come to an end, and that we are going to see a stable situation in terms of security? Probably not. Once again, the problem lies in the difference between theory and practice. To put it in the words of Albert Einstein: “In theory, theory and practice are the same. In practice, they are not.” That is certainly true for TLS 1.3: in theory, TLS 1.3 is secure, but in practice, we don’t know yet.

There are at least two uncertainties: First, we don’t know how well the protocol is going to be implemented (keep in mind that in any nontrivial security protocol, implementation bugs are likely to occur). Second, implementations of the protocol need to be configured and used properly (keep in mind that security breaches often occur because security products are misconfigured or misused).

This is similar to the real world: if we have a security system in place (e.g., a burglar alarm system), we can still misconfigure it, or use it in a way that is inappropriate and undermines its security.

Security is, and remains, a tricky and multifaceted business. There are many things that can go wrong. A mathematician would say that having secure technologies in place is a necessary but not sufficient condition for achieving security.

Many complementary things must be in place before a technology can unfold its real power. This applies to TLS security in general, and to TLS 1.3 in particular. The most important example to mention here is a proper way to manage public key certificates on a global scale – still one of the Achilles’ heels of using TLS properly. So stay tuned: the “cops and robbers” game is likely to continue, and you may even participate in this challenging game.
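As one small, concrete slice of that certificate-management problem, here’s a minimal sketch in Python that checks how many days remain on a server’s certificate (the hostname is a placeholder; the default context also verifies the chain and hostname for you):

```python
# Minimal sketch: days remaining on a server certificate.
import socket
import ssl
import time

def cert_days_remaining(host: str, port: int = 443) -> float:
    ctx = ssl.create_default_context()   # verifies chain + hostname
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400

print(f"{cert_days_remaining('example.com'):.0f} days remaining")
```

Run periodically (cron, CI), a check like this catches one of the most common TLS failures – the quietly expired certificate – before your visitors do.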

 

What A Global Anti-Encryption Regime Could Look Like – UK

Our Prime Minister wrote the dangerous RIPA law, and this year, to my horror, she persists in attempting to end widespread encryption. I’m concerned that these laws are open to abuse by hundreds of government departments, in addition to their use against everyday financial crime. In truth, I am more worried about government abuse than about any crimes that criminals may commit.

https://www.eff.org/deeplinks/2017/06/five-eyes-unlimited

Before she was elevated to the role of Prime Minister by the fallout from Brexit, Theresa May was the author of the UK’s Investigatory Powers bill, which spelled out the UK’s plans for mass surveillance in a post-Snowden world.

At the unveiling of the bill in 2015, May’s officials performed the traditional dance: they stated that they would be looking at controls on encryption, and then stated definitively that their new proposals included “no backdoors”.

Sure enough, the word “encryption” does not appear in the Investigatory Powers Act (IPA). That’s because it is written so broadly it doesn’t need to.

We’ve covered the IPA before at EFF, but it’s worth re-emphasizing some of the powers it grants the British government.

  • Any “communications service provider” can be served with a secret warrant, signed by the Home Secretary. Communications service provider is interpreted extremely broadly to include ISPs, social media platforms, mail services and other messaging services.
  • That warrant can describe a set of people or organizations that the government wants to spy upon.
  • It can require tech companies to insert malware onto their users’ computers, re-engineer their own technology, or use their networks to interfere with any other system.
  • The warrant explicitly allows those companies to violate any other laws in complying with the warrant.
  • Beyond particular warrants, private tech companies operating in the United Kingdom also have to respond to “technical capability notices”, which will require them “to provide and maintain the capability to disclose, where practicable, the content of communications or secondary data in an intelligible form,” as well as to permit targeted and mass surveillance and government hacking.
  • Tech companies also have to provide the UK government with new product designs in advance, so that the government has time to require new “technical capabilities” before the products are available to customers.

These capabilities alone already go far beyond the Nineties’ dreams of a blanket ban on crypto. Under the IPA, the UK claims the theoretical ability to order a company like Apple or Facebook to remove secure communication features from their products—while being simultaneously prohibited from telling the public about it.

Companies could be prohibited from fixing existing vulnerabilities, or required to introduce new ones in forthcoming products. Even incidental users of communication tech could be commandeered to become spies in Her Majesty’s Secret Service: those same powers also allow the UK to, say, instruct a chain of coffee shops to use its free WiFi service to deploy British malware on its customers. (And, yes, coffee shops are given by officials as a valid example of a “communications service provider.”)

Wouldn’t companies push back against such demands? Possibly: but it’s a much harder fight to win if it’s not just the UK making the demand, but an international coalition of governments putting pressure on them to obey the same powers. This, it seems, is what May’s government wants next.

The Lowest Common Privacy Denominator

Since the IPA passed, May has repeatedly declared her intent to create an international agreement on “regulating cyberspace”. The difficulty of enforcing many of the theoretical powers of the IPA makes this particularly pressing.

The IPA includes language that makes it clear that the UK expects foreign companies to comply with its secret warrants. Realistically, it’s far harder for UK law enforcement to get non-UK technology companies to act as their personal hacking teams. That’s one reason why May’s government has talked up the IPA as a “global gold standard” for surveillance, and one that they hope other countries will adopt.

 

 

Cyber attack hits CHERNOBYL radiation system: ‘Goldeneye’ ransomware strikes

http://www.dailymail.co.uk/news/article-4643752/Europe-hit-new-WannaCry-virus.html

Chernobyl’s radiation monitoring system has been hit by the attack, with its sensors shut down, while UK advertising giant WPP, the largest agency in the world, was among dozens of firms affected.

The ransomware appears to have been spread through popular accounting software and specifically targeted at bringing down business IT systems.

The outage began in Ukraine as the country’s power grid, airport, national bank and communications firms were first to report problems, before it spread rapidly throughout Europe.

Companies in the US, Germany, Norway, Russia, Denmark and France are among those to have confirmed issues so far.

[Image: an ATM in Ukraine. Users are being shown a message saying their data has been encrypted, with some demands asking for £300 in the anonymous currency Bitcoin to retrieve it.]

More than 200,000 victims in 150 countries were infected by last month’s WannaCry ransomware, which surfaced in the UK and Spain before spreading globally.

But cyber security experts have warned that this time the virus is much more dangerous, because it has no ‘kill switch’ and is designed to spread rapidly through networks.

Marcus Hutchins, who foiled the previous WannaCry attack by discovering a way to stop it from infecting new computers, told MailOnline that even if users pay the fee their files could now be lost forever.

He said: ‘The company that hosts the email account which the ransomware asks you to contact has closed the account. There’s no way to get files back.

‘It’s early days – we don’t know if we can find a fix yet. But if it’s decryptable we will find a way.’

Hutchins, 22, continued: ‘Everyone’s looking at this right now and I’m working with other researchers.

‘I was just praying it wasn’t the Wannacry exploit again. Ideally we’ll have to find a way to decrypt the files or else people are not going to get them back.’

The ransomware targets computers running Windows which have not installed the latest security updates released by Microsoft.

Kali Linux – Promiscuous Mode – Wireless Hacking

 

Six Modes of Wireless

  • Monitor
  • Managed
  • Ad hoc
  • Master
  • Mesh
  • Repeater

A sketch for switching between these modes follows below.
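Monitor mode is the one most wireless attacks rely on, since it captures all 802.11 frames on a channel rather than just traffic addressed to you. Here’s a minimal sketch using the stock ip/iw tools shipped with Kali (run as root; “wlan0” is a placeholder and your driver must support monitor mode):

```python
# Minimal sketch: switch a wireless interface's mode via the stock
# ip/iw tools (root required; driver must support the target mode).
import subprocess

def set_mode(iface: str, mode: str) -> None:
    subprocess.run(["ip", "link", "set", iface, "down"], check=True)
    subprocess.run(["iw", "dev", iface, "set", "type", mode], check=True)
    subprocess.run(["ip", "link", "set", iface, "up"], check=True)

set_mode("wlan0", "monitor")    # revert with: set_mode("wlan0", "managed")
```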

A Cyberattack ‘the World Isn’t Ready For’


 

Reference: NYTimes

Qubes OS – Privacy

https://www.qubes-os.org/


https://www.qubes-os.org/downloads/

The OpenVPN post-audit bug bonanza

https://guidovranken.wordpress.com/2017/06/21/the-openvpn-post-audit-bug-bonanza/

I love OpenVPN, and wish them the best of luck in resolving these issues.

*****

Summary

I’ve discovered 4 important security vulnerabilities in OpenVPN. Interestingly, these were not found by the two recently completed audits of OpenVPN code. Below you’ll find mostly technical information about the vulnerabilities and about how  I found them, but also some commentary on why commissioning code audits isn’t always the best way to find vulnerabilities.

Here you can find the latest version of OpenVPN: https://openvpn.net/index.php/open-source/downloads.html

This was a labor of love. Nobody paid me to do this. If you appreciate this effort, please donate BTC to 1D5vYkiLwRptKP1LCnt4V1TPUgk7cxvVtg.

Introduction

After a hardening of the OpenVPN code (as commissioned by the Dutch intelligence service AIVD) and two recent audits 1 2, I thought it was now time for some real action ;).

Most of these issues were found through fuzzing. I hate admitting it, but my chops in the arcane art of reviewing code manually, acquired through grueling practice, are dwarfed by the fuzzer in one fell swoop; the mortal mind can only retain and comprehend so much information at a time, and for programs that perform long cycles of complex, deeply nested operations, it is simply not feasible to expect a human to perform an encompassing and reliable verification.

End users and companies who want to invest in validating the security of an application written in an “unsafe” language like C, such as those who crowd-funded the OpenVPN audit, should not request a manual source code audit, but rather task the experts with the goal of ensuring intended operation and finding vulnerabilities, using whatever strategy provides the optimal yield for a given funding window.

Upon first thought you’d assume both endeavors boil down to the same thing, but my fuzzing-based strategy is evidently more effective. What’s more, once a set of fuzzers has been written, these can be integrated into a continuous integration environment for permanent protection henceforth, whereas a code review only provides a “snapshot” security assessment of a particular software version.
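To show what “a set of fuzzers” looks like in practice, here’s a minimal harness sketch using Google’s atheris fuzzer for Python. OpenVPN itself is C and is fuzzed with native tools such as libFuzzer, so this only shows the shape of the idea; parse_packet is a hypothetical stand-in for whatever parser you want to hammer.

```python
# Minimal fuzz-harness sketch with 'atheris' (pip install atheris).
# 'parse_packet' is a hypothetical target, not OpenVPN code.
import sys
import atheris

def parse_packet(data: bytes) -> None:
    # Hypothetical parser: 4-byte big-endian length header, then payload.
    if len(data) < 4:
        raise ValueError("short packet")
    length = int.from_bytes(data[:4], "big")
    _payload = data[4:4 + length]

def test_one_input(data: bytes) -> None:
    try:
        parse_packet(data)
    except ValueError:
        pass  # expected rejection of malformed input; anything else is a bug

atheris.Setup(sys.argv, test_one_input)
atheris.Fuzz()
```

Once written, a harness like this runs unattended in CI on every commit, which is exactly the “permanent protection” a one-off audit can’t give you.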

Manual reviews may still be part of the effort, but only where automation (fuzzing) is not adequate. Some examples:

  • verify cryptographic operations
  • other application-level logic, like path traversal (though a fuzzer may help if you’re clever)
  • determine the extent to which timing discrepancies divulge sensitive information
  • determine the extent to which the size of (encrypted) transmitted data divulges sensitive information. Beyond the sphere of cryptanalysis, I think this is an underappreciated way of looking at security.
  • applications that contain a lot of pointer comparisons (not a very good practice to begin with — OpenVPN is very clean in this regard, by the way) may require manual inspection to see if behavior relies on pointer values (example)
  • can memory leaks (which may be considered a vulnerability in themselves) lead to more severe vulnerabilities? (e.g. will memory corruption take place if the system is drained of memory?)
  • can very large inputs (say megabytes, gigabytes, which would be very slow to fuzz) cause problems?
  • does the software rely on the behavior of certain library versions/flavors? (eg. a libc function that behaves a certain way with glibc may behave differently with the BSD libc — I’ve tried making a case around the use of ctime() in OpenVPN)

So doing a code audit to find memory vulnerabilities in a C program is a little like asking car wash employees to clean your car with a makeup brush. A very noble pursuit indeed, and if you manage to complete it, the overall results may be even better than automated water blasting, but unless you have infinite funds and time, resources are better spent on cleaning the exterior with a machine, vacuuming the interior followed by an evaluation of the overall cleanliness, and acting where necessary.

 
