
Edward Snowden Interview with acTVism Munich | January 15th, 2017

Edward Snowden gave a live video interview with acTVism Munich as one of six guests to discuss the topic of “Freedom & Democracy – Global Issues in Context”. The interview was hosted by Zain Raza in Munich, Germany.

Shadow Brokers Now Selling Windows Exploits, Antivirus Bypass Tools

The Shadow Brokers, a group of hackers that stole exploits and hacking tools from the National Security Agency (NSA), are now selling some of these tools, including Windows exploits and antivirus bypass tools, on a website hidden on the ZeroNet network.

All previously released hacking tools worked only against UNIX-based operating systems. This is the first time the Shadow Brokers have released Windows tools.

 

According to a message posted by the Shadow Brokers on their website, the entire “Windows Warez” collection is available for 750 Bitcoin ($675,000).

Windows download here:

https://bit.no.com:43110/theshadowbrokers.bit/page/windows/


Hacker Leaks Stolen NSA Windows Hacking Tool For Free Download

Unix download here:

https://bit.no.com:43110/theshadowbrokers.bit/page/unix/


https://bit.no.com:43110/theshadowbrokers.bit/post/messagefinale/


Englandboggy – great name


ENGLANDBOGGY, for example, is a privilege-escalation attack that appears to load a shared library into a privileged XORG process for local root access. Another privilege escalation, ENDLESSDONUT, elevates a user from the “nobody” uid to “root” by exploiting Apache httpd, a particularly interesting attack. The leaked file listings contain a number of tools designed for stealthy operation on a compromised UNIX host, and as such could be vital for forensic analysts and incident-response teams attempting to determine whether they have been impacted by the Equation Group and its tools. Among the collection of data is the man page for a forensically aware network-capture tool that appears to have been developed professionally. The output below shows the man page provided for the “strifeworld” tool.

https://www.myhackerhouse.com/merry-haxmas-shadowbrokers-strike-again/

We found some discrepancies in the data when we compared the files to the table; however, the data mostly supported the Shadow Brokers’ classification of each tool. The file collection consists of the output of the “find” command in each project, alongside a screenshot of a file-system browsing utility, which has the added benefit of providing file-type information. The bulk of these projects are not provided in source-code form and instead appear to be binary files, which further strengthens the hypothesis that these files were compromised from an operational staging post or actively obtained from a field operation by a third party. If they had been in source-code form, an insider leak would be more likely: binaries, rather than their source-code counterparts, are what is typically used in operations and distributed to team members. There is no conclusive evidence identifying the source of the leak, so we will focus on the risks that the unreleased data may introduce. In addition to the screenshots and file listings, some files contain snippets of usage data, and in one case a full-blown man page is provided. The team at Hacker House has been able to determine the following information about the as-yet-unreleased Equation Group toolkits.

Reference:

https://www.bleepingcomputer.com/news/security/shadow-brokers-now-selling-windows-exploits-antivirus-bypass-tools/

https://www.myhackerhouse.com/merry-haxmas-shadowbrokers-strike-again/

Obama Expands Surveillance Powers on His Way Out – EFF

With mere days left before President-elect Donald Trump takes the White House, President Barack Obama’s administration just finalized rules to make it easier for the nation’s intelligence agencies to share unfiltered information about innocent people.

New rules issued by the Obama administration under Executive Order 12333 will let the NSA—which collects information under that authority with little oversight, transparency, or concern for privacy—share the raw streams of communications it intercepts directly with agencies including the FBI, the DEA, and the Department of Homeland Security, according to a report today by the New York Times.

That’s a huge and troubling shift in the way those intelligence agencies receive information collected by the NSA. Domestic agencies like the FBI are subject to more privacy protections, including warrant requirements. Previously, the NSA shared data with these agencies only after it had screened the data, filtering out unnecessary personal information, including information about innocent people whose communications were swept up in the NSA’s massive surveillance operations.

As the New York Times put it, with the new rules, the government claims to be “reducing the risk that the N.S.A. will fail to recognize that a piece of information would be valuable to another agency, but increasing the risk that officials will see private information about innocent people.”

Under the new, relaxed rules, there are still conditions that need to be met before the NSA will grant domestic intelligence analysts access to the raw streams of data it collects. And analysts can only search that raw data for information about Americans for foreign intelligence and counterintelligence purposes, not domestic criminal cases.

However—and this is especially troubling—“if analysts stumble across evidence that an American has committed any crime, they will send it to the Justice Department,” the Times wrote.  So information that was collected without a warrant—or indeed any involvement by a court at all—for foreign intelligence purposes with little to no privacy protections, can be accessed raw and unfiltered by domestic law enforcement agencies to prosecute Americans with no involvement in threats to national security.


Reference:

https://www.eff.org/deeplinks/2017/01/obama-expands-surveillance-powers-his-way-out

Best Crypto – Poly1305 papers

Poly1305-AES is a state-of-the-art secret-key message-authentication code suitable for a wide variety of applications.

Poly1305-AES computes a 16-byte authenticator of a message of any length, using a 16-byte nonce (unique message number) and a 32-byte secret key. Attackers can’t modify or forge messages if the message sender transmits an authenticator along with each message and the message receiver checks each authenticator.
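The core of this construction is a polynomial evaluated modulo the prime 2^130 − 5, with a per-nonce secret added at the end. The sketch below is a minimal pure-Python illustration of that core, in the form later standardized in RFC 8439: here the second 16 bytes of the key simply stand in for the AES output AES_k(n) that Poly1305-AES would derive from the nonce. It is for illustration only, not constant-time, and not a substitute for the real library.

```python
def poly1305_tag(msg: bytes, key: bytes) -> bytes:
    """One-time Poly1305 authenticator for msg under a 32-byte key.

    key[:16] is the polynomial evaluation point r (clamped below);
    key[16:] stands in for the per-nonce value s = AES_k(n) that
    Poly1305-AES adds at the end.
    """
    p = (1 << 130) - 5                       # the prime 2^130 - 5
    r = int.from_bytes(key[:16], "little")
    r &= 0x0ffffffc0ffffffc0ffffffc0fffffff  # "clamp" r as the spec requires
    s = int.from_bytes(key[16:32], "little")
    acc = 0
    for i in range(0, len(msg), 16):
        chunk = msg[i:i + 16]
        # Append a 0x01 byte and read the block as a little-endian integer.
        n = int.from_bytes(chunk + b"\x01", "little")
        acc = (acc + n) * r % p              # Horner evaluation of the polynomial
    return ((acc + s) % (1 << 128)).to_bytes(16, "little")
```

The clamping of r and the final truncating addition of s are exactly what make the function safe to use with many messages under one key, as discussed below.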

There’s a mailing list for Poly1305-AES discussions. To subscribe, send an empty message to poly1305-subscribe@list.cr.yp.to.

Why is Poly1305-AES better than other message-authentication codes?

Poly1305-AES has several useful features:

  • Guaranteed security if AES is secure. There’s a theorem guaranteeing that the security gap is extremely small (n/2^(102) per forgery attempt for 16n-byte messages) even for long-term keys (2^64 messages). The only way for an attacker to break Poly1305-AES is to break AES.
  • Cipher replaceability. If anything does go wrong with AES, users can switch from Poly1305-AES to Poly1305-AnotherFunction, with an identical security guarantee.
  • Extremely high speed. My Poly1305-AES software takes just 3843 Athlon cycles, 5361 Pentium III cycles, 5464 Pentium 4 cycles, 4611 Pentium M cycles, 8464 PowerPC 7410 cycles, 5905 PowerPC RS64 IV cycles, 5118 UltraSPARC II cycles, or 5601 UltraSPARC III cycles to verify an authenticator on a 1024-byte message. Poly1305-AES offers consistent high speed, not just high speed for one favored CPU.
  • Low per-message overhead. My Poly1305-AES software takes just 1232 Pentium 4 cycles, 1264 PowerPC 7410 cycles, or 1077 UltraSPARC III cycles to verify an authenticator on a 64-byte message. Poly1305-AES offers consistent high speed, not just high speed for long messages. Most competing functions are designed for long messages and don’t pay attention to short-packet performance.
  • Key agility. Poly1305-AES can fit thousands of simultaneous keys into cache, and remains fast even when keys are out of cache. Poly1305-AES offers consistent high speed, not just high speed for single-key benchmarks. Almost all competing functions use a large table for each key; as the number of keys grows, those functions miss the cache and slow down dramatically.
  • Parallelizability and incrementality. Poly1305-AES can take advantage of additional hardware to reduce the latency for long messages, and can be recomputed at low cost for a small modification of a long message.
  • No intellectual-property claims. I am not aware of any patents or patent applications relevant to Poly1305-AES.
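To make the security bound in the first bullet concrete, here is the arithmetic for a single 1024-byte message (so n = 64 sixteen-byte blocks):

```python
from math import ceil

msg_len = 1024            # bytes in the message
n = ceil(msg_len / 16)    # number of 16-byte blocks: 64
bound = n / 2**102        # forgery probability per attempt from the theorem
# 64 / 2^102 == 2^-96, i.e. roughly 1.3e-29 per forgery attempt
```

Even after 2^64 messages under one key, the attacker’s only practical route remains breaking AES itself.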

Guaranteed security, cipher replaceability, and parallelizability are provided by the standard polynomial-evaluation-Wegman-Carter-MAC framework. Within that framework, hash127-AES achieved extremely high speed at the expense of a large table for each key. The big advantage of Poly1305-AES is key agility: extremely high speed without any key expansion.

Other standard MACs are slower and less secure than Poly1305-AES. Specifically, HMAC-MD5 is slower and doesn’t have a comparable security guarantee; CBC-MAC-AES is much slower and has a weaker security guarantee. Both HMAC-MD5 and CBC-MAC-AES are breakable within 2^64 messages. I’m not saying that anyone is going to carry out this attack; I’m saying that everyone satisfied with the standard CBC security level should be satisfied with the even higher security level of Poly1305-AES.

How do I use Poly1305-AES in my own software?

My fast poly1305aes library is in the public domain. You can and should include it in your own programs, rather than going to the effort of linking to a shared library; the compiled code is between 6 and 10 kilobytes, depending on the CPU.

To get started, download and unpack the poly1305aes library:

     wget http://cr.yp.to/mac/poly1305aes-20050218.tar.gz
     gunzip < poly1305aes-20050218.tar.gz | tar -xf -

Compile the library (making sure to use appropriate compiler options for your platform, such as -m64 for the UltraSPARC) to get an idea of how it’s structured:

     cd poly1305aes-20050218
     env CC='gcc -O2' make

Copy the library files into your project:

     cp `cat FILES.lib` yourproject/
     cat Makefile.lib >> yourproject/Makefile

For any C program that will use Poly1305-AES, modify the program to include poly1305aes.h; also modify your Makefile to link the program with poly1305aes.a and to declare that the program depends on poly1305aes.a and poly1305aes.h.

Inside the program, to generate a 32-byte Poly1305-AES key, start by generating 32 secret random bytes from a cryptographically safe source: kr[0], kr[1], …, kr[31]. Then call

     poly1305aes_clamp(kr)

to create a 32-byte Poly1305-AES secret key kr[0], kr[1], …, kr[31].

Later, to send a message m[0], m[1], …, m[len-1] with a 16-byte nonce n[0], n[1], …, n[15] (which must be different for every message!), call

     poly1305aes_authenticate(a,kr,n,m,len)

to compute a 16-byte authenticator a[0], a[1], …, a[15].

After receiving an authenticated message a[0], a[1], …, a[15], n[0], n[1], …, n[15], m[0], m[1], …, m[len-1], call

     poly1305aes_verify(a,kr,n,m,len)

to verify the authenticator. Accept the message if poly1305aes_verify returns nonzero; otherwise throw it away.

Do not generate or accept messages longer than a gigabyte. If you need to send large amounts of data, you are undoubtedly breaking the data into small packets anyway; security requires a separate authenticator for every packet.
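The per-packet pattern described above (a fresh nonce for every message, and the authenticator verified before the payload is accepted) can be sketched as follows. This is an illustrative assumption, not the poly1305aes API: Python’s stdlib hmac is used as a stand-in for poly1305aes_authenticate/poly1305aes_verify, and the counter-based nonce framing is invented for the example.

```python
import hashlib
import hmac
import os
from typing import Optional


def send_packet(key: bytes, counter: int, payload: bytes) -> bytes:
    """Frame one packet as: 16-byte nonce || 16-byte authenticator || payload."""
    nonce = counter.to_bytes(16, "big")   # must be different for every message!
    tag = hmac.new(key, nonce + payload, hashlib.sha256).digest()[:16]
    return nonce + tag + payload


def recv_packet(key: bytes, packet: bytes) -> Optional[bytes]:
    """Verify the authenticator; return the payload, or None to drop the packet."""
    nonce, tag, payload = packet[:16], packet[16:32], packet[32:]
    expected = hmac.new(key, nonce + payload, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        return None                             # otherwise throw it away
    return payload


key = os.urandom(32)            # 32 secret random bytes, as in the text above
pkt = send_packet(key, 0, b"example payload")
```

The constant-time comparison mirrors the accept/reject behaviour of poly1305aes_verify: accept only on a nonzero (here, successful) check.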

Please make sure to set up a Googleable web page identifying your program and saying that it is “powered by Poly1305-AES.”

How does the Poly1305-AES implementation work?

Interested in writing your own Poly1305-AES implementation? Seeing whether Poly1305-AES can benefit from, say, AltiVec? Using Poly1305-AES in a language without C linkage? Checking Poly1305-AES test vectors? Building Poly1305-AES circuits? Adapting the Poly1305-AES computational techniques to other functions?

The simplest C implementation of Poly1305-AES is poly1305aes_test, which relies on GMP and OpenSSL. I suggest starting from the top: read poly1305aes_test_verify.c and work your way down.

Test implementations in other languages:

You can then move on to the serious implementations:

If you’re trying to achieve better speed, make sure you understand all the different situations covered by my speed tables. You might want to start with my essay on the difference between best-case benchmarks and the real world. I designed the Poly1305-AES software, and the underlying Poly1305-AES function, to provide consistent high speed in a broad range of applications. A slight speedup in some situations often means a big slowdown in others; a Poly1305-AES implementation making that tradeoff might be useful for some applications, but it will be at best an alternative, not a replacement.

Where can I learn more about Poly1305-AES?

There are four papers:

  • [poly1305] D. J. Bernstein. The Poly1305-AES message-authentication code. Proceedings of Fast Software Encryption 2005, to appear. 18pp. Document ID: 0018d9551b5546d97c340e0dd8cb5750. URL: http://cr.yp.to/papers.html#poly1305. Date: 2005.03.29. This paper gives the complete definition of Poly1305-AES, explains the Poly1305-AES design decisions, discusses the security of Poly1305-AES, and explains how to compute Poly1305-AES quickly.
  • [securitywcs] D. J. Bernstein. Stronger security bounds for Wegman-Carter-Shoup authenticators. Proceedings of Eurocrypt 2005, to appear. 17pp. Document ID: 2d603727f69542f30f7da2832240c1ad. URL: http://cr.yp.to/papers.html#securitywcs. Date: 2005.02.27. This paper proves security of this type of authenticator up to (and slightly beyond) 2^64 messages. Previous work by Shoup was limited to a smaller number of messages, often below 2^50.
  • [permutations] D. J. Bernstein. Stronger security bounds for permutations. 10pp. Document ID: 2f843f5d86111da8df8a14ef9ae1a3fb. URL: http://cr.yp.to/papers.html#permutations. Date: 2005.03.23. This paper presents a new proof of the same security bound. The new proof factors the previous proof into (1) the usual Wegman-Carter security bounds and (2) a general technique for replacing uniform random functions with uniform random permutations. Previous versions of the technique were limited to far fewer messages.
  • [cachetiming] D. J. Bernstein. Cache-timing attacks on AES. 37pp. Document ID: cd9faae9bd5308c440df50fc26a517b4. URL: http://cr.yp.to/papers.html#cachetiming. Date: 2005.04.14. This paper discusses timing leaks in AES software. This is an issue for all AES users, not just Poly1305-AES users.

I’m also giving three talks on Poly1305-AES: 2005.02.15, emphasizing the structure of message-authentication codes; 2005.02.21, emphasizing the difference between best-case benchmarks and the real world; 2005.05, emphasizing the security-bound proof.

 

Reference:

https://cr.yp.to/mac.html#poly1305-paper

Best Crypto – Things that use Curve25519

Things that use Curve25519

Updated: January 13, 2017

Here’s a list of protocols and software that use or support the superfast, super secure Curve25519 ECDH function from Dan Bernstein. Note that Curve25519 ECDH should be referred to as X25519.

This page is divided into Protocols, Networks, Operating Systems, Hardware, Software, TLS Libraries, Libraries, Miscellaneous, Timeline notes, and Support coming soon.

You may also be interested in this list of Ed25519 deployment.

Background info:

It has become increasingly common for "Curve25519" to refer to an
elliptic curve, while the original paper defined "Curve25519" as an
X-coordinate DH system using that curve. "Ed25519" unambiguously refers
to an Edwards-coordinate signature system using that curve.

Kenny and others in Toronto recommended changing terminology to clearly
separate these three items. Let me suggest the following terminology:

   * "X25519" is the recommended Montgomery-X-coordinate DH function.
   * "Ed25519" is the recommended Edwards-coordinate signature system.
   * "Curve25519" is the underlying elliptic curve.
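To make the X25519 terminology concrete, here is a compact, self-contained sketch of the X25519 function: the Montgomery-ladder scalar multiplication over Curve25519, following the construction later standardized in RFC 7748. It is educational only; it is not constant-time, and real applications should use a vetted library.

```python
P = 2**255 - 19  # the Curve25519 field prime


def _clamp_scalar(k: bytes) -> int:
    """Clear the low 3 bits and bit 255, set bit 254, read little-endian."""
    a = bytearray(k)
    a[0] &= 248
    a[31] &= 127
    a[31] |= 64
    return int.from_bytes(a, "little")


def x25519(k: bytes, u: bytes) -> bytes:
    """Montgomery ladder: scalar k times the point with x-coordinate u."""
    x1 = int.from_bytes(u, "little") & ((1 << 255) - 1)
    n = _clamp_scalar(k)
    x2, z2, x3, z3, swap = 1, 0, x1, 1, 0
    for t in reversed(range(255)):
        bit = (n >> t) & 1
        swap ^= bit
        if swap:  # conditional swap (done in constant time in real code)
            x2, x3, z2, z3 = x3, x2, z3, z2
        swap = bit
        A, B = (x2 + z2) % P, (x2 - z2) % P
        C, D = (x3 + z3) % P, (x3 - z3) % P
        DA, CB = D * A % P, C * B % P
        AA, BB = A * A % P, B * B % P
        E = (AA - BB) % P
        x3, z3 = (DA + CB) ** 2 % P, x1 * (DA - CB) ** 2 % P
        x2, z2 = AA * BB % P, E * (AA + 121665 * E) % P
    if swap:
        x2, x3, z2, z3 = x3, x2, z3, z2
    return ((x2 * pow(z2, P - 2, P)) % P).to_bytes(32, "little")
```

A Diffie-Hellman exchange is then two ladder calls per party: derive a public key from the base point (u = 9), then apply your private scalar to the peer’s public key; both sides arrive at the same shared secret.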

Protocols

  • DNS
    • DNSCurve — encrypted DNS between a resolver and authoritative server
    • DNSCrypt — encrypted DNS between a client and a resolver
  • Transport
    • CurveCP — a secure transport protocol
    • QUIC — a secure transport protocol
    • Noise — a framework for crypto protocols based on Diffie-Hellman key agreement
    • Nitro — a very fast, flexible, high-level network communication library
    • lodp — Lightweight Obfuscated Datagram Protocol
    • CUSP — a reliable and secure general purpose transport designed with peer-to-peer (P2P) networking in mind
    • Dust — A Polymorphic Engine for Filtering-Resistant Transport Protocols
    • RAET — (Reliable Asynchronous Event Transport) Protocol
    • Evernym — a high-speed, privacy-enhancing, distributed public ledger engineered for self-sovereign identity
    • SSH, thanks to the curve25519-sha256@libssh.org key exchange from the libssh team, adopted by OpenSSH and Tinyssh
  • Other
    • obfs4 — a look-like nothing obfuscation protocol
    • Riffle — an efficient communication system with strong anonymity
    • OMEMO — an XMPP Extension Protocol (XEP) for secure multi-client end-to-end encryption
  • TLS
    • Nettle is the crypto library underneath GnuTLS
    • BoringSSL from Google
    • Other libraries are coming!
  • IPsec
    • OpenIKED — IKEv2 daemon which supports non-standard Curve25519
  • ZRTP
  • Other
    • Signal Protocol — encrypted messaging protocol derivative of OTR Messaging
    • Pond — forward secure, asynchronous messaging for the discerning project in stasis
    • ZeroTier — Create flat virtual Ethernet networks of almost unlimited size
    • telehash — encrypted mesh protocol
    • Olm — A Cryptographic Ratchet
    • bubblestorm — P2P group organization protocol
    • Apple AirPlay — stream content to HDTV/speakers

Networks

  • Tor — The Onion Router anonymity network
  • GNUnet — a framework for secure peer-to-peer networking that does not use any centralized or otherwise trusted services
  • URC — an IRC style, private, security aware, open source project
  • Serval — Mesh telecommunications
  • SAFE — A new Secure way to access a world of existing apps where the security of your data is put above all else
  • Stellar (Payment Network) — low-cost, real-time transactions on a distributed ledger
  • cjdns — encrypted ipv6 mesh networking
    • Plus the Enigmabox — a Hardware cjdns router

Operating Systems

  • OpenBSD — used in OpenSSH, OpenIKED, and in CVS over SSH
  • Apple iOS — the operating system used in the iPhone, iPad, and iPod Touch
  • Android — ships with Chrome, which supports X25519 for TLS and QUIC
  • All operating systems that ship with OpenSSH 6.5+ from the OpenBSD Project

Hardware

  • SC4 HSM — a fully-open USB2 HSM (hardware security module)

Software

  • DNS
  • Web browsers & clients
    • Google Chrome — for TLS and QUIC
    • Iridium — a browser securing your privacy (supports X25519 for TLS and QUIC)
    • Opera
    • VapidSSL — a TLS 1.2 client derived from BoringSSL
  • Web Servers
    • Caddy — Caddy 0.9+ supports QUIC
    • All webservers built with OpenSSL 1.1.0+
  • CurveCP related
    • CurveProtect — securing major protocols with CurveCP. Also supports DNSCurve.
    • qremote — an experimental drop-in replacement for qmail’s qmail-remote with CurveCP support
    • curvevpn — based on CurveCP
    • curvetun — a lightweight curve25519-based IP tunnel
    • spiral-swarm — easy local file transfer with curvecp [ author recommends another project ]
    • QuickTun — “probably the simplest VPN tunnel software ever”
    • jeremywohl-curvecp — “A Go CurveCP implementation I was sandboxing; non-functional.”
    • curvecp — CurveCP programs, linked with TweetNaCl and built statically with Musl libc
    • curvecp.go — Go implementation of the CurveCP protocol
    • curvecp — Automatically exported from code.google.com/p/curvecp
    • urcd — the most private, secure, open source, “Internet Relay Chat” style chat network

 

Introducing python-ed25519

Ed25519 is an implementation of Schnorr signatures over a particular elliptic curve (Curve25519) that enables very high-speed operations. It also has a few nice features to make the algorithm safer and easier to use.

I’ve published some MIT-licensed Python bindings to djb++’s portable C implementation of this signature scheme. They’re available here:

https://github.com/warner/python-ed25519
or easy_install ed25519

 

Some Highlights:

  • signing keys and verifying keys are both just 32 bytes

  • signatures are 64 bytes

  • key generation and signing each take about 2ms on my 2010 MacBookPro

  • signature verification takes about 6ms

  • 128-bit security level, comparable to AES-128, SHA256, and 3072-bit RSA

  • No entropy needed during signing (signatures are deterministic)

There are amd64-specific assembly versions that run even faster, in just a few hundred microseconds, and for bulk operations you can do batch verification faster than one-at-a-time verification. So you can perform thousands of operations per second with this algorithm (and hundreds with this particular implementation).
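The properties listed above (32-byte keys, 64-byte signatures, deterministic signing) can be seen directly in a compact pure-Python sketch of the Ed25519 construction. This is the textbook reference form, not the warner bindings or djb’s C code; it is educational only and not constant-time.

```python
import hashlib

q = 2**255 - 19                                        # field prime
l = 2**252 + 27742317777372353535851937790883648493    # group order


def inv(x: int) -> int:
    return pow(x, q - 2, q)


d = -121665 * inv(121666) % q   # twisted Edwards curve constant
I = pow(2, (q - 1) // 4, q)     # a square root of -1 mod q


def xrecover(y: int) -> int:
    """Recover the even x-coordinate for a given y."""
    xx = (y * y - 1) * inv(d * y * y + 1) % q
    x = pow(xx, (q + 3) // 8, q)
    if (x * x - xx) % q:
        x = x * I % q
    return q - x if x % 2 else x


By = 4 * inv(5) % q
B = (xrecover(By), By)          # standard base point


def edwards_add(P, Q):
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2
    return ((x1 * y2 + x2 * y1) * inv(1 + t) % q,
            (y1 * y2 + x1 * x2) * inv(1 - t) % q)


def scalarmult(P, e: int):
    Q = (0, 1)                  # neutral element
    while e:
        if e & 1:
            Q = edwards_add(Q, P)
        P = edwards_add(P, P)
        e >>= 1
    return Q


def enc_point(P) -> bytes:      # 32 bytes: y with the sign of x in the top bit
    x, y = P
    return (y | ((x & 1) << 255)).to_bytes(32, "little")


def dec_point(s: bytes):
    y = int.from_bytes(s, "little") & ((1 << 255) - 1)
    x = xrecover(y)
    if (x & 1) != (s[31] >> 7):
        x = q - x
    return (x, y)


def H(m: bytes) -> bytes:
    return hashlib.sha512(m).digest()


def secret_expand(sk: bytes):
    h = H(sk)
    a = int.from_bytes(h[:32], "little")
    a = (a & ((1 << 254) - 8)) | (1 << 254)   # clamp the scalar
    return a, h[32:]


def publickey(sk: bytes) -> bytes:
    a, _ = secret_expand(sk)
    return enc_point(scalarmult(B, a))


def sign(sk: bytes, msg: bytes) -> bytes:
    a, prefix = secret_expand(sk)
    A = enc_point(scalarmult(B, a))
    r = int.from_bytes(H(prefix + msg), "little") % l   # deterministic nonce
    R = enc_point(scalarmult(B, r))
    k = int.from_bytes(H(R + A + msg), "little") % l
    return R + ((r + k * a) % l).to_bytes(32, "little")


def verify(pk: bytes, msg: bytes, sig: bytes) -> bool:
    R, s = dec_point(sig[:32]), int.from_bytes(sig[32:], "little")
    k = int.from_bytes(H(sig[:32] + pk + msg), "little") % l
    return scalarmult(B, s) == edwards_add(R, scalarmult(dec_point(pk), k))
```

Note that the nonce r is derived by hashing rather than drawn from an RNG, which is exactly why no entropy is needed during signing and why signatures are deterministic.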

It’s very exciting to finally have short+fast signatures (and also, through Curve25519, key-agreement and encryption): it opens up a lot of new possibilities. When public-key encryption was first invented, keys took so long to generate that folks assumed that each human would have just one: all sorts of mental baggage was built up around this restriction (ideas like never sharing signing keys, keys representing people, and the need to distribute keys separately from fingerprints). When you can easily generate a new key for each message or object or operation, we can let go of some of those psychological fetters and build something new.

(Note that “Curve25519” uses the same basic curve equation, but only provides Diffie-Hellman key agreement [and, by extension, public-key encryption]. It can’t be used to create signatures that can be verified by third parties: for that you need Ed25519. A portable Curve25519 implementation can be found in curve25519-donna, which includes a Python binding that I wrote too.)

Reference:

https://ianix.com/pub/curve25519-deployment.html

https://blog.mozilla.org/warner/2011/11/21/introducing-python-ed25519/

Best Crypto – On the Impending Crypto Monoculture

On the Impending Crypto Monoculture

A number of IETF standards groups are currently in the process of applying the second-system effect to redesigning their crypto protocols. A major feature of these changes includes the dropping of traditional encryption algorithms and mechanisms like RSA, DH, ECDH/ECDSA, SHA-2, and AES, for a completely different set of mechanisms, including Curve25519 (designed by Dan Bernstein et al), EdDSA (Bernstein and colleagues), Poly1305 (Bernstein again) and ChaCha20 (by, you guessed it, Bernstein).

What’s more, the reference implementations of these algorithms also come from Dan Bernstein (again with help from others), leading to a never-before-seen crypto monoculture in which it’s possible that the entire algorithm suite used by a security protocol, and the entire implementation of that suite, all originate from one person. How on earth did it come to this?

The Underlying Problem

It would be easy to dismiss the wholesale adoption of Bernstein algorithms and code as rampant fanboyism, and indeed there is some fanboyism present. An example of this is the interpretation of the data formats to use as “whatever Dan’s code does” rather than the form specified in widely-adopted standards like X9.62 (“Additional Elliptic Curves (Curve25519 etc) for TLS ECDH key agreement”, TLS WG discussion), something that hasn’t been seen since the C language was defined as “whatever the pcc compiler accepts as input”.

The underlying problem, though, is far more complex. In adopting the Bernstein algorithm suite and its implementation, implementers have rejected both the highly brittle and failure-prone current algorithms and mechanisms and their equally brittle and failure-prone implementations.

Consider the simple case of authenticated encryption as used in the major Internet security protocols TLS, SSH, PGP, and S/MIME (the remaining protocol would be IPsec, but I’ve never written an IPsec implementation so I don’t have sufficient hands-on experience with it to comment on it in practice). S/MIME has an authenticated-encryption mode (encrypt-then-MAC or EtM) that’s virtually never used or even implemented, PGP has a sort-of integrity-check mode that encrypts a hash of the plaintext in CFB mode, and both TLS and SSH use the endlessly failure-prone MAC-then-encrypt (MtE) mode, with an ever-evolving suite of increasingly creatively-named attacks stretching back 15 years or more (TLS recently adopted, after a terrific struggle on their mailing list, an option to use EtM, but support in some major implementations is still lagging). What are the (standardised) alternatives? Looking through a recent paper from Real World Crypto (“The Evolution of Authenticated Encryption”, Phil Rogaway), we see the three options GCM, CCM, and OCB.
The GCM slide provides a list of pros and cons to using GCM, none of which seem like a terribly big deal, but misses out the single biggest, indeed killer failure of the whole mode, the fact that if you for some reason fail to increment the counter, you’re sending what’s effectively plaintext (it’s recoverable with a simple XOR). It’s an incredibly brittle mode, the equivalent of the historically frighteningly misuse-prone RC4, and one I won’t touch with a barge pole because you’re one single machine instruction away from a catastrophic failure of the whole cryptosystem, or one single IV reuse away from the same. This isn’t just theoretical, it actually happened to Colin Percival, a very experienced crypto developer, in his backup program tarsnap. You can’t even salvage just the authentication from it, that fails as well with a single IV reuse (“Authentication Failures in NIST version of GCM”, Antoine Joux).

Compare this with old-fashioned CBC+HMAC (applied in the correct EtM manner), in which you can arbitrarily misuse the IV (for example you can forget to apply it completely) and the worst that can happen is that you drop back to ECB mode, which isn’t perfect but still a long way from the total failure that you get with GCM. Similarly, HMAC doesn’t fail completely due to a minor problem with the IV.

Then there’s CCM, which is two-pass and therefore an instant fail for streaming implementations, which is all of the protocols mentioned earlier (since CCM was designed for use in 802.11 which has fixed maximum-size packets this isn’t a failure of the mode itself, but does severely limit its applicability). The remaining mode is OCB, which I’d consider the best AEAD mode out there (it shares CBC’s graceful-degradation property in which reuse or misuse of the IV doesn’t lead to a total loss of security, only the authentication property breaks but not the confidentiality).
Unfortunately it’s patented, and even though there are fairly broad exceptions allowing it to be used in many situations, the legal minefield that ensues makes it untouchable for most potential users. For example, does the prohibition on military use cover the situation where an open-source crypto package is used in a vendor library that’s used in a medical insurance app that’s used by the US Navy, or where banking transactions protected by TLS may include ones of a military nature? (Both of these are actual examples that affected decisions not to use OCB.) Since no-one wants to call in lawyers every time a situation like this comes up, and indeed can’t call in lawyers when the crypto is several levels away in the service stack, OCB won’t be used even though it may be the best AEAD mode out there.

(The background behind this problem can be found in Phil Rogaway’s excellent essay “The Moral Character of Cryptographic Work”, which discusses aligning crypto work with principles like the Buddhist concept of right livelihood, applying it in an ethical manner. Unfortunately, in the same way that the current misguided attempts by politicians to limit mostly non-existent use of crypto by terrorists and other equestrians only affect legitimate users (the few terrorists who may actually bother with encryption won’t care), so the restriction of OCB, however well-intentioned, has the effect that a beautiful AEAD mode that should be used everywhere is instead used almost nowhere.)

The implementations of the algorithms aren’t much better. Alongside brittle, failure-prone crypto modes and mechanisms, we also have brittle, failure-prone implementations. The most notorious of these is OpenSSL, which powers a significant part of the world’s crypto infrastructure not only directly (as a TLS/SSL implementation) but also indirectly, when it’s used as a component of other applications like OpenSSH.
In fact one of the reasons given for OpenSSH’s adoption of the chacha20-poly1305 crypto mechanisms (alongside Curve25519 and others) was that it finally allowed them to remove the last vestiges of OpenSSL from their code.

The Reason for the Monoculture

Anyone who works with crypto on the Internet has had to endure 15-20 years of constant breakage of the crypto they use, both of the algorithms and mechanisms and of the implementations. It’s not even possible to give references for this because the list of breakage is so long and extensive that it would take pages and pages just to enumerate it all.

Take for example an organisation like Google. Every single time that there’s been some break in a crypto mechanism, Google gets hit. Again and again, year in, year out. So when they look to moving to ChaCha20 and Poly1305, it’s not Bernstein fanboyism, it’s an attempt to dig themselves out of the current hole where they get hit with a new attack every couple of months, and the breakage just keeps recurring, endlessly.

What implementers are looking for is what Bernstein has termed boring crypto, “crypto that simply works, solidly resists attacks, never needs any upgrades” (“Boring crypto”, Dan Bernstein). Bernstein and colleagues offer a silver bullet, something that appears better than anything else that’s out there at the moment. In this they have no real competition. There’s no AEAD mode that’s usable, the ECC algorithms and parameters that we’re supposed to use are both tainted due to NSA involvement and riddled with side-channels (the Bernstein algorithms and mechanisms have been specifically designed to deal with both of these issues), and so on.

Consider being lost in an endless desert. If you see an oasis in the distance, you head towards it even if the water is brackish and has camel dung floating in it.
Bernstein et al are the oasis (or perhaps the mirage of an oasis), in an endless desert of cryptosystems and implementations of cryptosystems that keep breaking. So the (pending) Bernstein monoculture isn’t necessarily a vote for Dan, it’s more a vote against everything else.

Acknowledgements

This essay came about as the result of a discussion at AsiaCrypt 2015, and was then developed with significant input from Lucky Green. Prior to publication, further input was provided by some of the people whose work is mentioned in it.
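The GCM counter-reuse failure called out in the essay above is easy to demonstrate: GCM’s CTR core, like any stream mode, reduces to XOR with a keystream, so repeating the keystream lets an attacker XOR two ciphertexts together and cancel the key entirely. The sketch below uses a toy hash-derived keystream as a stand-in for AES-CTR (an illustrative assumption, not GCM itself):

```python
import hashlib


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream (stand-in for AES-CTR): hash key||nonce||counter blocks."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]


def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


key, nonce = b"k" * 32, b"n" * 12
p1 = b"attack at dawn!!"
p2 = b"retreat at dusk!"
# The sender fails to advance the nonce/counter, so both messages are
# encrypted under the same keystream.
c1 = xor(p1, keystream(key, nonce, len(p1)))
c2 = xor(p2, keystream(key, nonce, len(p2)))
# The keystream cancels out: c1 XOR c2 == p1 XOR p2, so knowing (or
# guessing) one plaintext reveals the other with a simple XOR.
recovered_p2 = xor(xor(c1, c2), p1)
```

One repeated counter, and confidentiality is gone without the attacker ever touching the key; this is the brittleness the essay contrasts with CBC+HMAC’s graceful degradation.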


Reference:

https://lwn.net/Articles/681616/

YOUR SMART METER IS VERY SECURE (AGAINST YOU) AND VERY INSECURE (AGAINST HACKERS)

In On Smart Cities, Smart Energy, And Dumb Security — Netanel Rubin’s talk at this year’s Chaos Communications Congress — Rubin presents his findings on the failings in the security of commonly deployed smart meters.

It’s not pretty.

The meters are designed to treat their owners as attackers: you are your smart meter’s adversary, because if you could control it, you could use it to defraud the power company about your electricity usage. As a result, the physical security of smart meters is very good.

But the corollary of this adversarial relationship is that your meter’s networked insecurities are, by design, impossible for you to remedy or override. If an attacker gains control of your meter, they can jack up your bills, shut down your power (compromising both your physical safety during periods of extreme heat or cold, and your network security by powering down devices like burglar alarms and cameras), and spy on your electricity usage. That fantastic physical security means you can’t readily reprogram your meter to ignore remote instructions that appear to come from a privileged user at the power company. If you could override the power company’s instructions, the power company would be vulnerable to your shenanigans, and since power companies are the primary customers for smart meters, the meters are designed to protect them at your expense.


This would be a lot less worrisome if the network security of smart meters was perfect (though you’d still be vulnerable to unscrupulous power company employees and repressive government orders — imagine how Turkey’s government could use this power against its enemies list in its current purge), but all security is imperfect, and in the case of smart meters, “imperfect” is an awfully charitable characterisation.

The network security model of smart meters starts from the inherently flawed Zigbee protocol, long understood to be difficult to secure, and goes downhill from there, with halfhearted and sloppy implementations of Zigbee’s second-rate security. Smart meters rely on the insecure GSM protocol, incorporate hardcoded administrative passwords, and use keys derived from six-character device names.


The Guardian’s Alex Hern asked the UK Department for Business, Energy and Industrial Strategy for their response to this, and they said: “Robust security controls are in place across the end to end smart metering system and all devices must be independently assessed by an expert security organisation, irrespective of their country of origin.”

Translation: we will do nothing about this until it is too late.

As bad as all this seems, it’s actually worse. Rubin is almost certainly not the first person to discover these vulnerabilities, but as we learned during the US Copyright Office’s 2015 proceedings on the DMCA, security researchers who uncover these security bugs are routinely silenced by their in-house counsel, because laws like Section 1201 of the DMCA — and EU laws that implement Article 6 of the EUCD — allow companies to sue (and even jail) anyone who reveals a flaw in their digital locks.

“These security problems are not going to just go away,” Rubin said. “On the contrary, we are going to see a sharp increase in hacking attempts. Yet most utilities are not even monitoring their network, let alone the smart meters. Utilities have to understand that with great power comes great responsibility.”

Smart meters come with benefits, allowing utilities to more efficiently allocate energy production, and enabling micro-generation that can boost the uptake of renewable energy. For those reasons and more, the European Union has a goal of replacing 80% of meters with smart meters by 2020.

Reference:

http://www.blacklistednews.com/Your_smart_meter_is_very_secure_%28against_you%29_and_very_insecure_%28against_hackers%29/56105/0/38/38/Y/M.html

BEWARE – From today, the Government is RECORDING everywhere you click online

http://www.express.co.uk/life-style/science-technology/748494/uk-government-snoopers-charter-recording-online-web-phone-records

December 30th marks the date that the Investigatory Powers Act, also known as the ‘Snooper’s Charter’, officially comes into force, allowing the government to collect data on anyone.

joke-gchq

The new legislation, described as “world-leading” by home secretary Amber Rudd, will primarily be used to carry out bulk email surveillance, as authorities look to monitor communications between suspects.

However it could also be used to monitor other personal information, including phone records and web browsing history, with web and phone companies required to store this information for 12 months.

The companies would also need to be able to provide police, security services and official agencies with access to this data whenever required.

Law enforcement agencies would also be given powers to hack into the computers and mobile devices of potential suspects.

However, the life of the new Act could be short-lived following a ruling by the European Court of Justice (ECJ) that “general and indiscriminate retention of data” is against EU law, an embarrassing retort to Prime Minister Theresa May.

The court did admit that combatting crime was a legitimate use for the information, but the laws could now be open to change or even withdrawal if there is enough opposition.


UDP vs. TCP


We have a decision to make here: do we use TCP sockets or UDP sockets?

Let’s look at the properties of each:

TCP:

  • Connection based
  • Guaranteed reliable and ordered
  • Automatically breaks up your data into packets for you
  • Makes sure it doesn’t send data too fast for the internet connection to handle (flow control)
  • Easy to use, you just read and write data like it’s a file

UDP:

  • No concept of connection, you have to code this yourself
  • No guarantee of reliability or ordering of packets, they may arrive out of order, be duplicated, or not arrive at all!
  • You have to manually break your data up into packets and send them
  • You have to make sure you don’t send data too fast for your internet connection to handle
  • If a packet is lost, you need to devise some way to detect this, and resend that data if necessary
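
The contrast above shows up directly in the sockets API. Here is a minimal Python sketch of how each socket type is created and used; the loopback address and port number are arbitrary placeholders, not anything from the article:

```python
import socket

# TCP: connection-based stream. You connect once, then read and write
# bytes as if the socket were a file; the OS handles packetisation,
# ordering, and retransmission for you.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect(("127.0.0.1", 9999))   # would require a listening peer
tcp.close()

# UDP: no connection. Each sendto() is one datagram that may arrive
# out of order, duplicated, or not at all -- handling that is on you.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = udp.sendto(b"player input", ("127.0.0.1", 9999))  # fire and forget
udp.close()
```

Note that the UDP send succeeds even with nobody listening on the other end: the datagram is simply dropped, which is exactly the unreliability you have to design around.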

The decision seems pretty clear then: TCP does everything we want and it’s super easy to use, while UDP is a huge pain in the ass where we have to code everything ourselves from scratch. So obviously we just use TCP, right?

Wrong.

Using TCP is the worst possible mistake you can make when developing a networked action game like an FPS! To understand why, you need to see what TCP is actually doing above IP to make everything look so simple!

How TCP really works

TCP and UDP are both built on top of IP, but they are radically different. UDP behaves very much like the IP protocol underneath it, while TCP abstracts everything so it looks like you are reading and writing to a file, hiding all complexities of packets and unreliability from you.

So how does it do this?

Firstly, TCP is a stream protocol, so you just write bytes to a stream, and TCP makes sure that they get across to the other side. Since IP is built on packets, and TCP is built on top of IP, TCP must therefore break your stream of data up into packets. So, some internal TCP code queues up the data you send, then when enough data is pending the queue, it sends a packet to the other machine.

This can be a problem for multiplayer games if you are sending very small packets. What can happen here is that TCP may decide that it’s not going to send data until you have buffered up enough to make a reasonably sized packet (say more than 100 bytes or so). This is a problem because you want your client player input to get to the server as quickly as possible; if it is delayed or “clumped up” like TCP can do with small packets, the client’s experience of the multiplayer game will be very poor. Game network updates will arrive late and infrequently, instead of on time and frequently like we want.

TCP has an option you can set that fixes this behavior called TCP_NODELAY. This option instructs TCP not to wait around until enough data is queued up, but to send whatever data you write to it immediately. This is typically referred to as disabling Nagle’s algorithm.
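
In the sockets API this is a one-line option. A minimal Python sketch:

```python
import socket

# By default TCP may buffer small writes (Nagle's algorithm) and
# coalesce them into larger packets. Setting TCP_NODELAY tells the
# kernel to push each write out to the network immediately instead.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Read the option back to confirm it took effect.
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
```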

Unfortunately, even if you set this option TCP still has serious problems for multiplayer games.

It all stems from how TCP handles lost and out of order packets, to present you with the “illusion” of a reliable, ordered stream of data.

How TCP implements reliability

Fundamentally TCP breaks down a stream of data into packets, sends these packets over unreliable IP, then takes the packets received on the other side and reconstructs the stream.

But what happens when a packet is lost? What happens when packets arrive out of order or are duplicated?

Without going too far into the details of how TCP works, because it’s super complicated (please refer to TCP/IP Illustrated), in essence TCP sends out a packet, waits a while until it detects that the packet was lost because it didn’t receive an ack (acknowledgement) for it, then resends the lost packet to the other machine. Duplicate packets are discarded on the receiver side, and out-of-order packets are resequenced, so everything is reliable and in order.

The problem is that if we were to attempt to synchronize our game data using TCP, whenever a packet is dropped the stream has to stop and wait for that data to be resent. Yes, even if more recent data arrives, that new data gets put in a queue, and you cannot access it until the lost packet has been retransmitted. How long does it take to resend the packet? Well, it is going to take at least round-trip latency for TCP to work out that the data needs to be resent, though commonly it takes 2*RTT, plus another one-way trip from the sender to the receiver for the resent packet to get there. So if you have a 125ms ping, you will be waiting roughly 1/5th of a second for the packet data to be resent at best, and in worst-case conditions you could be waiting up to half a second or more (consider what happens if the attempt to resend the packet also fails to get through). What happens if TCP decides the packet loss indicates network congestion and backs off? Yes, it does this. Fun times!
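
The waiting-time arithmetic above can be sketched directly, assuming the 125ms ping from the example:

```python
# Back-of-envelope resend delay for TCP, per the paragraph above.
rtt = 0.125  # 125 ms round-trip time (the example "ping")

# Best case: the sender notices the missing ack after one RTT, then
# the resent packet needs half an RTT to travel one way.
best_case = rtt + rtt / 2          # 0.1875 s, roughly 1/5th of a second

# More commonly, detection takes around 2*RTT before the resend goes out.
common_case = 2 * rtt + rtt / 2    # 0.3125 s, and that's before any
                                   # congestion back-off kicks in
```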

Why you should never use TCP to network time critical data

The problem with using TCP for realtime games like FPS is that unlike web browsers, or email or most other applications, these multiplayer games have a real time requirement on packet delivery. For many parts of your game, for example player input and character positions, it really doesn’t matter what happened a second ago, you only care about the most recent data. TCP was simply not designed with this in mind.

Consider a very simple example of a multiplayer game, some sort of action game like a shooter. You want to network this in a very simple way. Every frame you send the input from the client to the server (e.g. keypresses, mouse input, controller input), and each frame the server processes the input from each player, updates the simulation, then sends the current position of game objects back to the client for rendering.
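
A sketch of the client side of that loop in Python, with a hypothetical input format; the packet layout, port, and button bits are my own illustration, not from the article:

```python
import socket
import struct

# Hypothetical per-frame client input packet: a sequence number plus a
# packed button bitmask, sent as one small UDP datagram every frame.
def pack_input(sequence, buttons):
    # Network byte order: uint32 sequence number, uint8 button bitmask.
    return struct.pack("!IB", sequence, buttons)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server = ("127.0.0.1", 30000)  # placeholder server address

for frame in range(3):
    payload = pack_input(frame, 0b101)  # e.g. "forward" and "shoot" bits
    sock.sendto(payload, server)        # one 5-byte datagram per frame

sock.close()
```

Each datagram is tiny and self-contained, so losing one costs you nothing: the next frame's input supersedes it anyway.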

So in our simple multiplayer game, whenever a packet is lost, everything has to stop and wait for that packet to be resent. On the client, game objects stop receiving updates so they appear to be standing still, and on the server, input stops getting through from the client, so the players cannot move or shoot. When the resent packet finally arrives, you receive this stale, out-of-date information that you don’t even care about! Plus, there are packets backed up in the queue waiting for the resend, which all arrive at the same time, so you have to process all of these packets in one frame. Everything is clumped up!

Unfortunately, there is nothing you can do to fix this behavior with TCP, nor would you want to; it is simply the fundamental nature of the protocol! This is just what it takes to make the unreliable, packet-based internet look like a reliable ordered stream.

The thing is, we don’t want a reliable ordered stream.

We want our data to get as quickly as possible from client to server without having to wait for lost data to be resent.

This is why you should never use TCP for networking time-critical data!

Wait? Why can’t I use both UDP and TCP?

For realtime game data like player input and state, only the most recent data is relevant, but for other types of data, say perhaps a sequence of commands sent from one machine to another, reliability and ordering can be very important.

The temptation then is to use UDP for player input and state, and TCP for the reliable ordered data. If you’re sharp you’ve probably even worked out that you may have multiple “streams” of reliable ordered commands, maybe one about level loading, and another about AI. Perhaps you think to yourself, “Well, I really wouldn’t want AI commands to stall out if a packet containing a level-loading command is lost – they are completely unrelated!” You are right, so you may be tempted to create one TCP socket for each stream of commands.

On the surface, this seems like a great idea. The problem is that since TCP and UDP are both built on top of IP, the underlying packets sent by each protocol will affect each other. Exactly how they affect each other is quite complicated and relates to how TCP performs reliability and flow control, but fundamentally you should remember that TCP tends to induce packet loss in UDP packets. For more information, read this paper on the subject.

Also, it’s pretty complicated to mix UDP and TCP. If you mix UDP and TCP you lose a certain amount of control. Maybe you can implement reliability in a more efficient way than TCP does, better suited to your needs? Even if you need reliable-ordered data, it’s possible, provided that data is small relative to the available bandwidth, to get that data across faster and more reliably than it would if you sent it over TCP. Plus, if you have to do NAT to enable home internet connections to talk to each other, having to do this NAT once for UDP and once for TCP (not even sure if this is possible…) is kind of painful.

Conclusion

My recommendation then is not only that you use UDP, but that you only use UDP for your game protocol. Don’t mix TCP and UDP, instead learn how to implement the specific pieces of TCP that you wish to use inside your own custom UDP based protocol.
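
As a sketch of what “implementing the pieces of TCP you need” can look like, here is a minimal sequence-number scheme over raw datagrams: the receiver keeps only the newest state update and drops stale or duplicated ones. This is illustrative only; a real protocol also needs acks, resends for any reliable channel, and sequence-number wraparound handling.

```python
import struct

HEADER = struct.Struct("!I")  # uint32 sequence number, network byte order

def wrap(sequence, payload):
    """Prefix a payload with its sequence number."""
    return HEADER.pack(sequence) + payload

class Receiver:
    """Keeps only the newest update; drops stale or duplicate datagrams."""

    def __init__(self):
        self.latest = -1

    def accept(self, datagram):
        # Return the payload if it is newer than anything seen so far,
        # else None (stale or duplicated datagram).
        seq = HEADER.unpack_from(datagram)[0]
        if seq <= self.latest:
            return None
        self.latest = seq
        return datagram[HEADER.size:]

# Usage: out-of-order arrival means the old update is simply discarded.
rx = Receiver()
assert rx.accept(wrap(0, b"state0")) == b"state0"
assert rx.accept(wrap(2, b"state2")) == b"state2"
assert rx.accept(wrap(1, b"state1")) is None  # stale, dropped
```

Unlike TCP, nothing here ever blocks waiting for a retransmit: old data is thrown away, which is exactly what you want for time-critical game state.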


Reference:

http://gafferongames.com/networking-for-game-programmers/udp-vs-tcp/

https://www.isoc.org/INET97/proceedings/F3/F3_1.HTM

Keyloggers: personal opinion

This evening I decided to have a mess about with a keylogger. The application freaked me out, to the point that within 30 minutes I had to uninstall it as too disturbing. I know that LOVEINT is where keyloggers are used by partners who suspect their girlfriend is cheating or having an affair. But seriously, this is sick stuff.

Triggers your Webcam to take pics.

You can select the webcam to take a photo every 2-3 minutes.  The light does come on when the webcam is operational, but with a little tinkering, I’m certain that can be prevented.

keylogger-cropped-microphone-logging-webcam

Emails

It records the passwords and content of emails.  It screenshots the email client so that you can see the inbox, just as your partner would.

Screenshots

You can set the keylogger to take screenshots every minute.  Sick.

keylogger-cropped-screenshot-settings

Microphone recording & Chat recording

You can set the microphone to record all audio conversations.  It will record both sides of the chat session.

Task Manager

The application does not show up in task manager.

Off site logs

The logs are sorted into screenshots, audio, visual and text.  The logs can be transferred via a LAN connection, to a USB or even to an email address.

Secret Uninstall on a set date.

You can set the keylogger to silently and secretly uninstall on a specific date.

keylogger-cropped-silent-unistall-of-keylogger-on-a-set-date

I suppose this is so that the keylogger is active for the time that your girlfriend is on holiday, so that you can monitor whether she has a holiday romance. You are playing with fire if you put a keylogger on her laptop and then sneakily uninstall it on the day she comes home.

I honestly cannot suggest that you use a keylogger, whether as the jealous partner or as the concerned parent of teenage children, as the breach of trust is so great.

There really is no way to forgive someone who uses a keylogger on their partner or child.

 
