
It’s not paranoia, your phone really IS listening to EVERYTHING you say and using your private conversations to target ads, security expert warns

Many people share a similar story: they were chatting about a niche product or holiday destination with friends, and soon afterwards an advertisement on the same theme appeared in their social media apps.

According to one researcher, these oddly pertinent ads aren’t a coincidence: your phone really is listening to what you say.

It’s not known exactly what triggers the technology, but the researcher claims the technique is completely legal and is even covered in the terms of your mobile apps’ user agreements.

Smartphones with voice assistants are constantly listening out for their designated wake word or phrase, with everything else supposedly discarded.

However, one researcher claims that keywords and phrases picked up by the gadget can be accessed by third-party apps, like Instagram and Twitter, when the right permissions are enabled.

This means when you chat about needing new jeans, or plans for a holiday in Senegal, apps can plaster your timeline with adverts for clothes and deals on flights.

Dr Peter Henway, a senior security consultant for cybersecurity firm Asterisk, told Vice: ‘From time to time, snippets of audio do go back to [apps like Facebook’s] servers but there’s no official understanding what the triggers for that are.

‘Whether it’s timing or location-based or usage of certain functions, [apps] are certainly pulling those microphone permissions and using those periodically.

All the internals of the applications send this data in encrypted form, so it’s very difficult to define the exact trigger.’

He said companies like Facebook and Instagram could have a range of thousands of triggers to kickstart the process of mining your conversations for advertising opportunities.

For example, a casual chat about cat food or a certain snack may be enough to activate the technology.

‘Seeing as Google are open about it, I would personally assume the other companies are doing the same,’ Dr Henway said.

‘Really, there’s no reason they wouldn’t be. It makes good sense from a marketing standpoint and their end-user agreements and the law both allow it, so I would assume they’re doing it, but there’s no way to be sure.’



VPNFilter router malware is a lot worse than everyone thought

Asus, D-Link, Huawei, Ubiquiti, UPVEL, and ZTE: these are the vendors newly named by Cisco’s Talos Intelligence whose products are being exploited by the VPNFilter malware.

As well as the expanded list of impacted devices, Talos warned that VPNFilter now attacks endpoints behind the firewall, and sports a “poison pill” to brick an infected network device if necessary.

When it was discovered last month, VPNFilter had hijacked half a million devices – but only SOHO devices from Linksys, MikroTik, Netgear, TP-Link, and QNAP storage kit were commandeered.

As well as the six new vendors added to the list, Talos said this week more devices from Linksys, MikroTik, Netgear, and TP-Link are affected. Talos noted that, to date, all the vulnerable units are consumer-grade or SOHO-grade.


How does it work?

The software nasty’s masterminds are using compromised SOHO routers to inject malicious content into web traffic flowing through the devices. This hijacking is carried out by a third-stage module Talos this week identified within the malware.

Called ssler, the module intercepts insecure HTTP traffic destined for port 80 and injects JavaScript code to spy on or hijack browser sessions. Basically, if you visit a website through an infected router or gateway, there is a chance sensitive details on the page – or information entered – will be siphoned off by VPNFilter to its masters.

The researchers believe the criminals controlling VPNFilter are profiling endpoints to pick out the best targets, and will swipe confidential information in transit where possible. The code snoops on the destination IP address, to help it identify valuable traffic such as a connection to a bank, as well as visited domain names. It also attempts to downgrade secure HTTPS connections to unencrypted forms, so that login passwords and the like can be obtained.

Talos provides extensive technical detail about other aspects of the module’s operation, so we’ll summarise:

  • The malware’s scripts of commands to carry out are downloaded from VPNFilter’s C&Cs, so it’s customisable;
  • It’s got an SSL stripper to try to force-downgrade user communications to unencrypted form, to help steal credentials. Juniper notes that HSTS forces sites to HTTPS, “but it is enough sometimes to catch the very first request as it may already contain credentials and other POST form elements”;
  • Google, YouTube, Facebook and Twitter are excluded from the SSL stripping;
  • To get around the risk that users’ reconfiguration might stop VPNFilter collecting traffic, the module dumps and recreates its route-sniffing capabilities every four minutes.
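The SSL-stripping step described above can be sketched in a few lines. The following Python reconstruction is illustrative only (the real ssler module works at the packet level inside the router); the function name, sample HTML, and exclusion check are invented for the example:

```python
import re

def strip_ssl(html, exclude=("google", "youtube", "facebook", "twitter")):
    """Rewrite https:// links to http:// so the browser's follow-up
    requests travel unencrypted. Hosts on the exclusion list, which
    Talos says ssler skips, are left untouched."""
    def downgrade(match):
        url = match.group(0)
        host = url.split("//", 1)[1].split("/", 1)[0]
        if any(name in host for name in exclude):
            return url  # leave big sites alone to avoid obvious breakage
        return "http://" + url.split("//", 1)[1]
    return re.sub(r"https://[^\s\"'<>]+", downgrade, html)

page = '<a href="https://bank.example.com/login">Log in</a> <a href="https://www.google.com">Search</a>'
print(strip_ssl(page))
# The bank link is downgraded to http://; the Google link survives intact.
```

The interesting design choice, per Talos, is the exclusion list: downgrading HSTS-protected giants like Google would visibly break pages and tip off the victim, so the malware only strips sites where the downgrade goes unnoticed.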

Sending devices to Lego-land

Another third-stage module performs a self-destruct operation, which is common for malware that seeks to erase its tracks, but Talos said this one can also brick the host.

The dstr module “deletes all files and folders related to its own operation first before deleting the rest of the files on the system, possibly in an attempt to hide its presence during a forensic analysis,” Team Talos said.

The module “clears flash memory by overwriting the bytes of all available /dev/mtdX devices with a 0xFF byte. Finally, the shell command rm -rf /* is executed to delete the remainder of the file system and the device is rebooted.

“At this point, the device will not have any of the files it needs to operate and fail to boot.”

Devices and domains

The table below shows all devices VPNFilter has been identified in so far, with new devices marked by an asterisk.

Vendor Device / Series
Asus RT-AC66U*; RT-N10 series*, RT-N56 series*
D-Link DES-1210-08P*; DIR-300 Series*; DSR-250, 500, and 1000 series*
Huawei HG8245*
Linksys E1200; E1500; E3000*; E3200*; E4200*; RV082*; WRVS4400N
MikroTik CCR1009*; CCR1x series; CRS series*; RB series*; STX5*
Netgear DG834*; DGN series*; FVS318N*; MBRN3000*; R-series; WNR series*; WND series*; UTM50*
QNAP TS251; TS439 Pro; other devices running QTS software
TP-Link R600VPN; TL-WR series*
Ubiquiti NSM2*; PBE M5*
UPVEL Unknown devices

Since the original VPNFilter C&C domain has been seized by the FBI, the malware now uses resources stashed in a number of Photobucket user accounts.


Amazon’s Rekognition Surveillance Tool Will Grant Police Even More Surveillance Power

Amazon is facing pressure from civil liberties groups for the corporation’s role in building the infrastructure which powers government surveillance.

The Electronic Frontier Foundation, the American Civil Liberties Union, Human Rights Watch, the Freedom of the Press Foundation and nearly 40 other organizations have joined together to demand that Amazon cease providing law enforcement access to surveillance technology. The organizations signed onto a letter to Amazon which condemns the company for developing new facial recognition tools that allow real-time surveillance using police body cameras and the ever growing interconnected network of cameras in most major American cities.

“Amazon has been heavily marketing this tool—called “Rekognition”—to law enforcement, and it’s already being used by agencies in Florida and Oregon,” the EFF wrote in a recent blog. “This system affords the government vast and dangerous surveillance powers, and it poses a threat to the privacy and freedom of communities across the country. That includes many of Amazon’s own customers, who represent more than 75 percent of U.S. online consumers.”

The letter also notes that Amazon’s own promotional material states that Rekognition can identify people in real-time by “instantaneously searching databases containing tens of millions of faces.” Amazon offers a “person tracking” feature that it says “makes investigation and monitoring of individuals easy and accurate” for “surveillance applications.” Amazon says Rekognition can be used to identify “all faces in group photos, crowded events, and public places such as airports.”

The letter also warns that local police could use Rekognition to identify political protesters recorded by officer body cameras. In addition, Rekognition can track people even if it can’t see their face, can identify and catalog a person’s gender, what they’re doing, what they’re wearing, and their emotional state. The program can also flag things it considers “unsafe” or “inappropriate.”

Unfortunately, Amazon’s partnership with law enforcement is nothing new. Amazon famously partnered with the CIA by offering cloud storage services through Amazon Web Services (AWS), which allows agencies to store the extremely large video files generated by body cameras and other surveillance cameras. For only $6 to $12 extra a month, law enforcement agencies can add Rekognition to their AWS subscription.


REVEALED: Facebook let SIXTY companies, including Apple and Amazon, have ‘deep access’ to personal data about users and their friends – and the controversial deals are STILL in place

Facebook gave at least 60 device makers access to its users’ information, potentially in conflict with what the company told Congress, a new report has revealed.

Many of the partnerships, with companies such as Apple, Amazon, BlackBerry, Microsoft and Samsung, remain in effect even after Facebook began to quietly unwind them in April, according to a lengthy report in the New York Times.

Under some of the agreements, device makers could access the data of users’ friends, even if they believed that they had barred sharing, the Times reported citing company officials. The latest revelation affects every Facebook user worldwide.

Facing blowback from the Cambridge Analytica data harvesting scandal in March, Facebook vowed that it had put an end to that kind of information sharing, but never revealed that device makers had a special exemption.

However, Facebook blasted back at the Times report, saying the newspaper has misinterpreted the purpose and function of its so-called ‘device-integrated APIs’ – the software that allows hardware companies to bridge into Facebook’s database to offer versions of the app on their operating systems.

What does your Facebook data file hold?

  • Every Messenger message you have sent or received
  • Every Facebook friend you have connected with
  • Every Facebook voice call you have made
  • Every smartphone contact
  • Every text message sent or received
  • A log of phone calls made or received
  • Every file you have sent or received
  • Every time you signed into Facebook, and from where
  • Every sticker and emoji you have ever sent


CBS on Google’s abuse of dominance, and AI Duplex

CBS has challenged Google’s practice of surfacing its own Google+ results rather than the actual Google search engine results. Here’s an example of how the abuse can be stopped.


Google AI, for good measure


Scammers will use the phone bots to scam pensioners and vulnerable people.

What next? Duplex will call to get you a date. Why even turn up to the date... send your digital assistant; they tell better jokes.

An Interesting Pattern in the Prime Numbers: Parallax Compression

UPDATE: Thanks to comments from readers, we have found that the pattern does not exactly match the GCD triangle for some values of the number of cells and rows; this possibly makes it a more interesting finding. Join the discussion in the Telegram group as well – details below.

Early this year a software engineer, Shaun Gilchrist, reached out to me after reading a blog post of mine from many years ago, about my informal search for hidden patterns in the prime numbers.

The Ulam Spiral revealed non-random patterns, but they didn’t quite match up. Both Shaun and I had long felt there was a better way to wrap the primes that would reveal a deeper structure.

Shaun explained that he had developed a new algorithm (he calls it “Parallax Compression”) for wrapping the primes on a plane and visualizing their distribution, inspired by the Ulam Spiral. There is a more robust GitHub version of the code in a Mathematica notebook if you want to explore it yourself. (Note: thanks to Stephen Wolfram for taking a look at the Mathematica code and advising us in January, when we were wondering whether this might break crypto and needed advice; the answer is no, it doesn’t break crypto, but Mathematica is pretty great!)

After his initial discovery, Shaun searched the Web for anyone else who was thinking this way and that led him to my blog post, and to me.

Shaun’s algorithm reveals an interesting non-random, fractal-like pattern in the distribution of primes that, to our knowledge, has never been seen before.

It makes it possible to easily see where there are regions of prime and non-prime numbers, anywhere on the number line, at any level of scale.

When one looks at a visualization of this pattern, it appears reminiscent of runes, Mayan glyphs, tapestries, and hieroglyphics. If you look at it for a moment or two you will see there are several levels of nested geometric shapes within it that appear to have a kind of fractal symmetry:


A cell is colored black if there is at least 1 prime number within it, and red if there are no primes within it.

Here, the width of a cell, n, is 100, so each cell represents 100 integers in the sequence, and the pattern holds for 100 rows.

Initially we found that this pattern matches a known numerical sequence, OEIS A054521 — and it recurs for other even values of n, so it is self-similar at various levels of scale.

For example, if n = 50, then each cell represents 50 integers, and the pattern holds for 50 rows. If n = 200, then each cell represents 200 integers, and the pattern holds for 200 rows.
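The two ingredients being compared are easy to reproduce yourself: the coprimality triangle OEIS A054521 (T(r, c) = 1 when gcd(r, c) = 1), and the "does this cell contain a prime" test that colors the visualization. A minimal Python sketch (the function names are ours, and the exact parallax wrapping lives in the notebook linked above):

```python
from math import gcd, isqrt

def is_prime(n):
    # Simple trial division; fine for exploring small ranges.
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

def cell_has_prime(start, width):
    """A 'black' cell: at least one prime in [start, start + width)."""
    return any(is_prime(k) for k in range(start, start + width))

def coprime_triangle(rows):
    """OEIS A054521: T(r, c) = 1 if gcd(r, c) == 1 else 0, for 1 <= c <= r."""
    return [[1 if gcd(r, c) == 1 else 0 for c in range(1, r + 1)]
            for r in range(1, rows + 1)]

# Render the first few rows of the comparison triangle.
for row in coprime_triangle(6):
    print("".join("#" if t else "." for t in row))
```

Running this prints the familiar triangle (`#`, `#.`, `##.`, `#.#.`, `####.`, `#...#.`); the claim under discussion is how closely the prime-cell grid tracks this coprimality pattern for even n, and where it diverges.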

Here is an animation (Thanks to Ian Rust) that shows the pattern approaching the GCD sequence pattern, as the values of n increase.

However, some readers noted today in the thread on Hacker News that GCD doesn’t hold for all values of n.

For example, for odd values of n we see a different pattern that is also rather interesting. Here is n = 99:



Like the GCD pattern we see for even values of n, the odd valued n pattern also recurs for different sized odd values of n. This means that this pattern is not simply the GCD sequence — there are variances that we don’t understand yet.

This algorithm also reveals sequences (that we call “runs”) of primes and non-primes along various axes that might be useful for predicting prime and non-prime regions.

After Shaun reached out to me with his discovery, we spent many sleepless days and nights collaborating to see if there were even deeper patterns behind this new visualization and eventually we made a little progress finding at least one known sequence that generated the pattern for even values of n, without needing any primality testing. But as noted above, it doesn’t hold for all values of n, and we have not done a formal proof nor have we tested a large set of values of n and compared results.

We’re not exactly sure what this all means yet — it might not mean much — it might just be a pretty visualization — but it’s interesting enough (to us at least) that we decided eventually to make this public so that others could help us explore it further, in case there is something more to this.

Perhaps this is a topographical map of the distribution of the prime numbers? Perhaps this might be useful in number theory, or in some area of science? The self-similarity at various levels of scale, and the fact that it isn’t fully described for all values of n by a known sequence means there may still be more to understand about this.

In general, finding any kind of non-random pattern in the distribution of primes is potentially interesting. Are there connections between this and other research findings, such as this recent article we found on aperiodic order in the primes?

We don’t know yet, but we are curious to find out. We are not mathematicians, but hopefully some mathematicians reading this will take it further than we can.

We hope you enjoy this, and if you make further progress on this, or find anything that may be connected, please let us know. (You can discuss it with us, and others who are interested, on this Telegram group).




NSA encryption plan for ‘internet of things’ rejected by international body

An attempt by the U.S. National Security Agency (NSA) to set two types of encryption as global standards for use on the “internet of things” suffered a major setback on Tuesday, after online security experts from countries including U.S. allies voted against the plan.

A source at an International Organization for Standardization (ISO) meeting of expert delegations in Wuhan, China, told WikiTribune that the U.S. delegation, including NSA officials, refused to provide the standard level of technical information to proceed.

The vote is the latest setback for the NSA’s plan, which was pruned in September after ISO delegates expressed distrust and concerns that the U.S. agency could be promoting encryption technology it knew how to break, rather than the most secure.

The ISO sets agreed standards for a wide range of products, services, and measurements in almost every industry including technology, manufacturing, food, agriculture, and health. The body has been looking into adopting recommended encryption technology to improve security in devices that make up the “internet of things.” These include household items such as smart speakers, fridges, lighting and heating systems, and wearable technology.

The NSA has been pushing for these encryption tools to get a seal of approval from the ISO so they will be approved by the National Institute of Standards and Technology (NIST) and become standard for all U.S. government departments and related companies, said the source.

Agreeing to adopt ‘Simon’ and ‘Speck’ as standard block cipher algorithms would have made these part of the recommended encryption technology for a huge range of products.

The NSA had originally been promoting a broader range of encryption technologies, but during a three-year dispute behind closed doors, delegates from other countries expressed concern over the NSA’s motives. Several cited information leaked by Edward Snowden, which showed the agency had previously planned to manipulate standards and promote technology it could penetrate, as a source of distrust, according to documents seen by Reuters.

Two delegates told WikiTribune that the opposition to adding these algorithms was led by Dr. Tomer Ashur from KU Leuven University, representing the Belgian delegation, and was supported by a large group of countries.

“Many crypto experts both within and outside ISO had concerns about the security of the algorithms,” said Ashur. “The NSA tried to remain as obscure as it could about certain design decisions and parameter choices they have made. As this is out of line with what is perceived as best practices of cipher design, this alarmed some of the delegates, including myself.”

Specific requests for more detailed information were met with obfuscation, said Ashur.

“I can’t speak for the other delegates but I believe it was these concerns together with the adversarial and aggressive behavior of the NSA that eventually led them to support the cancellation of the project,” he said.

Israeli delegate Orr Dunkelman told Reuters he did not trust the U.S. designers following the September meetings.

“There are quite a lot of people in NSA who think their job is to subvert standards,” said Dunkelman. “My job is to secure standards.”

The NSA said Simon and Speck were developed to protect U.S. government equipment without requiring a lot of processing power, and firmly believes they are secure.

The NSA has a history (Atlas Obscura) of trying to create “backdoors” in software so it can access data. Documents leaked by Snowden also showed the NSA has made extensive efforts to break encryption tools, and insert vulnerabilities into encryption systems. The Dual EC, a standardized algorithm championed by the NSA, was withdrawn in 2014 due to wide public criticism.

According to WikiTribune’s source, experts in the delegations have clashed over recent weeks and the NSA has not provided the technical detail on the algorithms that is usual for these processes. The U.S. delegation’s refusal to provide a “convincing design rationale is a main concern for many countries,” the source said.

What are Simon and Speck?

Created by the NSA in 2013, Simon and Speck are families of lightweight block ciphers: cryptographic algorithms tailored for devices with limited resources such as memory and processing power. Though both algorithms are versatile in hardware and software, Simon is optimal in hardware while Speck is optimal in software. Detailed information about the Simon and Speck families is compiled by NSA Cybersecurity in its official GitHub repository.

  • Simon = hardware 
  • Speck = software

In 2014, Simon and Speck were proposed for inclusion (IACR paper) in the ISO standard that specifies the requirements for lightweight cryptography and suitable block ciphers. Published in 2012, this standard already covers two lightweight block ciphers, Present and Clefia. Furthermore, there are two “Proposed Draft Amendments” recorded without any content information. They might concern the proposed NSA block ciphers.

Another relevant standard specifies the security and privacy aspects of Service Level Agreements (SLA) for cloud services with the “cryptography component” as a central part. According to a notice of Prismacloud, this standard was the theme in Wuhan, April 16-20, where the Working Groups of the responsible ISO/IEC JTC 1/SC 27 held their 26th meeting. This meeting is not listed in the ISO meeting calendar.

According to the NSA, the aim of Simon and Speck is to secure applications in constrained, or specialized, environments, largely to prepare for the era of the internet of things. The basic idea is to design algorithms that are flexible and simple enough to be performed just about anywhere.
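To give a sense of that simplicity, here is an illustrative Python sketch of the Speck64/128 variant (32-bit words, 128-bit key, 27 rounds), following the add-rotate-XOR round function and key schedule described in the NSA's design paper. It is a teaching sketch checked against the paper's published test vector, not a vetted cryptographic implementation:

```python
MASK = 0xFFFFFFFF              # Speck64 operates on 32-bit words
ALPHA, BETA, ROUNDS = 8, 3, 27

def ror(x, r): return ((x >> r) | (x << (32 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (32 - r))) & MASK

def speck_round(x, y, k):
    # One ARX round: rotate-add-XOR on x, rotate-XOR on y.
    x = ((ror(x, ALPHA) + y) & MASK) ^ k
    y = rol(y, BETA) ^ x
    return x, y

def speck_round_inv(x, y, k):
    y = ror(x ^ y, BETA)
    x = rol(((x ^ k) - y) & MASK, ALPHA)
    return x, y

def expand_key(key):
    # key = (K3, K2, K1, K0), most significant word first.
    # The round function doubles as the key schedule, keyed by the counter i.
    k = [key[3]]
    l = [key[2], key[1], key[0]]
    for i in range(ROUNDS - 1):
        nl, nk = speck_round(l[i], k[i], i)
        l.append(nl)
        k.append(nk)
    return k

def encrypt(x, y, key):
    for rk in expand_key(key):
        x, y = speck_round(x, y, rk)
    return x, y

def decrypt(x, y, key):
    for rk in reversed(expand_key(key)):
        x, y = speck_round_inv(x, y, rk)
    return x, y

# Test vector from the Simon and Speck design paper (Speck64/128).
key = (0x1B1A1918, 0x13121110, 0x0B0A0908, 0x03020100)
ct = encrypt(0x3B726574, 0x7475432D, key)
print([hex(w) for w in ct])    # paper gives ciphertext 8c6fa548 454e028b
```

Note how little machinery is involved: two rotations, one modular addition, and two XORs per round, which is exactly the "simple enough to run anywhere" property the NSA cites, and also why reviewers wanted a detailed design rationale before standardizing it.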

What is unusual about Simon and Speck is that the NSA took four years to publish the ciphers’ security analysis and a description of the design decisions, both of which are considered mandatory best practice.

Encryption: Flaw in PGP Email Encryption Found – EFF alert

A group of researchers released a paper today that describes a new class of serious vulnerabilities in PGP (including GPG), the most popular email encryption standard. The new paper includes a proof-of-concept exploit that can allow an attacker to use the victim’s own email client to decrypt previously acquired messages and return the decrypted content to the attacker without alerting the victim. The proof of concept is only one implementation of this new type of attack, and variants may follow in the coming days.

Because of the straightforward nature of the proof of concept, the severity of these security vulnerabilities, the range of email clients and plugins affected, and the high level of protection that PGP users need and expect, EFF is advising PGP users to pause in their use of the tool and seek other modes of secure end-to-end communication for now.

Because we are awaiting the security community’s response to the flaws highlighted in the paper, we recommend that for now you uninstall or disable your PGP email plug-in. These steps are intended as a temporary, conservative stopgap until the immediate risk of the exploit has passed and been mitigated against by the wider community. There may be simpler mitigations available soon, as vendors and commentators develop narrower solutions, but this is the safest stance to take for now: sending PGP-encrypted emails to an unpatched client will create adverse ecosystem incentives to open incoming emails, any of which could be maliciously crafted to expose ciphertext to attackers.

While you may not be directly affected, the other participants in your encrypted conversations are likely to be. For this attack, it isn’t important whether the sender or the receiver of the original secret message is targeted. This is because a PGP message is encrypted to both of their keys.

At EFF, we have relied on PGP extensively both internally and to secure much of our external-facing email communications. Because of the severity of the vulnerabilities disclosed today, we are temporarily dialing down our use of PGP for both internal and external email.

Our recommendations may change as new information becomes available, and we will update this post when that happens.

How The Vulnerabilities Work

PGP, which stands for “Pretty Good Privacy,” was first released nearly 27 years ago by Phil Zimmermann. Extraordinarily innovative for the time, PGP transformed the level of privacy protection available for digital communications, and has provided tech-savvy users with the ability to encrypt files and send secure email to people they’ve never met. Its strong security has protected the messages of journalists, whistleblowers, dissidents, and human rights defenders for decades. While PGP is now a privately-owned tool, an open source implementation called GNU Privacy Guard (GPG) has been widely adopted by the security community in a number of contexts, and is described in the OpenPGP Internet standards document.

The paper describes a series of vulnerabilities that all have in common their ability to expose email contents to an attacker when the target opens a maliciously crafted email sent to them by the attacker. In these attacks, the attacker has obtained a copy of an encrypted message, but was unable to decrypt it.

The first attack is a “direct exfiltration” attack that is caused by the details of how mail clients choose to display HTML to the user. The attacker crafts a message that includes the old encrypted message. The new message is constructed in such a way that the mail software displays the entire decrypted message—including the captured ciphertext—as unencrypted text. Then the email client’s HTML parser immediately sends or “exfiltrates” the decrypted message to a server that the attacker controls.

The second attack abuses the underspecification of certain details in the OpenPGP standard to exfiltrate email contents to the attacker by modifying a previously captured ciphertext. Here are some technical details of the vulnerability, in plain-as-possible language:

When you encrypt a message to someone else, it scrambles the information into “ciphertext” such that only the recipient can transform it back into readable “plaintext.” But with some encryption algorithms, an attacker can modify the ciphertext, and the rest of the message will still decrypt back into the correct plaintext. This property is called malleability. This means that they can change the message that you read, even if they can’t read it themselves.
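A toy example makes malleability concrete. The sketch below uses a stand-in XOR stream cipher built from SHA-256, not OpenPGP's actual cipher (though PGP's CFB mode behaves similarly for this purpose); the attacker never learns the key, yet flips ciphertext bits so the message decrypts to something else:

```python
import hashlib

def keystream(key, n):
    """Toy keystream: SHA-256 in counter mode. For illustration only."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = b"secret key the attacker never learns"
plaintext = b"Meet at the old bridge at noon"
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

# The attacker guesses part of the plaintext and XORs in the difference
# between the guessed text and the text they want the victim to read.
tamper = xor(b"old bridge", b"new harbor")
tampered = bytearray(ciphertext)
off = plaintext.index(b"old bridge")
for i, t in enumerate(tamper):
    tampered[off + i] ^= t

recovered = xor(bytes(tampered), keystream(key, len(plaintext)))
print(recovered)  # -> b'Meet at the new harbor at noon'
```

The decryption succeeds without error because nothing in this toy scheme checks integrity, which is exactly the gap the integrity mechanisms discussed next are meant to close.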

To address the problem of malleability, modern encryption algorithms add mechanisms to ensure integrity, or the property that assures the recipient that the message hasn’t been tampered with. But the OpenPGP standard says that it’s ok to send a message that doesn’t come with an integrity check. And worse, even if the message does come with an integrity check, there are known ways to strip off that check. Plus, the standard doesn’t say what to do when the check fails, so some email clients just tell you that the check failed, but show you the message anyway.

The second vulnerability takes advantage of the combination of OpenPGP’s lack of mandatory integrity verification combined with the HTML parsers built into mail software. Without integrity verification in the client, the attacker can modify captured ciphertexts in such a way that as soon as the mail software displays the modified message in decrypted form, the email client’s HTML parser immediately sends or “exfiltrates” the decrypted message to a server that the attacker controls. For proper security, the software should never display the plaintext form of a ciphertext if the integrity check does not check out. Since the OpenPGP standard did not specify what to do if the integrity check does not check out, some software incorrectly displays the message anyway, enabling this attack.

This means that not only can attackers get access to the contents of your encrypted messages the second you open an email, but they can also use these techniques to get access to the contents of any encrypted message that you have ever sent, as long as they have a copy of the ciphertext.

What’s Being Done to Fix this Vulnerability

It’s possible to fix the specific exploits that allow messages to be exfiltrated: namely, do better than the standard says by not rendering messages if their integrity checks don’t check out. Updating the protocol and patching vulnerable software applications would address this specific issue.

Fixing this entirely is going to take time. Some software patches have already begun rolling out, but it will be some time before every user of every affected software is up-to-date, and even longer before the standards are updated. Right now, information security researchers and the coders of OpenPGP-based systems are poring over the research paper to determine the scope of the flaw.

We are in an uncertain state, where it is hard to promise the level of protection users can expect of PGP without giving a fast-changing and increasingly complex set of instructions and warnings. PGP usage was always complicated and error-prone; with this new vulnerability, it is currently almost impossible to give simple, reliable instructions on how to use it with modern email clients.

It is also hard to tell people to move off using PGP in email permanently. There is no other email encryption tool that has the adoption levels, multiple implementations, and open standards support that would allow us to recommend it as a complete replacement for PGP. (S/MIME, the leading alternative, suffers from the same problems and is more vulnerable to the attacks described in the paper.) There are, however, other end-to-end secure messaging tools that provide similar levels of security: for instance, Signal. If you need to communicate securely during this period of uncertainty, we recommend you consider these alternatives.

We Need To Be Better Than Pretty Good

The flaw that the researchers exploited in PGP was known for many years as a theoretical weakness in the standard—one of many initially minor problems with PGP that have grown in significance over its long life.

You can expect a heated debate over the future of PGP, strong encryption, and even the long-term viability of email. Many will use today’s revelations as an opportunity to highlight PGP’s numerous issues with usability and complexity, and demand better. They’re not wrong: our digital world needs a well-supported, independent, rock-solid public key encryption tool now more than ever. Meanwhile, the same targeted populations who really need strong privacy protection will be waiting for the steps they can take to use email securely once again.

We’re taking this latest announcement as a wake-up call to everyone in the infosec and digital rights communities: not to pile on recriminations or criticisms of PGP and its dedicated, tireless, and largely unfunded developers and supporters, but to unite and work together to re-forge what it means to be the best privacy tool for the 21st century. While EFF is dialing down our use of PGP for the time being (and recommend you do so too) we’re going to double-down on supporting independent, strong encryption—whether that comes from a renewed PGP, or from integrating and adapting the new generation of strong encryption tools for general purpose use. We’re also going to keep up our work improving the general security of the email ecosystem with initiatives like STARTTLS Everywhere.

PGP in its current form has served us well, but “pretty good privacy” is no longer enough. We all need to work on really good privacy, right now.

EFF’s recommendations:

- Disable or uninstall PGP email plugins for now.
- Do not decrypt encrypted PGP messages that you receive.
- Instead, use non-email-based messaging platforms, like Signal, for your encrypted messaging needs.
- Use offline tools to decrypt PGP messages you have received in the past.
- Check our Surveillance Self-Defense site for updates regarding client fixes and improved secure messaging systems.
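As a minimal sketch of the "offline tools" recommendation: decrypting with the `gpg` command line, rather than inside an email plugin, means no HTML-rendering mail client can leak the recovered plaintext over the network (the EFAIL exfiltration channel). This assumes GnuPG 2.1+ is installed; the filenames and passphrase below are illustrative, and the symmetric round trip is only a self-contained demo of the command. For a real message you would run something like `gpg --output message.txt --decrypt message.asc` on a machine with networking disabled.

```shell
# Demo round trip: encrypt and then decrypt a file entirely on the
# command line, outside any mail client. Passphrase and filenames
# are placeholders for illustration.
echo "the plaintext" > plain.txt

# Symmetric encryption (no keyring needed for this demo).
gpg --batch --yes --pinentry-mode loopback --passphrase "demo-passphrase" \
    --symmetric --output msg.gpg plain.txt

# Offline decryption: the plaintext lands in a local file, where no
# HTML email renderer can touch it.
gpg --batch --yes --pinentry-mode loopback --passphrase "demo-passphrase" \
    --output recovered.txt --decrypt msg.gpg

cat recovered.txt
```

The `--pinentry-mode loopback` flag lets GnuPG 2.1+ accept `--passphrase` in batch mode without an interactive prompt.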

Encryption Workarounds – May 2018


The widespread use of encryption has triggered a new step in many criminal investigations: The encryption workaround. We define an encryption workaround as any lawful government effort to reveal unencrypted plaintext of a target’s data that has been concealed by encryption. This Article provides an overview of encryption workarounds. It begins with a taxonomy of the different ways investigators might try to bypass encryption schemes. We classify six kinds of workarounds: find the key, guess the key, compel the key, exploit a flaw in the encryption software, access plaintext while the device is in use, and locate another plaintext copy. For each approach, we consider the practical, technological, and legal hurdles raised by its use.

The remainder of this Article develops lessons about encryption workarounds and the broader public debate about encryption in criminal investigations. First, encryption workarounds are inherently probabilistic. None work every time, and none can be categorically ruled out every time. Second, the different resources required for different workarounds will have significant distributional effects on law enforcement. Some techniques are inexpensive and can be used often by many law enforcement agencies; some are sophisticated or expensive and likely to be used rarely and only by a few. Third, the scope of legal authority to compel third-party assistance will be a continuing challenge. And fourth, the law governing encryption workarounds remains uncertain and underdeveloped. Whether encryption will be a game changer or a speed bump depends on both technological change and the resolution of important legal questions that currently remain unanswered.
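The "inherently probabilistic" point can be made concrete with the simplest workaround in the taxonomy, guessing the key. A toy sketch (purely illustrative, not any agency's actual tooling): if the encryption key is derived from a user-chosen passphrase, an investigator can run a dictionary attack, and success depends entirely on whether the real passphrase appears in the guess list. The KDF parameters below are placeholder assumptions; real systems use far more iterations.

```python
import hashlib


def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Derive an encryption key from a passphrase (toy KDF parameters)."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)


def guess_key(candidates, salt: bytes, target_key: bytes):
    """Dictionary attack: try each candidate passphrase against the target key."""
    for passphrase in candidates:
        if derive_key(passphrase, salt) == target_key:
            return passphrase  # workaround succeeded
    return None  # workaround failed: passphrase not in the list


salt = b"example-salt"
target = derive_key("hunter2", salt)

# Succeeds only because "hunter2" is in the dictionary...
print(guess_key(["123456", "password", "hunter2"], salt, target))  # hunter2
# ...and fails when it is not: same technique, different outcome.
print(guess_key(["letmein", "qwerty"], salt, target))  # None
```

The two calls illustrate the distributional point as well: this workaround is cheap and available to any agency, but it works only against weak passphrases.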

Whole-drive encryption only protects a hard drive that is powered off.

The laptop was encrypted when shut down, but decrypted while in use. To capitalize on this, the FBI sent two plainclothes agents into the library posing as a couple. While standing next to Ulbricht, the two agents began a loud fight, which distracted Ulbricht and allowed one of the agents to grab the laptop while it was open. That agent turned it over to a third officer, who immediately began to search the device while Ulbricht was placed under arrest. The ruse enabled the FBI to bypass Ulbricht’s whole-disk encryption by taking the laptop from his hands.

Berners-Lee Behind New Private Communications Network For Ultra-Privacy Conscious

Web founder Tim Berners-Lee is one of the privacy advocates behind a newly launched service that combines social media, cloud storage, person-to-person, and group communications for privacy-conscious users.

The so-called MeWe private communications network, spun out of online privacy company Sgrouples — founded by online privacy advocate Mark Weinstein — doesn’t own, track, or share information its members provide or share with one another. MeWe encrypts personally identifiable information, most of its communication is SSL-encrypted, and the platform was built with Scala and LISP.

MeWe follows a string of other privacy-oriented services, including secure mobile messaging service Wickr and Silent Circle, which offers private and secure voice, video, text, and file transfer services on mobile devices. The prospect of “leave no trace” communications has become more attractive to some more privacy-concerned users given the large amounts of data gathered by sites such as Facebook and Google, and especially in the wake of the NSA leaks exposing the agency’s controversial online surveillance programs.

Weinstein describes the typical MeWe user like this: “I have social network fatigue. I want a global communications network where I can stay in touch with family, friends, and co-workers. But this is not another social media” platform, he says. “It’s a private communication network… and we don’t track” users or their activity, he says.

“So when it comes to security, the first line is that we are not storing or aggregating or analyzing member data,” he says. “And you can’t post to the whole MeWe world — only to your [designated] MeWe world.”

Weinstein declined to provide membership figures thus far. MeWe is free and includes a personal news feed, voice integration, detailed permission controls, and 8 GB of storage, and it runs on Android and iOS as well as desktop machines.

How will MeWe make money? Through optional add-on services, such as extra data storage (up to 500 GB) and picture printing via Walgreens. A MeWe app store is on tap, and eventually a subscription-based enterprise version.

And for those users who just aren’t ready to break ties with traditional social media, MeWe has an option to also post to their Facebook, LinkedIn, Twitter, and other social media accounts.

“The original idea of the Web was that it should be a collaborative space where you can communicate through sharing information,” MeWe advisor Berners-Lee said in a statement. “The power to abuse the open Internet has become so tempting, both for government and big companies. MeWe gives the power of the Internet back to the people with a platform built for collaboration and privacy.”
