His step-by-step instructions for making a clandestine phone call are as follows:
- Analyze your daily movements, paying special attention to anchor points (bases of operation, like home or work) and dormant periods in your schedule (8-12 p.m., or whenever cell phones aren’t changing locations);
- Leave your daily cell phone behind during dormant periods and purchase a prepaid, no-contract cell phone (a “burner phone”);
- After storing the burner phone in a Faraday bag, activate it using a clean computer connected to a public Wi-Fi network;
- Encrypt the cell phone number using a one-time pad (OTP) system and rename an image file with the encrypted code. Using Tor to hide your web traffic, post the image to an agreed-upon anonymous Twitter account, which signals a communications request to your partner;
- Leave your daily cell phone behind, avoid anchor points, and receive the phone call from your partner on the burner phone at 9:30 p.m.—or another pre-arranged “dormant” time—on the following day;
- Wipe down and destroy handset.
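The one-time-pad step in the list above can be sketched numerically. The article does not specify Wallen's exact encoding, so this is a minimal sketch assuming the classic digit scheme: each digit of the phone number is added, modulo 10, to the corresponding digit of a pre-shared random pad. The number and pad here are made up for illustration.

```shell
# Hypothetical 10-digit number and pre-shared random pad (illustrative only;
# a real pad must be truly random, shared in advance, and used exactly once).
msg=5551234567
pad=4829301758

# Ciphertext digit = (message digit + pad digit) mod 10.
ct=""
i=0
while [ "$i" -lt 10 ]; do
  m=${msg:$i:1}
  p=${pad:$i:1}
  ct="$ct$(( (m + p) % 10 ))"
  i=$((i + 1))
done
echo "$ct"   # prints 9370535215 -- the code hidden in the image filename
```

The partner, holding the same pad, decrypts by subtracting each pad digit modulo 10. The scheme is information-theoretically secure only if the pad is never reused.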
PRACTICING GOOD OPSEC
Central to good privacy, says Wallen, is eliminating or reducing anomalies that would pop up on surveillance radars, like robust encryption or SIM card swapping. To understand the risks of bringing unwanted attention to one’s privacy practices, Wallen examined the United States Marine Corps’ “Combat Hunter” program, which deals with threat assessment through observation, profiling, and tracking. The program teaches Marines to establish a baseline to more easily key in on anomalies in any given environment.
“Anomalies are really bad for what I’m trying to accomplish—that means any overt encryption is bad, because it’s a giant red flag,” Wallen said. “I tried to design the whole system to have as small a footprint as possible, and avoid creating any analyzable links.”
After establishing these processes, Wallen began researching cell phones. As expected, it involved a lot of trial and error. “I was going out and actually buying phones, learning about different ways to buy them, to activate them, to store them, and so on,” said Wallen, who eventually bought a burner phone from a Rite Aid. “I kept doing it until I felt like I’d considered it from every angle.”
When it came to protecting cell phone hardware, Wallen turned to Faraday bags. Named for the English scientist Michael Faraday, who invented the Faraday cage in the 19th century, the cages were adapted for modern use with intelligence agencies, law enforcement, and the military in mind. The cages, which can be any type of container, feature metallic shielding material that blocks cellular, Wi-Fi, and Bluetooth signals. Now available to the public, Faraday bags let people transport or store their electronic devices while preventing hackers, law enforcement, and spies from accessing their private data.
How to make a Faraday wallet
Step 1: Make the main pocket
Step 2: Add the metal shielding
Step 3: Make a sandwich
Step 4: Construct the billfold
Step 5: Make a credit card pocket
Step 6: Make an ID card pocket
Step 7: Test your wallet
A Russian software developer discovered a security flaw that could have allowed him to delete any video on YouTube in a matter of seconds. And he says he was close to doing just that.
Kamil Hismatullin, 21, joked he “fought the urge” to erase Justin Bieber’s channel for a couple of hours, but chose to report the bug to Google instead.
It took the security researcher from Kazan, the capital of Russia’s Republic of Tatarstan, about 7 hours to identify the vulnerability in Google’s Application Programming Interface (API). He collected $5,000 for his research, the maximum award for this kind of discovery.
Hismatullin wrote on his blog that the bug could “create utter havoc in a matter of minutes in bad hands who [could have] used this vulnerability to extort people or simply disrupt YouTube by deleting massive amounts of videos in a very short period of time.”
He said he was surprised at how quickly Google responded after he reported the bug.
Google launched its Vulnerability Research Grants in January to offer financial grants to “top performing, frequent vulnerability researchers as well as invited experts” in exchange for research into potential flaws of certain applications.
While many said Google’s award of $5,000 is less than Hismatullin deserves for his finding, the bug hunter said that security research is only his hobby, which he enjoys doing regardless of how much he is paid.
$5,000 – seriously – that’s all?
A decent, exploitable vulnerability can fetch as much as $50,000. On reflection, he should have offered his exploit to those who don’t like Justin Bieber… we would have offered a lot more than $5,000.
OpenVPN is secure, open source, and extremely easy to use. Unfortunately, many censoring ISPs are determined to detect and block OpenVPN. Possibly the only sure way to block OpenVPN tunnels is a method called deep packet inspection (DPI). What is troubling for many individuals is the fact that DPI works and is now widely used.
OpenVPN Sheathing is a method to hide OpenVPN tunnels from DPI. There are two major ways to accomplish sheathing:
stunnel: A GPL open-source SSL encryption wrapper created by Michal Trojnara. stunnel creates a full-blown HTTPS tunnel to disguise traffic as what is normally seen on a network. If an ISP were to block HTTPS, every user would be severely crippled, so this is not normally done. Unfortunately, there is a performance penalty due to the HTTPS tunnel.
Obfsproxy: A Tor subproject. This method makes traffic unrecognisable. It is a lighter method than stunnel, but may be more easily detected. Rather than blending into normal traffic, Obfsproxy makes traffic look completely different, via its pluggable transports. If an ISP were to whitelist allowed protocols rather than blacklist them, Obfsproxy might be blocked.
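The stunnel approach above can be sketched as a client-side configuration. The hostname and ports below are placeholders, not from the article: stunnel listens locally, wraps whatever it receives in TLS, and forwards it to port 443 on the server, where a matching stunnel instance unwraps it and hands it to OpenVPN.

```shell
# Write a minimal client-side stunnel.conf (hypothetical server name).
cat > stunnel.conf <<'EOF'
; run in client mode: originate TLS connections rather than accept them
client = yes

[openvpn-tls]
; local port the OpenVPN client will connect to
accept = 127.0.0.1:1194
; remote end looks like ordinary HTTPS to the ISP
connect = vpn.example.com:443
EOF
```

The OpenVPN client is then pointed at the local end of the wrapper with `remote 127.0.0.1 1194` and `proto tcp` in its own config.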
For technical details on how to set up your own systems to bypass government censorship, see this.
If you are using OpenVPN in China, even on port 443, you may find that your connections are unstable. The problem is that the Chinese government can detect the difference between “normal” SSL encryption and VPN encryption.
The solution is to mask your OpenVPN connection, and make it look like a regular HTTPS connection.
You can do this using one of these methods:
- Using OpenVPN through an SSL tunnel
- Using OpenVPN through an SSH tunnel
- Using a tool called Obfsproxy
Using OpenVPN through an SSL tunnel
You can make your OpenVPN traffic virtually indistinguishable from regular SSL traffic by tunnelling it through SSL, because Deep Packet Inspection cannot penetrate this additional layer of encryption.
Note that using an SSL tunnel will slow down your internet connection.
UDP is better for any kind of tunnel because it has lower overhead and doesn’t try to retransmit packets unnecessarily. In certain instances, retransmitting packets is counterproductive: anything that needs a stateful or “reliable” connection (i.e. TCP) already has packet retransmission built into the protocol. If you run two of these protocols on top of each other (such as TCP over a TCP tunnel), bad things start to happen, because you now have more than one layer trying to retransmit packets. So you should use UDP unless there is a very specific reason you need TCP, such as a firewall restriction.
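In OpenVPN terms, the advice above boils down to a client config fragment like this sketch. The server name is hypothetical; `remote` lines are tried in order, so TCP on port 443 is only used if the UDP connection fails:

```shell
# Generate an OpenVPN client config fragment preferring UDP (hypothetical host).
cat > proto-fallback.conf <<'EOF'
# tried in order: UDP first, TCP 443 only as a fallback
remote vpn.example.com 1194 udp
remote vpn.example.com 443 tcp
EOF
```

Per-remote protocol suffixes require a reasonably modern OpenVPN client; on older versions a single `proto udp` line plus one `remote` serves the same purpose.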
OpenVPN through an SSH tunnel
Using OpenVPN with an SSH tunnel is very similar to using it with an SSL tunnel. The difference is that you wrap your OpenVPN traffic with SSH encryption instead of SSL encryption. SSH is the “secure shell” software used to make connections to shell accounts on Unix systems. You can find SSH clients for most operating systems — PuTTY, for example.
When using SSH tunnels, note that:
- There is evidence that the Chinese government is slowing down SSH connections
- SSH does much more than just encryption, so you will see more overhead with SSH tunnels
- SSH is difficult to set up on Windows, whereas stunnel is cross-platform
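As a sketch (the user and host are hypothetical), the SSH variant is a single local port forward; OpenVPN then connects to the local end:

```shell
# Save the forward as a script (hypothetical user/host).
# -N: no remote command, just forward traffic
# -L: listen locally on 1194 and relay to the server's OpenVPN TCP port
cat > tunnel.sh <<'EOF'
#!/bin/sh
ssh -N -L 1194:127.0.0.1:1194 user@vpn.example.com
EOF
chmod +x tunnel.sh
# The OpenVPN client config then points at the tunnel:
#   remote 127.0.0.1 1194
#   proto tcp
```

Run the script in one terminal and start OpenVPN in another; to the censor, the connection looks like ordinary SSH traffic.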
Obfsproxy is a tool designed to make VPN connections difficult to detect. It was created by the Tor Project when China started blocking Tor nodes — but it can be used outside of the Tor network to mask VPN connections.
There are instructions for setting up Obfsproxy with OpenVPN on this page.
Obfsproxy does not encrypt your traffic, but it also does not require much overhead, so it is useful in countries where bandwidth is limited (e.g. Syria or Ethiopia).
The UK’s Court of Appeal has confirmed an earlier landmark High Court decision that a group of British consumers using Apple’s Safari browser to access Google’s services can sue the US company in the UK. Google has always argued that the appropriate forum for such cases is in the US, so this sets an important precedent for future legal actions against foreign companies operating in the UK.
The UK Court of Appeal’s ruling clears the way for the group known as “Safari Users Against Google’s Secret Tracking” to proceed with its claim for compensation. The group alleges, “Google deliberately undermined protections on the Safari browser so that they could track users’ internet usage and to provide personally tailored advertising based on the sites previously visited.”
The present ruling from the UK Court of Appeal does not address whether the Safari users should be awarded compensation for the claimed distress, but it does confirm that the appropriate forum for that case is in the UK. That’s crucial, because it means the claimants can use Europe’s stringent data protection laws to support their case. Last year, Google was forced to comply with EU data protection laws in Spain, when the Court of Justice of the European Union, Europe’s highest court, ruled that the company could be required to erase links to certain webpages from its search engine results—the so-called “right to be forgotten.”
There are two practical implications of today’s ruling. First, it just became much easier to sue US companies offering services in the UK, since British consumers no longer have to bring a case in the US. Second, Google may now face a substantial class-action suit from UK Safari users.
A Short History of Router DNS Hijacking
There have since been some excellent reports on these router DNS attacks from Sucuri, Kaspersky, and Malwarebytes, but despite the exposure the problem persists. In fact, router DNS hijacking has become so prevalent that if we look at D-Link router reviews on Amazon, the first one that pops up is a complaint about the router being hacked and displaying popup ads.
Routers and DNS
DNS is like a telephone directory for the internet; you look up the name of the site you want to connect to and receive a number (an IP address) where you can reach it. For example, we can use DNS to look up the IP addresses assigned to the domain www.google.com. DNS replies with a list of IPs in the 220.127.116.11/24 range. If we select one of those IPs and connect to it, we will be connecting to a server that is hosting Google.
When one of these router DNS hijacks is successful, the DNS settings on the router are changed to point to a rogue DNS server controlled by the attackers. By default, most common operating systems (Windows, OS X, iOS, Android, Ubuntu) are configured to automatically retrieve their DNS settings from the router when they connect to a network (via DHCP). This means that when a device connects to a compromised router’s network, it will automatically be configured to use the same rogue DNS settings as the router.
If an attacker controls the DNS server you are using to look up an IP, they can replace the correct IP with the IP of a server under their control. You might then connect to this IP thinking you are connecting to a certain domain, when in fact you are connecting to a server controlled by the attacker.
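The substitution described above can be modelled in a few lines of shell. This is a toy lookup table, not real DNS: the honest resolver returns a site's true address, while the rogue one returns the attacker's, and all names and IPs are illustrative (the IPs are RFC 5737 documentation addresses):

```shell
# Toy resolvers: the same question gets different answers (illustrative only).
honest_resolve() {
  case "$1" in
    bank.example.com) echo "192.0.2.10" ;;    # the site's real server
  esac
}
rogue_resolve() {
  case "$1" in
    bank.example.com) echo "203.0.113.66" ;;  # attacker-controlled server
  esac
}

# The victim's browser asks whichever resolver the router handed out via DHCP.
honest_resolve bank.example.com   # prints 192.0.2.10
rogue_resolve  bank.example.com   # prints 203.0.113.66
```

The browser has no way to tell the answers apart, which is why the hijack works transparently for every device on the compromised network.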
Google Analytics is a service that provides the ability to track and analyze website traffic. Webmasters enable Google Analytics by embedding the analytics tag on their website.
Google Analytics is currently the most widely used traffic-analytics service. Since its tag is embedded on the majority of websites that track traffic, it is a perfect target for fraudsters to inject into.
Google Analytics Interception and Ad Injection
In the fraud scheme investigated by Ara Labs, the criminals are using a rogue DNS server located at 18.104.22.168. During a successful router hijacking, this DNS server is configured as the router’s primary DNS while Google’s DNS server (22.214.171.124) is configured as the secondary. The DNS server at 126.96.36.199 refuses to resolve most domains, forcing the victim to rely on the secondary DNS server (Google) for most domain lookups. However, when a lookup is attempted for the Google Analytics domain google-analytics.com, the rogue DNS server responds with the IP 188.8.131.52, which is most certainly NOT a Google server. It is a rogue Google Analytics server.
Exchange Attribution – The Ad Suppliers
The other, more complex, script that is injected via the rogue Google Analytics server is heavily obfuscated to hide its intentions.
Once the script has been de-obfuscated it is clear that it’s responsible for injecting multiple ad tags into the websites that load it.
The following domains are identified as hosting the injected ad tags: zinzimo.info, ektezis.ru, and patifil.com. These are all shell domains that direct traffic to the PopUnder ad exchange. We can confirm this by examining the SSL certificates that have been issued to these domains.
PopUnder specializes in ads that disrupt the user’s normal browsing in an attempt to force them to click on the ad (i.e. pop-up ads). It is through this exchange that the majority of the explicit pornographic ads are sourced, as well as the online-game ads displayed in the video we captured.
Protecting Yourself as a Consumer
As we have seen above, the router DNS hijacking malware takes advantage of default credentials on routers and of bugs that allow unauthenticated configuration requests to be sent to them. The best protection available is to ensure the firmware on your router is fully patched and to change the default credentials.
It’s little surprise that mobile apps regularly access our location data. In many cases, it’s a sensible deal. Maps, weather apps, social networks, and shopping services serve up useful info based on where we are. But while everyone understands the intrinsic privacy trade-off, few may realize just how often apps ping their location. According to a Carnegie Mellon study, it happens thousands of times a week.
That study, by the university’s Institute for Software Research, followed 23 Android phone owners for three weeks. In the first week, they were asked to use their apps as they normally would. In the second week, the participants used an app called App Ops to monitor and manage the data those apps were using. In the third week, the research team introduced a “privacy nudge” alert that would ping the participants each time an app requested location data.
The title of the study, which will be presented at a conference in Seoul next month, tells you all you need to know: Your Location Has Been Shared 5,398 Times! A Field Study on Mobile App Privacy Nudging.
That’s right. More than 5,000 pings in 14 days. Once participants in the study knew how frequently data was being collected, many adjusted their settings or deleted some apps entirely.
But Professor Norman Sadeh, a member of the team that conducted the research, says the volume of location-harvesting isn’t the biggest issue.
“These are very rare cases, but some of them want to get your microphone data, your contact list data, and it’s really sensitive data at that point,” Hong says. “Right now, Android will tell you if an app uses location data, but what we do in our analysis is to say this app uses location data for X, where X might be social networking or advertising or analytics.”
In the wake of the Snowden leaks, more and more tech companies are providing their users with transparency reports that detail (to the extent they’re allowed) government requests for user data. Amazon — home to vast amounts of cloud storage — isn’t one of them.
Amazon remains the only US internet giant in the Fortune 500 that has not yet released a report detailing how many demands for data it receives from the US government.
Although people are starting to notice, the retail and cloud giant has no public plans to address these concerns.
Word first spread last week when the ACLU’s Christopher Soghoian, who’s spent years publicly denouncing companies for poor privacy practices, told attendees at a Seattle town hall event that he’s “hit a wall with Amazon,” adding that it’s “just really difficult to reach people there.”
Zack Whittaker and ZDNet ran into the same wall. Nearly thirty Amazon representatives were contacted but only one provided a response: an anonymous statement that the company was under “confidentiality obligations” not to discuss requests for data.
There are several reasons why Amazon might be hesitant to share intel/law enforcement request data, perhaps none bigger than its $600 million/10-year contract with the US intelligence community. It might also be its multiple contracts with other federal agencies, including connecting the nation’s law enforcement agencies through its AWS-hosted Criminal Justice Information Service.
But that can’t be the whole explanation. It’s not as if other companies now providing transparency reports aren’t similarly engaged with the government at some level.
Microsoft has contracts with various governments to provide Windows and Office software. Google offers a range of open-source and cloud-based services to the government, and Apple provides iPhones and iPads to government and military users, thanks to earning various certifications.
Even telephone service providers, which have historically been very proactive in accommodating government demands for data — going so far as to give intelligence analysts guidance on how to skirt legal restrictions — are producing bi-annual transparency reports. But Amazon simply refuses to do so, and then refuses to explain its refusal.
This lack of transparency has gone past the point of being merely vexatious. Amazon isn’t satisfied with simply selling and storing. It’s gathering far more data than its more famous offerings would indicate.
If ever there was a need for EU Data Protection, then Microsoft’s Windows 10 is the REASON for the EU to kick Microsoft.
The “Technical Preview” has left me speechless in privacy terms. I have NEVER seen such a violation of EU Privacy law as this operating system. What are they drinking at Microsoft?
Step 1 – Don’t accept the Privacy defaults.
Select “customise” and see what you’re agreeing to. Initially the privacy statements look cool – until you dig a bit deeper. Two items may protect privacy, but there are many, many more that will sell your grandmother. What do I mean by this?
Step 2 – Financial Data is collected by Windows 10
You agree to allow Microsoft to collect your banking and financial information, including credit card and PayPal details, card numbers and **SIT DOWN** THE SECURITY CODES (CVV) ON THE BACK OF YOUR CARD. Are you shitting me?
So all your financial accounts and security codes are being collected by Microsoft?
Step 3 – No privacy opt-outs.
Let’s look at how Windows 10 takes your home telephone number and email address as a “security” check on your OS. WTF!!
It’s an operating system, not a state department like the NHS. Even the NHS doesn’t require this information, and that deals with sensitive medical data.
On OneDrive, there is no “I do not consent” button to remove or deactivate OneDrive.
Is Windows 365 the future of Windows 10?
(10/02/2015): @janemccallion: Rumours that Windows could turn into a subscription-based service have hotted up thanks to a trademark filing from Microsoft. The submission, spotted by Neowin, was made to the US Patent and Trademark Office by Microsoft on 29 January this year and covers a whole range of features, including streaming and video on demand services, email and IM, and educational services.
However, the most intriguing features listed are “operating software as a service (OSAAS) … desktop-as-a-service (DAAS), cloud services … [and] providing temporary use of non-downloadable software”.
A similar application has also been made in South Africa, but not in the UK, according to the Intellectual Property Office’s online database.
It has long been rumoured that Windows 10 would include some kind of cloud-based service, but Microsoft’s announcement that Windows 10 will be a free upgrade (at least, for the first year) makes it somewhat unlikely Windows will be turning into a subscription-based service in the immediate term.
You have got to be f*** kidding me. This OS is a walking nightmare.
The German government has mandated that their state departments must stay with Windows 7 or move over to Linux.
I’m surprised to say this…. but I strongly agree with Germany here – the German government is right on the money. This OS cannot be allowed to go mainstream.
On privacy grounds alone, Windows 10 is a catastrophic breach of EU Data Protection law. Can you imagine Linux asking for your bank accounts, credit cards and security codes? Well, that’s exactly what Windows 10 does… and links your home telephone number and email to your financial data, AND then says it can hand this data over to any government agency it wants. OMG.
Windows 7 here we come.
Sadly, my conclusion is that no amount of pretty pictures is worth handing over sensitive medical and financial data. This kind of data, once it has been resold 50 times by Microsoft, will be used to profile you socially and racially. Automatic profiling leads to automatic discrimination. It’s sad, but true. If an insurance company knows you’re searching for “cancer”, you can bet your insurance will be cancelled or your premiums doubled. THAT is automatic discrimination, and it’s coming to a Windows 10 platform near you. Watch your back with this OS… it’s BIG trouble for you.