The case seems high tech—it’s about Cisco’s Golden Shield, a set of sophisticated technologies that include specific purpose-built parts for persecution of the Falun Gong. But it’s actually fairly simple: at what point does a company that intentionally builds tools that are specially designed for governmental human rights abuses become liable for the use of those tools for their intended (and known) purposes?
No tech company should be held accountable when governments misuse general-purpose products to engage in human rights abuses. This isn’t about bare routers or server logs. The case alleged, and presented some strong early evidence, that Cisco did far more – including:
- A library of carefully analyzed patterns of Falun Gong Internet activity (or “signatures”) that enable the Chinese government to uniquely identify Falun Gong Internet users;
- Highly advanced video and image analyzers that Cisco marketed as the “only product capable of recognizing over 90% of Falun Gong pictorial information;”
- Several log/alert systems that provide the Chinese government with real time monitoring and notification based on Falun Gong Internet traffic patterns;
- Applications for storing data profiles on individual Falun Gong practitioners for use during interrogation and “forced conversion” (i.e., torture);
It also included a presentation by Cisco to the Chinese authorities highlighting the special tools Cisco offered for persecuting what it called “Falun Gong evil religion.” Using such terms about any ethnic or religious group in an internal presentation regarding a government project should be a red flag for anyone concerned about human rights.
The court acknowledged these allegations, noting that the complaint alleges “individual features customized and designed specifically to find, track and suppress Falun Gong,” and that the tools were actually used for those purposes: “Golden Shield provided the means by which all the Plaintiffs were tracked, detained and tortured.” The complaint also alleged that much of Cisco’s work building the specific tools to target this religious minority was conducted from its San Jose offices.
In an ordinary lawsuit, those allegations, which are credible and in some places confirmed, would be enough to let a party get into the evidence phase of a case, surviving a motion to dismiss. Think about federal criminal law, where all that is needed for a criminal conspiracy is an agreement to commit a crime and an overt act. Similarly, in patent and copyright law, the standard of “inducement” liability allows responsibility for someone else’s actions when someone “distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement.” And there is no question that some of Cisco’s “overt acts” and “affirmative steps” to foster human rights abuses – like designing and developing Falun Gong identification and tracking modules – took place in San Jose.
As a great exporter of advanced technology, American companies like Cisco can’t plead ignorance about the ways in which our technology is used when they specifically and knowingly build the tools for those uses. And when a company like Cisco customizes and crafts technology for an authoritarian regime, it has a responsibility to consider the very human consequences of its actions.
Take home message:
If Cisco are assisting China, then they are definitely assisting the NSA – and that means they’re tracking YOU. Let’s just hope that you don’t have any religious or political belief system, or you’re in big trouble, thanks to Cisco.
Deciding on a VPN provider is difficult, as you need a VPN that will respect your anonymity. TorrentFreak publish the best annual round-up articles on this, and they’re a highly recommended read.
Two years ago, I decided to sign up with PIA (Private Internet Access) because they were cheap. Needless to say, their customer service was dire. For 3 days I couldn’t connect, and the only question their helpdesk asked, continuously, was “have you paid?”. The answer was yes – all they had to do was check their own records to know that. After 3 days of their “HELPLESS” support, I contacted PayPal and revoked the payment. My search for a VPN supplier continued…
Next, I signed up with IVPN. They are a team of InfoSec experts who are dedicated to a confidential service. They are the only team I know of to wipe their routers every 10 minutes, and they keep no server logs and no traffic logs. Awesome. Their privacy credentials are cemented by the fact that they’re members of the EFF.
So what makes IVPN so good? Let’s discuss those reasons.
1. Unlimited bandwidth.
2. Multihop technology. You can choose to hop your connections between continents, so court orders on 2 separate continents would have to be served – within the 10-minute wipe window. Awesome, this sounds like they know what they’re doing. They also operate a coded “Warrant Canary” to tip us off: https://www.ivpn.net/resources/canary.txt
3. No server logs, and no traffic logs. The OpenVPN servers can log all connections. If the server is logging your IP, then we’re in big trouble. So IVPN don’t log your IP. Simple solution really!
4. They arm wrestle you onto OpenVPN rather than any built-in Windows VPN. The native Windows options are horrendous… you simply must use OpenVPN (which you can use on Android and your home router as well as Linux and Windows 7).
5. IVPN use AES 256 bit symmetric encryption (symmetric means single key – this is the high speed encryption/decryption of your actual traffic).
6. The Symmetric key is protected by a 4,096 bit public key. (Public Key or Asymmetric encryption has 2 keys, one public and the other is HIGHLY private).
The mothership, in this instance the IVPN OpenVPN server makes a CONFIDENTIAL key set, for it, and you. These 2 keys fit together like a mini jigsaw puzzle. That means when you connect, the mothership knows one of its children is connecting… and she lets you in.
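A rough sketch of that key exchange with openssl may make it concrete. This is illustrative only, not IVPN's actual provisioning (which isn't public): assume a 4,096-bit RSA key pair protecting a 256-bit AES session key.

```shell
# Illustrative sketch only (not IVPN's real setup): a 4,096-bit RSA key pair
# protecting a 256-bit symmetric session key, as described above.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out server.key 2>/dev/null
openssl rsa -in server.key -pubout -out server.pub 2>/dev/null

# The client makes a random 256-bit AES key...
openssl rand -hex 32 > session.key
# ...and seals it with the server's PUBLIC key; only the PRIVATE key can open it.
openssl pkeyutl -encrypt -pubin -inkey server.pub -in session.key -out session.key.enc
openssl pkeyutl -decrypt -inkey server.key -in session.key.enc -out session.key.dec
cmp session.key session.key.dec && echo "both sides now share the AES key"
```

That is the point of the hybrid scheme: the slow asymmetric crypto protects only the key exchange, then fast AES-256 handles the bulk of your traffic.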
7. Great customer service… any queries are resolved within 12 hours, including daft invoice and payment queries, as I changed from a one-off month-long trial, to quarterly, then to annual membership. After the PIA experience, let’s just say that I played things cautiously.
8. They let you connect 3 devices on a single account, so your Android phone, laptop and desktop can all use the same account – and as it’s unlimited bandwidth, they don’t throttle you if you’re watching YouTube all day.
9. They are diplomatic regarding torrents. The US is a nightmare on this issue, so none of their US servers allow torrents… HOWEVER… most European countries couldn’t care less about copyright. So if you use a European exit node then different laws apply, e.g. the Romanian courts struck down the EU Data Retention Directive as a breach of privacy.
10. The UK is also a nightmare, with High Court orders banning The Pirate Bay etc. The UK also has so-called “porn filters”, which in fact apply to diet sites and sites the UK government doesn’t want you to see. Basically it’s censorship, when we should have net neutrality. So using other European countries sidesteps the High Court orders, as UK law doesn’t apply to the Dutch, French or Swiss. Easy peasy, right?
11. Using London for your VPN will of course give you instant access to the BBC and BBC iPlayer content, which lets you catch up on British TV and sitcoms. So there’s every reason you might want a UK or US exit node… and if you need “Salt Lake City” for any reason, well, go ahead, it’s there for you. Dallas, alas, does not give you instant access to re-runs of the old “Dallas” TV show, and I cannot think of any other reason to use Dallas as an exit node.
So in conclusion, I’ve loved my connection to IVPN. I love the technology, the simplicity, the great support, the awesome privacy that is second to none. And no, I have not used “Salt Lake City” myself.. nor Dallas… but one day.. just one day, I might find a reason for them.
You can try out their services, free for a week.
Things you might want to do next:
Which is the Safest VPN on the Market? Which VPN do I use?
Step 1 – Create Data
ls -l /etc > smile.txt
Check size of file
ls -l smile.*
My original filesize was 16,119 bytes (yours will be different – so keep a note of it).
Step 2 – Zip File
gzip smile.txt
The zipped file will be automatically saved as smile.txt.gz (gzip replaces the original by default).
Check file size
ls -l smile.*
The compressed filesize is 3,527 bytes – a 78% reduction in size. Awesome!
Step 3 – Unzip File
gunzip smile.txt.gz
Check file size
ls -l smile.*
Check that the filesize is exactly equal to the original filesize…. it should be exactly the same.
Step 4 – View the contents of a zipped file without unzipping it
gunzip -c smile.txt.gz | less
zcat smile.txt.gz | less
**zcat is supplied with gzip automatically, and is equivalent to gunzip -c.
Facts about Compression
- Data compression is about removing redundancy from data.
- An image that is all one colour contains entirely redundant data.
- Compression is lossless or lossy.
- Lossless preserves the original data – the original can be recreated.
- Lossy removes data – the original can not be recreated – only an approximate is restored.
- Examples of Lossy are JPEG (Images) and MP3 (Music).
gzip Command Codes
gzip -c = write to standard output and KEEP original files
gzip -d = DECOMPRESS. Or we can use gunzip.
gzip -f = FORCE compress, even if a compressed version of the file already exists.
gzip -h = Display usage information.
gzip -l = LIST compression statistics.
gzip -r = where directories exist, RECURSIVELY compress files within those directories.
gzip -t = TEST the integrity of a compressed file.
gzip -v = VERBOSE – but you already knew that.
gzip -number = Set AMOUNT OF COMPRESSION. Number is an integer between 1 and 9.
The DEFAULT VALUE is 6!
1 is the FASTEST – with LEAST COMPRESSION
9 is the SLOWEST – with MOST COMPRESSION
--fast and --best can also be used (equivalent to -1 and -9 respectively).
Here we attempt maximum compression, which achieves a compression rate of 78.7%:
gzip -9 smile.txt
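To see the speed/size trade-off for yourself, you can compress the same data at both extremes (this recreates the smile.txt sample from Step 1):

```shell
# Compare fastest vs best compression on the same sample data.
ls -l /etc > smile.txt               # recreate the Step 1 sample
gzip -1 -c smile.txt > fast.gz       # --fast: quickest, least compression
gzip -9 -c smile.txt > best.gz       # --best: slowest, most compression
ls -l smile.txt fast.gz best.gz      # best.gz should be the smallest file
gzip -t best.gz && echo "integrity OK"
```

Running gzip -l on the resulting files shows the same statistics without the ls arithmetic.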
University of South Wales reading list.
SHOTTS, W.E., 2012. The Linux Command Line. San Francisco: No Starch Press.
KALI – How to unzip files on Linux
After studying 20 handsets, security company Avast has warned that the “factory reset” function on Android phones doesn’t actually delete data on the phone, which can be retrieved using standard forensic security tools.
In all, the researchers said that they found more than 40,000 photos, including 750 photos of women “in various stages of undress” and 250 photos of male anatomy. The EXIF data included in the picture files could also allow someone to find out details of the person’s residence if it included location. Four of the phones included the previous owners’ identity in the file data.
The problem arises because the factory reset function, found in the Settings function, doesn’t actually wipe the data from the storage on the phone. Instead, it wipes the index that points to the locations in the storage where the data is written. Normally, that is sufficient to prevent someone who acquires the phone from accessing any of that data.
But by using forensic tools that directly access the storage areas, Avast was able to reconstruct the files – and make its disconcerting discovery.
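The same index-versus-data distinction exists on any storage, and is easy to see from a shell. Here is a toy sketch of the difference between unlinking a file and actually overwriting it (note that on flash storage even shred is unreliable, since wear-levelling can leave stale copies elsewhere):

```shell
# Toy illustration: rm only removes the index entry; shred overwrites first.
echo "PIN 1234" > secret.txt
shred -u secret.txt     # overwrite the contents, then unlink the file
# With a plain rm, the bytes would remain on disk until reused, which is
# exactly what forensic tools recover after a "factory reset".
```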
Google told Ars Technica that the research “looks to be based on older devices and versions [of Android] and does not reflect the security protections in Android versions that are used by 85% of users.” That suggests that only versions running software before Android 4.0 are vulnerable in this way.
However, Google’s Android documentation shows that setting file encryption is optional – which leaves newer devices vulnerable too.
Android 3.0 onwards has offered a setting which will encrypt the phone, using a cryptographic key generated from a user-provided passcode. If that is done, then a “factory reset” will delete the key, rendering the data unreadable. Google’s spokesperson recommended that people who are disposing of a device should enable encryption, and then carry out a factory reset.
Android Forensics, Part 1: How we recovered (supposedly) erased data
How to Encrypt ANDROID mobile
BASH scripting rules okay :)
Originally posted on SupraFortix Blog:
This short post presents a Bash script that I wrote a few months ago, as I thought it would be quite useful for testing the feasibility or real-time impact of various attacks. The script repeatedly scans a specified port. If the port is closed or filtered, the script immediately restarts; if the port becomes open, e.g. during a backdoor or trojan upload, the script outputs a message with the time it opened and terminates, giving penetration testers real-time confirmation that their exploitation succeeded. The nmap flags can of course be changed based on preference.
HOW DOES IT WORK?
When the port finally becomes open, the script will output the time it came online and…
View original 81 more words
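For reference, the polling loop described above might look roughly like this. This is my own re-sketch, not the original script: it substitutes bash’s built-in /dev/tcp probe for nmap so it runs without extra tools, and adds a retry cap so it can’t spin forever.

```shell
#!/bin/bash
# Re-sketch (assumed shape) of the port-watching idea described above.
host=${1:-127.0.0.1}; port=${2:-4444}; max_tries=${3:-3}

# Succeeds if host:port accepts a TCP connection (bash's /dev/tcp feature).
port_open() { (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; }

for try in $(seq 1 "$max_tries"); do
  if port_open "$host" "$port"; then
    echo "Port $port on $host opened at $(date)"
    break
  fi
  sleep 1   # closed or filtered: restart the probe, as the post describes
done
```

Run it as e.g. ./watchport.sh 10.0.0.5 4444 while uploading your payload; nmap fans can swap the probe for an nmap -p scan and their flags of choice.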
We can format a USB stick as ext3 (the Linux format) or vfat (readable by both Windows and Linux machines).
Step 1 – Find out the exact name of the USB
To get a real-time view of the /var/log/messages file the command is
sudo tail -f /var/log/messages
You’ll see the device reported… press control + c to get the prompt back.
If it’s a USB pen, it’s very likely the drive will be /dev/sdb.
It’s important to know that /dev/sdb is the entire device and /dev/sdb1 is the first partition on the device.
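A quick sanity check of device-versus-partition naming (lsblk ships with util-linux on most distros):

```shell
# List block devices: TYPE "disk" is the whole device (e.g. sdb),
# TYPE "part" is a partition on it (e.g. sdb1).
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```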
Step 2 – Unmount the USB
sudo umount /dev/sdb1
**notice the spelling… it’s umount, not unmount
Step 3 – Partition the device with FDISK
sudo fdisk /dev/sdb
The first thing to do is to examine the existing partition table; we do this by entering P at the prompt. Entering L will list all possible partition types, and T will change a partition’s system ID. The hex code for Linux is 83.
Up to this point, the device has not been touched, all the changes are stored in memory not the physical device. To save, we write using W. If you want to leave the device unaltered, enter q at the prompt to exit FDISK without writing the changes. Ignore the “doom and gloom” warning messages.
Step 4 – Create a new filesystem with MKFS – convert to a Linux ext3 filesystem.
mkfs means “make filesystem” on our flash drive.
sudo mkfs -t ext3 /dev/sdb1
This means “make a filesystem” of type ext3 on device /dev/sdb1.
Watch the Inodes and file system being created
Step 5 – ReFormat the USB back to VFAT for Windows
To reformat the device to its original FAT32 filesystem, specify VFAT as the filesystem type.
sudo mkfs -t vfat /dev/sdb1
VFAT allows both Linux and Windows to read/write to the USB. However, it has a 4 GB single-file limit… which might catch you out if you try to write, say, the CentOS 6.5 DVD ISO to a USB, as that filesize is 4.3 GB.
Step 6 – Check the filesystem with FSCK
Fsck can repair corrupt filesystems. Recovered portions of files are placed in the lost+found directory, in the root of each filesystem.
sudo fsck /dev/sdb1
Admission – I was on the Backtrack Machine at the time… sorry – my bad!
Why is unmounting important?
In the output of the “free” command we see statistics including BUFFERS. To make systems work as fast as possible, data is sent to buffers in memory, and the writing to the physical device is often deferred to a future time. The data piles up in memory, and occasionally the OS writes it out to the physical device. Unmounting a device allows all the remaining data to be written to it so that it can be safely removed. If the device is removed without being unmounted, some data may be lost. In some cases, this data may include vital directory updates, which will result in filesystem corruption – one of the worst things that can happen.
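You can watch this behaviour from the shell: free shows the cached data, and sync forces the flush that umount would otherwise perform for you.

```shell
free -h    # the buff/cache figure is data held in memory, not yet all on disk
sync       # force every pending buffered write out to the physical devices
# umount flushes the device it detaches in the same way, which is why yanking
# a stick without unmounting can lose the last writes or corrupt the filesystem.
```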
University of South Wales reading list.
SHOTTS, W.E., 2012. The Linux Command Line. San Francisco: No Starch Press.
The US government’s web of surveillance is vast and interconnected. Now we know just how opaque, inefficient and discriminatory it can be.
As we were reminded again just this week, you can be pulled into the National Security Agency’s database quietly and quickly, and the consequences can be long and enduring. Through ICREACH, a Google-style search engine created for the intelligence community, the NSA provides data on private communications to 23 government agencies. More than 1,000 analysts had access to that information.
This kind of data sharing, however, isn’t limited to the latest from Edward Snowden’s NSA files. It was confirmed earlier this month that the FBI shares its master watchlist, the Terrorist Screening Database, with at least 22 foreign governments, countless federal agencies, state and local law enforcement, plus private contractors.
The watchlist tracks “known” and “suspected” terrorists and includes both foreigners and Americans. It’s also based on loose standards and secret evidence, which ensnares innocent people. Indeed, the standards are so low that the US government’s guidelines specifically allow for a single, uncorroborated source of information – including a Facebook or Twitter post – to serve as the basis for placing you on its master watchlist.
Of the 680,000 individuals on that FBI master list, roughly 40% have “no recognized terrorist group affiliation”, according to the Intercept. These individuals don’t even have a connection – as the government loosely defines it – to a designated terrorist group, but they are still branded as suspected terrorists.
The absurdities don’t end there. Take Dearborn, Michigan, a city with a population under 100,000 that is known for its large Arab American community – and has more watchlisted residents than any other city in America except New York.
These eye-popping numbers are largely the result of the US government’s use of a loose standard – so-called “reasonable suspicion” – in determining who, exactly, can be watchlisted.
Reasonable suspicion is such a low standard because it requires neither “concrete evidence” nor “irrefutable evidence”. Instead, an official is permitted to consider “reasonable inferences” and “to draw from the facts in light of his/her experience”.
Consider a real world context – actual criminal justice – where an officer needs reasonable suspicion to stop a person in the street and ask him or her a few questions. Courts have controversially held that avoiding eye contact with an officer, traveling alone, and traveling late at night, for example, all amount to reasonable suspicion.
These vague criteria are now being used to label innocent people as terrorism suspects.
Moreover, because the watchlist isn’t limited to known, actual terrorists, an official can watchlist a person if he has reasonable suspicion to believe that the person is a suspected terrorist. It’s a circular logic – individuals can be watchlisted if they are suspected of being suspected terrorists – that is ultimately backwards, and must be changed.
The government’s self-mandated surveillance guidance also includes loopholes that permit watchlisting without even showing reasonable suspicion. For example, non-citizens can be watchlisted for being associated with a watchlisted person – even if their relationship with that person is entirely innocuous. Another catch-all exception allows non-citizens to be watchlisted, so long as a source or tipster describes the person as an “extremist”, a “militant”, or in similar terms, and the “context suggests a nexus to terrorism”. The FBI’s definition of “nexus”, in turn, is far more nebulous than they’re letting on.
Because the watchlist designation process is secret, there’s no way of knowing just how many innocent people are added to the list due to these absurdities and loopholes. And yet, history shows that innocent people are inevitably added to the list and suffer life-altering consequences. Life on the master watchlist can trigger enhanced screening at borders and airports; being on the No Fly List, which is a subset of the larger terrorist watchlist, can prevent airline travel altogether. The watchlist can separate family members for months or years, isolate individuals from friends and associates, and ruin employment prospects.
Being branded a terrorism suspect also has far-reaching privacy implications. The watchlist is widely accessible, and government officials routinely collect the biometric data of watchlisted individuals, including their fingerprints and DNA strands. Law enforcement has likewise been directed to gather any and all available evidence when encountering watchlisted individuals, including receipts, business cards, health information and bank statements.
Watchlisting is an awesome power, and if used, must be exercised prudently and transparently.
The standards for inclusion should be appropriately narrow, the evidence relied upon credible and genuine, and the redress and review procedures consistent with basic constitutional requirements of fairness and due process. Instead, watchlisting is being used arbitrarily under a cloud of secrecy.
A watchlist saturated with innocent people diverts attention from real, genuine threats. A watchlist that disproportionately targets Arab and Muslim Americans or other minorities stigmatizes innocent people and alienates them from law enforcement. A watchlist based on poor standards and secret processes raises major constitutional concerns, including the right to travel freely and not to be deprived of liberty without due process of law.
Indeed, you can’t help but wonder: are you already on the watchlist?
Take home message:
Schneier warned us that data collection was a bigger threat than cyber warfare – and the threat with the potential to do most harm.
Blacklisting, social and racial profiling and active discrimination based on INFERENCES are the reason that the Stasi and “WATCH LISTS” were so damaging to German society.
East Germany wrote the book on the damage that surveillance does to society and this the reason that Europe has strong data protection laws – to PREVENT the creation of SECRET databases.
Last summer, German secure email provider Posteo faced a do-or-die moment: give in to police threats to seize its servers or fight back in court. Investigators in the state of Bavaria had contacted the Berlin-based startup because they wanted the identity of a Posteo account holder who was thought to be using the service for illicit purposes. But Patrik and Sabrina Löhr, the husband-and-wife team who run the swiftly growing email provider, told police time and again that they simply couldn’t comply: Posteo is an anonymous email provider; it doesn’t store any data on its customers’ identities.
“We went around in circles with the authorities,” Patrik Löhr says. “But when we looked at their search warrant, we saw that it didn’t, in fact, give them permission to search our whole office. They were only allowed to receive a list of our bank transactions – which they already had gotten from the bank.” Löhr filed a suit against police officials, accusing them of intimidation. That move, the media attention it generated, and a stated commitment to transparency made all the more relevant in the wake of the Edward Snowden leaks, has helped Posteo become one of Germany‘s fastest growing email providers with a business model of fee-driven, privacy-oriented email services.
The immediate effect of Posteo’s tangle with the German authorities was the pressure it put on global telecoms giant Deutsche Telekom. Just days after Posteo released Germany’s first transparency report on government requests for information, Telekom dashed out its own paper detailing the extent of its cooperation with police and intelligence officials. The revelations were eye-opening. In 2013 alone, Telekom gave authorities in Germany nearly as much data on its customers as AT&T and Verizon had furnished that same year to US law enforcement.
This resulted in Germans ditching American email providers in Posteo’s favour. “We went from 10,000 subscribers before the Snowden leaks a year ago to 70,000 today,” Löhr says.
The most dangerous assumption is that magnetic-drive erasure techniques will work on flash-based SSDs, so let’s discuss what’s different between these two very different technologies.
Assumption 1 – Magnetic Erasure tactics work on SSD’s
“Flash based solid state drives (SSD’s) differ from hard drives in both the technology they use to store data (flash chips vs magnetic disks) and the algorithms they use to manage and access that data” (Wei et al, 2010).
There are no agreed erasure standards for SSD’s – every state authority issues different guidelines.
Assumption 2 – Manufacturers built in commands will work
Manufacturer implementations of secure erase commands were found to be faulty or have catastrophic bugs; this has resulted in all data remaining intact on the disk.
Wei et al (2010) tested 12 drives. Only four drives executed the “ERASE UNIT” command reliably. One drive reported erasure as successful when all the data remained intact. Two more drives suffered coding flaws that prevented the “ERASE UNIT” command from working unless a firmware reset had taken place.
Wei et al (2010) stated that “The wide variance among the drives leads us to conclude that each implementation of the security commands must be individually tested before it can be trusted to properly sanitise the drive”.
This is an important point: manufacturer claims are not verified, yet consumers rely on them when purchasing hardware. There are clearly enough issues that perhaps the EU ought to verify the erasure claims of drive manufacturers. The core problem is the implementation of “secure erase” – it is so badly coded that in a large percentage of drives it simply doesn’t work at all, let alone as advertised. The laws to fine both manufacturers and retailers already exist under current Trading Standards law. Failure to work as advertised also impacts EU Data Protection law: a user may take steps to erase sensitive data such as encryption keys, passwords, and banking and financial details, yet the erasure never takes place (worse still, the drive may report that it HAS occurred), and the user then sells or donates the drive to a third party, trusting the manufacturer’s coding of the secure erase commands.
Assumption 3 – All erasure is safe
We use the mnemonic “LAD” for the three levels of erasure: Logical, Analogue and Digital.
Logical is the LEAST safe – the data can be forensically recovered. This is what users achieve when they overwrite parts of the drive, and it is equivalent to CLEARING in NIST 800-88.
Analogue is the SAFEST. Analogue erasure makes reconstructing the signal effectively impossible; it is equivalent to PURGING in NIST 800-88.
Digital means overwriting the disk and then deleting, but it may not erase bad blocks (which often contain data).
Assumption 4 – Cryptographic erasure is safe
Firstly the drive stores the encryption key. An analogy for this is to fit the world’s strongest burglar alarm, and then hide the door key under the front door mat.
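Here is the idea in miniature, using openssl – a sketch of the principle, not of any drive’s firmware. Cryptographic erasure only works if the key really is destroyed:

```shell
# "Erase" a file by destroying the only key that can decrypt it.
echo "bank details" > data.txt
openssl rand -hex 32 > disk.key
openssl enc -aes-256-cbc -pbkdf2 -pass file:disk.key -in data.txt -out data.enc
rm data.txt
shred -u disk.key   # key gone: data.enc is now unreadable junk...
# ...unless the drive quietly kept a copy of disk.key under the "door mat",
# in which case nothing was erased at all.
```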
AES is a symmetric cipher, which means a single key. The key to encrypt is the same key that decrypts the data. If this single key can be recovered – you’re in BIG trouble. EVERYTHING depends on the strength of the encryption… now here we enter the realm of the cryptologists, and they have a lot to say about AES 128 and AES 256.
Amongst cryptologists, AES 128 & 256 have been openly criticised.
Schneier explored adding extra rounds to make AES more robust, but the cipher became too slow to use. The reduced number of rounds is AES’s Achilles’ heel. To read more on AES, look for Schneier’s work from 2000.
That “cryptographic erasure” doesn’t look so great now, does it?
Assumption 5 – Pages vs Blocks.
Flash memory is broken into pages and blocks. An analogy is pages and books. If we wish to erase a single page, SSD’s make us erase the entire book. The program operations that apply to pages can only change 1’s to 0’s.
Erasure operations only apply to Blocks (or the entire book), and set all the bits in a block to 1.
So SSD’s are not equipped for erasure. Yet if we can write to pages, why doesn’t the coding to ERASE pages exist? Surely the need to erase data was considered?
Assumption 6 – We can overwrite single files
Since we can only erase entire blocks, file level erasure is not possible. Overwriting a file only provides logical erasure.
Assumption 7 – Data Remnants – what’s this?
Digital remnants on SSD’s can range between 6% and 25% of the entire drive… yes, a quarter of the SSD may hold “remnants of data”, which has massive implications for privacy.
Assumption 8 – The Capacity of the drive is as advertised.
In SSD’s, the physical capacity is often larger than the advertised logical capacity (the extra space is used for spare blocks and wear-levelling).
Assumption 9 – Only 1 copy of a file exists.
In SSD’s, up to 16 stale copies of a file may exist. Even if you overwrite one file, there are another 15 copies that can be forensically recovered. Again, this single point has massive implications for privacy. How are you going to locate these stray copies?
“The differences between hard drives and SSD’s potentially lead to a dangerous disconnect between user expectations and the drive’s actual behaviour” (Wei et al, 2010).
I would suggest that the EU tests and verifies built-in drive erasure commands to ensure compliance with Trading Standards. If the onus is placed back on the manufacturer, the manufacturer will test and verify that erasure works; it’s simply a case of “follow the money”. Consumers cannot carry out such testing, as they do not have the technical ability to recover data and prove the code works. Therefore we need to force manufacturers to carry out “due diligence”.
Today is the second Birthday of the blog… and it’s time to reflect, on the incredible popularity it’s gained, which has blown me away. It’s gone from 14 hits a day… to 1,446 hits a day in just 2 years.
This year, in August, you guys, averaged an unbelievable 1,446 hits a day…. WOW!!
I’m so proud of my little blog, and so proud of you guys for supporting me.
By Christmas we may reach two-thirds to three-quarters of a million hits. That’s amazing for InfoSec…
PS. If you want any Kali tools, hacking tools, password crackers, VPN’s, Network Security, buffer overflows or encryption tutorials written, just leave a comment and tell me :)