
UWNTHESIS – Thank you for 1.5 million hits

Today, I noticed that the hits counter had gone well over 1.5 million views.  As you may know, a blog is required of University of South Wales students, so that lecturers can confirm that all reading is at thesis level.  It’s probably fair to say that no-one could have expected 1.5 million hits on an InfoSec blog.

Here are some facts about this blog:

No-one could have expected that a Kali HashCat password hacking article would be the most searched for article.

No-one could have expected that the VPN articles – covering OpenVPN and Obfsproxy, which give us the ability to bypass surveillance – would be used by gamers to bypass restrictions.

No-one could have expected the level of support that you’ve given me, or the level of input, which dramatically changed the direction of the thesis.

So a giant thank you to you all.  You did this!!

IVPN – VPN Speed Test of over 200 Mbps

Wow, what can I say?  We’ve just run a speed test on IVPN using servers in various towns.

  1. Woking = 137 Mbps

  2. Bracknell = 136 Mbps

  3. Reading = 216 Mbps (seriously… no joke)

The Reading server at one point reached 239 Mbps, after which it maxed out.

The settings were TCP port 80 > London Server > Obfsproxy (bypass censorship) on a Mac. Yes, a Mac, not a Windows laptop.

I knew that IVPN said their speeds weren’t throttled… but at over 200 Mbps – that’s twice the fastest speeds found in Sweden (100 Mbps).

We did not expect this result.

On the other hand, our location in South Wales didn’t fare so well on speed tests.

  1. Gloucester = 67 Mbps
  2. Newport = 29 Mbps
  3. Cardiff = 30 Mbps

30 Mbps is respectable, but disappointing compared with the speeds the VPN can reach when tested from the Reading servers.

The tests from Reading show what can be achieved – and it was amazing to watch.

VPN – So what do I look for from my VPN?

In the UK, we have the Snoopers’ Charter, which means your ISP stores your browsing habits for a year.  Unfortunately BT will hand over all the sites you’ve visited under a court order – which could easily be obtained by divorce lawyers, benefits agents and the taxman.

Think of your VPN as an agent – it keeps lawyers away.

No Logs.

It’s vital that the VPN keeps no server logs.  If the police seize a server and it has your IP logged, then you’re in trouble.  Torrentfreak publish an annual list of the providers that do not log.  Use this as your starting point.

No Court orders.

Ensure that the VPN has never complied with a court order.  If the VPN is set up correctly, the provider should be unable to comply, as they will not have the data.

If the VPN has complied with court orders – give them a miss.


OpenVPN.

You’re looking for OpenVPN.  Never use anything else.

Especially do not use the Windows 7 client VPN.  **Warning – M$ co-operate with any and all government agencies.  I’ve heard them boast about this at conferences.  The presenter really thought he was clever, and didn’t for one minute suspect a privacy person would be in the audience.

RSA Key size

Ideally we want RSA 4096.  Remember that the EU ENISA report recommended RSA 4096 as a minimum, and, looking forward, RSA 15360 when software allows it (it’s not built into software yet, but it gives us a heads-up that RSA 2048 is not acceptable).
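For context, RSA sizes are usually compared by their approximate symmetric-equivalent strength. Here is a small hedged sketch: the `acceptable` helper is my own illustration, and the 140-bit figure for RSA 4096 is an interpolation, while the other rows follow the commonly cited NIST SP 800-57 comparisons.

```python
# Rough symmetric-equivalent strengths (bits of security) for RSA moduli,
# per the commonly cited NIST SP 800-57 comparison table.
RSA_EQUIV_STRENGTH = {
    1024: 80,     # long since deprecated
    2048: 112,    # the minimum many VPNs still ship
    3072: 128,
    4096: 140,    # approximate: 4096 sits between the 3072 and 7680 rows
    7680: 192,
    15360: 256,   # the long-term ENISA/NIST target
}

def acceptable(modulus_bits: int, min_security_bits: int = 128) -> bool:
    """Return True if an RSA modulus meets the desired security level."""
    return RSA_EQUIV_STRENGTH.get(modulus_bits, 0) >= min_security_bits

print(acceptable(2048))   # RSA 2048 falls short of the 128-bit level
print(acceptable(4096))   # RSA 4096 clears it
```

This is why RSA 2048 keeps appearing on "not acceptable" lists: it offers roughly 112-bit security, below the 128-bit floor most current guidance assumes.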

Multiple Countries.

If you want access to BBC iPlayer, then you’ll need a UK IP.

Your VPN needs to provide a number of countries, so that you can switch easily.  IVPN offer 12 countries, NordVPN offer 41 countries. Don’t use UK/USA exit nodes for torrents – use a European exit node (usually the Netherlands is a good choice).


Obfsproxy.

This hides the OpenVPN traffic, so that China and other state authorities can’t detect your VPN.  Without this tool, deep packet inspection will find OpenVPN traffic – and in some countries that could prove dangerous.

Only two VPN providers offer this tool, called obfsproxy: NordVPN and IVPN.


Leak testing.

Check your VPN isn’t leaking with this tool – my results are shown below.

This should show your internal IP (assigned by the OpenVPN client) and the public IP of the OpenVPN server.  At no time is my real ISP-assigned IP revealed, nor my internal network IP – as shown.
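As a rough sketch of what a leak test distinguishes, Python’s standard `ipaddress` module can classify internal (private) versus public addresses. The `classify` helper and the example addresses are my own illustration, not part of any leak-test tool:

```python
import ipaddress

def classify(ip: str) -> str:
    """Label an address the way a leak-test page would."""
    addr = ipaddress.ip_address(ip)
    if addr.is_private:
        return "internal (tunnel or LAN side - fine to see)"
    return "public (should be the VPN server, never your ISP address)"

# A tunnel-side address handed out by an OpenVPN client:
print(classify("10.8.0.2"))
# A typical home-LAN address:
print(classify("192.168.1.10"))
# A well-known public resolver address, standing in for a VPN exit:
print(classify("8.8.8.8"))
```

The test passes if the only public address on display belongs to the VPN server; any other public address means your real ISP-assigned IP is leaking.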



IVPN – The VPN that I use every day

Which is the safest VPN on the market? Who do I use for a VPN?

The Privacy Wars Are About to Get A Whole Lot Worse – Excellent Article

It used to be that server logs were just boring utility files whose most dramatic moments came when someone forgot to write a script to wipe out the old ones and so they were left to accumulate until they filled the computer’s hard-drive and crashed the server.

Then, a series of weird accidents turned server logs into the signature motif of the 21st century, a kind of eternal, ubiquitous exhaust from our daily lives, the CO2 of the Internet: invisible, seemingly innocuous, but harmful enough, in aggregate, to destroy our world.

Here’s how that happened: first, there were cookies. People running web-servers wanted a way to interact with the people who were using them: a way, for example, to remember your preferences from visit to visit, or to identify you through several screens’ worth of interactions as you filled and cashed out a virtual shopping cart.

Then, Google and a few other companies came up with a business model. When Google started, no one could figure out how the company would ever repay its investors, especially as the upstart search-engine turned up its nose at the dirtiest practices of the industry, such as plastering its homepage with banner ads or, worst of all, selling the top results for common search terms.

Instead, Google and the other early ad-tech companies worked out that they could place ads on other people’s websites, and that those ads could act as a two-way conduit between web users and Google. Every page with a Google ad was able to both set and read a Google cookie with your browser (you could turn this off, but no one did), so that Google could get a pretty good picture of which websites you visited. That information, in turn, could be used to target you for ads, and the sites that placed Google ads on their pages would get a little money for each visitor. Advertisers could target different kinds of users – users who had searched for information about asbestos and lung cancer, about baby products, about wedding planning, about science fiction novels. The websites themselves became part of Google’s ‘‘inventory’’ where it could place the ads, but they also improved Google’s dossiers on web users and gave it a better story to sell to advertisers.

The idea caught the zeitgeist, and soon everyone was trying to figure out how to gather, aggregate, analyze, and resell data about us as we moved around the web.

Of course, there were privacy implications to all this. As early breaches and tentative litigation spread around the world, lawyers for Google and for the major publishers (and for publishing tools, the blogging tools that eventually became the ubiquitous ‘‘Content Management Systems’’ that have become the default way to publish material online) adopted boilerplate legalese, those ‘‘privacy policies’’ and ‘‘terms of service’’ and ‘‘end user license agreements’’ that are referenced at the bottom of so many of the pages you see every day, as in, ‘‘By using this website, you agree to abide by its terms of service.’’

As more and more companies twigged to the power of ‘‘surveillance capitalism,’’ these agreements proliferated, as did the need for them, because before long, everything was gathering data. As the Internet everted into the physical world and colonized our phones, we started to get a taste of what this would look like in the coming years. Apps that did innocuous things like turning your phone into a flashlight, or recording voice memos, or letting your kids join the dots on public domain clip-art, would come with ‘‘permissions’’ screens that required you to let them raid your phone for all the salient facts of your life: your phone number, e-mail address, SMSes and other messages, e-mail, location – everything that could be sensed or inferred about you by a device that you carried at all times and made privy to all your most sensitive moments.

When a backlash began, the app vendors and smartphone companies had a rebuttal ready: ‘‘You agreed to let us do this. We gave you notice of our privacy practices, and you consented.’’

This ‘‘notice and consent’’ model is absurd on its face, and yet it is surprisingly legally robust. As I write this in July of 2016, US federal appellate courts have just ruled on two cases that asked whether End User Licenses that no one read and no one understands and no one takes seriously are enforceable. The cases differed a little in their answer, but in both cases, the judges said that they were enforceable at least some of the time (and that violating them can be a felony!). These rulings come down as the entirety of America has been consumed with Pokémon Go fever, only to have a few killjoys like me point out that merely by installing the game, all those millions of players have ‘‘agreed’’ to forfeit their right to sue any of Pokémon’s corporate masters should the companies breach all that private player data. You do, however, have 30 days to opt out of this forfeiture; if Pokémon Go still exists in your timeline and you signed up for it in the past 30 days, send an e-mail to <> with the subject ‘‘Arbitration Opt-out Notice’’ and include in the body ‘‘a clear declaration that you are opting out of the arbitration clause in the Pokémon Go terms of service.’’

Notice and consent is an absurd legal fiction. Jonathan A. Obar and Anne Oeldorf-Hirsch, a pair of communications professors from York University and the University of Connecticut, published a working paper in 2016 called ‘‘The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services.’’ The paper details how the profs gave their students, who are studying license agreements and privacy, a chance to beta-test a new social network (this service was fictitious, but the students didn’t know that). To test the network, the students had to create accounts, and were given a chance to review the service’s terms of service and privacy policy, which prominently promised to give all the users’ personal data to the NSA, and demanded the students’ first-born children in return for access to the service. As you may have gathered from the paper’s title, none of the students noticed either fact, and almost none of them even glanced at the terms of service for more than a few seconds.

Indeed, you can’t examine the terms of service you interact with in any depth – it would take more than 24 hours a day just to figure out what rights you’ve given away that day. But as terrible as notice-and-consent is, at least it pretends that people should have some say in the destiny of the data that evanescences off of their lives as they move through time, space, and information.

The next generation of networked devices are literally incapable of participating in that fiction.

The coming Internet of Things – a terrible name that tells you that its proponents don’t yet know what it’s for, like ‘‘mobile phone’’ or ‘‘3D printer’’ – will put networking capability in everything: appliances, lightbulbs, TVs, cars, medical implants, shoes, and garments. Your lightbulb doesn’t need to be able to run apps or route packets, but the tiny, commodity controllers that allow smart lightswitches to control the lights anywhere (and thus allow devices like smart thermostats and phones to integrate with your lights and home security systems) will come with full-fledged computing capability by default, because that will be more cost-efficient than customizing a chip and system for every class of devices. The thing that has driven computers so relentlessly, making them cheaper, more powerful, and more ubiquitous, is their flexibility, their character of general-purposeness. That fact of general-purposeness is inescapable and wonderful and terrible, and it means that the R&D that’s put into making computers faster for aviation benefits the computers in your phone and your heart-monitor (and vice-versa). So everything’s going to have a computer.

You will ‘‘interact’’ with hundreds, then thousands, then tens of thousands of computers every day. The vast majority of these interactions will be glancing, momentary, and with computers that have no way of displaying terms of service, much less presenting you with a button to click to give your ‘‘consent’’ to them. Every TV in the sports bar where you go for a drink will have cameras and mics and will capture your image and process it through facial-recognition software and capture your speech and pass it back to a server for continuous speech recognition (to check whether you’re giving it a voice command). Every car that drives past you will have cameras that record your likeness and gait, that harvest the unique identifiers of your Bluetooth and other short-range radio devices, and send them to the cloud, where they’ll be merged and aggregated with other data from other sources.

In theory, if notice-and-consent was anything more than a polite fiction, none of this would happen. If notice-and-consent is necessary to make data-collection legal, then without notice-and-consent, the collection is illegal.

But that’s not the realpolitik of this stuff: the reality is that when every car has more sensors than a Google Streetview car, when every TV comes with a camera to let you control it with gestures, when every medical implant collects telemetry that is collected by a ‘‘services’’ business and sold to insurers and pharma companies, the argument will go, ‘‘All this stuff is both good and necessary – you can’t hold back progress!’’



Wow, great article.  I totally agree with these arguments: the “Internet of Things”, or IoT, is not a happy development.  Imagine the day when you get home early from work, put on the kettle, and the kettle tells your boss that you’ve finished early – and you get a written warning from your employer.  This is why surveillance is such a bad thing.  It’ll snitch on you to your boss, your landlord, the benefits agency, the dole, and of course the taxman.

There’s a reason the Stasi of East Germany were so unpopular… and it’s the same reason anyone who gets involved with the IoT will suffer.



VPN – Is this the Ultimate guide to a VPN 2016?

As you know, I adore accomplished VPN providers, particularly as the UK keeps trying to promote the “Snoopers Charter”.  Today I discovered someone has been researching the VPN market, and provided an extended set of results, with a traffic light colour coding.

I’m not recommending any VPN here, just asking that you choose a VPN that will keep you safe.

VPN Providers – Simple Results

[Image: simple VPN comparison table]

Filtered by 1. Logging and 2. Security

Results in 14 VPN providers being identified

[Image: VPNs filtered by logging & security]

You really do not want a VPN provider that logs your connections.  The server needs to be configured not to keep logs, or to actively wipe any logs every few minutes.  Keeping the logs in RAM, rather than on disk, is also important.

For instance, if server logs were only wiped once a week, then a court order could obtain all your connection records.  If a server is seized, there must be no logs, or your privacy has been breached.
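A toy sketch of the “logs live only in RAM, bounded, and wiped on a timer” idea. The `RamOnlyLog` class and its size limit are purely illustrative – not how any provider actually implements it:

```python
from collections import deque

class RamOnlyLog:
    """Connection log that lives only in memory and forgets quickly.

    Nothing is ever written to disk, and only the newest `maxlen`
    entries survive - older ones are discarded automatically.
    """
    def __init__(self, maxlen: int = 1000) -> None:
        self._entries = deque(maxlen=maxlen)

    def record(self, line: str) -> None:
        self._entries.append(line)

    def snapshot(self) -> list:
        return list(self._entries)

    def wipe(self) -> None:
        # In the scenario above, a timer would call this every few minutes.
        self._entries.clear()

log = RamOnlyLog(maxlen=3)
for i in range(5):
    log.record(f"conn {i}")
print(log.snapshot())   # only the 3 newest entries remain
log.wipe()
print(len(log.snapshot()))  # 0 - nothing left for a court order to seize
```

The point of the sketch: a seized disk holds nothing, and even a live memory dump only yields the last few minutes of bounded history.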

Filtered by 1. Logging 2. Security 3. Availability and 4. Privacytools criteria (my personal criteria).

Results in 7 VPN providers being identified

[Image: top 7 VPN providers]

What are the important services in these Top 7 providers?

  1. We see that they all offer OpenVPN = this is an excellent start.
  2. DNS servers = Definitely needed.  So really our top 7 is only 6 providers.

[Image: services offered by the top 7 VPN providers]

Now, we get down to the services that you would like to see.  Do you want multihop?  If you did, then we’re down to 4 providers.

If, like me, you’re a fan of Obfsproxy, then things are easier, as there are only 2 providers. Obfsproxy is designed to bypass OpenVPN censorship.  It hides OpenVPN traffic by use of an “obscuring proxy” server, hence the name.
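To make the principle concrete, here is a toy obfuscation sketch. This is not the real obfsproxy protocol (which uses proper handshakes and cryptography as a Tor pluggable transport); it only illustrates the idea that removing a recognisable byte pattern defeats naive DPI fingerprinting:

```python
import itertools

# Illustrative stand-in for a recognisable OpenVPN header byte (assumption,
# not the actual protocol constant).
OPENVPN_MAGIC = b"\x38"

def xor_obfuscate(data: bytes, key: bytes) -> bytes:
    """Toy transform: XOR the stream with a repeating key.

    Applying it twice with the same key recovers the original, so the
    proxy on the far end can strip the obfuscation before forwarding.
    """
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

packet = OPENVPN_MAGIC + b"payload-that-dpi-would-fingerprint"
key = b"\x5f\xa3"

wire = xor_obfuscate(packet, key)
assert wire != packet                       # pattern gone from the wire
assert xor_obfuscate(wire, key) == packet   # the peer recovers the original
print(wire[:4].hex())
```

Real obfs transports go much further (randomised handshakes, keyed encryption, timing cover), but the goal is the same: nothing on the wire matches an OpenVPN signature.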

3. P2P?  Read the restrictions carefully.

[Image: VPN P2P restrictions]

 4. Countries

The UK uses High Court orders to make ISPs block banned websites – a form of censorship.  It’s subtle, but the censorship is there.  However, if you wanted to watch BBC iPlayer, then a British IP would be needed.  Access to a range of countries is an invaluable asset in a VPN provider.

[Image: VPN countries offered]


Which VPN do I use?

It’s well known that I’ve used IVPN for years.  If a friend asked me would I recommend them, then the answer has to be yes.

So where did I get this VPN data?

Simple Excel spreadsheet

Extended Excel spreadsheet


And finally, some additional advice:

Always look for OpenVPN support.

Obfsproxy will bypass most anti-VPN technology.

[Image: VPN service comparison]


Use a Minimum of RSA 4096 (EU ENISA Report)

The EU’s ENISA recommended a minimum of RSA 4096.  RSA 15360 is recommended in the longer term (page 35).

[Image: VPN security comparison]



Good luck!

NordVPN Website


EU ENISA Report on Encryption

DNS Leaks and other issues with VPNs

Service Broker and Message Queue Server

The Krypt

SQL Server provides a relatively easy way to decouple the messaging from the data being communicated between applications, and provides a service bus for inter-service messaging. Service Broker takes the form of a separate collection of tables in an SQL database, each table being a messaging queue, and the operations are performed using a special variant of the SQL query language.

Advantages of Asynchronous Messaging
With synchronous communication, one application will send a message to the address of the receiving application, which might have a listener process on a TCP/UDP port. An acknowledgement message might be returned to the sending/requesting application, enabling it to switch to another messaging task.


While this point-to-point method of communication might be more efficient on a network with relatively few services, it’s not scalable. Dependencies might exist between the applications, and there must be a way of tracking or resolving application addresses.
Asynchronous messaging is…

View original post 668 more words
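The decoupling the excerpt describes can be sketched with an in-process queue standing in for a Service Broker queue table. This is a Python toy, not the actual SQL Server feature: the sender fires and forgets, while an independent worker drains the queue.

```python
import queue
import threading

# The sender enqueues and moves on; it never blocks waiting for the receiver.
messages = queue.Queue()
processed = []

def receiver():
    while True:
        msg = messages.get()
        if msg is None:            # sentinel: shut down the worker
            break
        processed.append(msg)      # stand-in for "handle the message"
        messages.task_done()

worker = threading.Thread(target=receiver)
worker.start()

for i in range(3):
    messages.put(f"order-{i}")     # returns immediately - fire and forget

messages.join()                    # wait until every message was handled
messages.put(None)
worker.join()
print(processed)
```

The design point carries over: the sender only depends on the queue, not on the receiver’s address or availability, which is exactly what makes the asynchronous model scale.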

Open Whisper Systems – Forward Secrecy, Advanced cryptographic ratcheting and other great things

Some companies have made encryption and privacy into non-negotiable building blocks, rather than an afterthought. Open Whisper Systems is an exemplary designer of encrypted systems.

This article deserves reading and re-reading by crypto fans.



Thieves can wirelessly unlock up to 100 million Volkswagens, each at the press of a button

Security researchers will demonstrate how crooks can break into cars at will using wireless signals that can unlock millions of vulnerable vehicles.

The eggheads, led by University of Birmingham computer scientist Flavio Garcia alongside colleagues from German engineering firm Kasper & Oswald, have managed to clone a VW Group remote control key fob after eavesdropping on the gizmos’ radio transmissions.

The hack can be used by thieves to wirelessly unlock as many as 100 million VW cars, each at the press of a button. Almost every vehicle the Volkswagen group has sold for the past 20 years – including cars badged under the Audi and Skoda brands – is potentially vulnerable, say the researchers. The problem stems from VW’s reliance on a “few, global master keys.”

During an upcoming presentation, titled Lock It and Still Lose It — on the (In)Security of Automotive Remote Keyless Entry Systems at the Usenix security conference (abstract below) – the researchers are also due to outline a different set of cryptographic flaws in keyless entry systems as used by car manufacturers including Ford, Mitsubishi, Nissan and Peugeot.

The two examples are designed to raise awareness and show that keyless entry systems are insecure and ought to be re-engineered in much the same way that car immobilisers were previously shown to provide less than adequate protection.

While most automotive immobiliser systems have been shown to be insecure in the last few years, the security of remote keyless entry systems (to lock and unlock a car) based on rolling codes has received less attention. In this paper, we close this gap and present vulnerabilities in keyless entry schemes used by major manufacturers. In our first case study, we show that the security of the keyless entry systems of most VW Group vehicles manufactured between 1995 and today relies on a few, global master keys. We show that by recovering the cryptographic algorithms and keys from electronic control units, an adversary is able to clone a VW Group remote control and gain unauthorised access to a vehicle by eavesdropping a single signal sent by the original remote.

Secondly, we describe the Hitag2 rolling code scheme (used in vehicles made by Alfa Romeo, Chevrolet, Peugeot, Lancia, Opel, Renault, and Ford among others) in full detail. We present a novel correlation-based attack on Hitag2, which allows recovery of the cryptographic key and thus cloning of the remote control with four to eight rolling codes and a few minutes of computation on a laptop. Our findings affect millions of vehicles worldwide and could explain unsolved insurance cases of theft from allegedly locked vehicles.
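As a sketch of how a rolling-code receiver is meant to work – and why a shared “global” key breaks it – here is a toy model. The HMAC construction, `Receiver` class, and window size are my own illustration; real schemes such as Hitag2 use proprietary ciphers, which is part of the problem:

```python
import hashlib
import hmac

# VW's mistake, per the researchers: this secret was effectively global
# across millions of cars rather than unique per vehicle.
SHARED_SECRET = b"per-vehicle-key"

def code_for(counter: int, secret: bytes = SHARED_SECRET) -> str:
    """Derive the rolling code for a given counter value."""
    return hmac.new(secret, counter.to_bytes(4, "big"),
                    hashlib.sha256).hexdigest()[:8]

class Receiver:
    """Accepts codes within a look-ahead window to tolerate missed presses."""
    def __init__(self, window: int = 16) -> None:
        self.counter = 0
        self.window = window

    def try_unlock(self, code: str) -> bool:
        for step in range(1, self.window + 1):
            if hmac.compare_digest(code, code_for(self.counter + step)):
                self.counter += step   # resync; older codes are now dead
                return True
        return False

car = Receiver()
press1 = code_for(1)
print(car.try_unlock(press1))       # True - fresh code
print(car.try_unlock(press1))       # False - a replayed code is rejected
# But an eavesdropper who knows SHARED_SECRET can simply mint the next code:
print(car.try_unlock(code_for(2)))  # True - the cloning attack
```

Replay protection works as designed; the scheme collapses because recovering one shared secret lets an attacker generate valid future codes for any car.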

Garcia was previously blocked from giving a talk about weaknesses in car immobilisers following a successful application to a British court by Volkswagen. This earlier research on how the ignition key used to start cars might be subverted was eventually presented last year, following a two year legally enforced postponement.

The latest research shows how tech-savvy thieves might be able to unlock cars locked by the vehicles’ owners without covering how their engines might subsequently be turned on.

WiReD reports that both attacks might be carried out using a cheap $40 piece of radio hardware to intercept signals from a victim’s key fob. Alternatively, a software defined radio rig connected to a laptop might be employed. Either way, captured data can be used to make counterfeit kit. ®

Microsoft accidentally releases backdoor keys to bypass UEFI Secure Boot

Secure Boot is a security feature that protects your device from certain types of malware, such as rootkits, which can hijack your system bootloader. Secure Boot also prevents you from running any non-Microsoft operating system on your device.

In other words, when Secure Boot is enabled, you will only be able to boot Microsoft-approved (cryptographically signature-checked) operating systems.

However, the Golden Keys disclosed by two security researchers, using the aliases MY123 and Slipstream, can be used to install non-Windows operating systems, say GNU/Linux or Android, on devices protected by Secure Boot.

Moreover, according to the blog post published by the researchers, it is impossible for Microsoft to fully revoke the leaked keys, potentially giving law enforcement (such as the FBI and NSA) a special backdoor that can be used to unlock Windows-powered devices in criminal cases.

The issue actually resides in the Secure Boot policy loading system, where a specially signed policy loads early and disables the operating system signature checks, The Register reports.

This specific Secure Boot policy was created and signed by Microsoft for developers, testers, and programmers for debugging purposes.

“During the development of Windows 10 v1607 ‘Redstone,’ MS added a new type of secure boot policy. Namely, “supplemental” policies that are located in the EFIESP partition…” researcher said.

“…a backdoor, which MS put into secure boot because they decided to not let the user turn it off in certain devices, allows for secure boot to be disabled everywhere!”
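A toy model of why a signed “disable the checks” policy is so damaging. All names and the HMAC signing stand-in are illustrative; real Secure Boot uses X.509 certificates and Authenticode signatures, not this scheme:

```python
import hashlib
import hmac

PLATFORM_KEY = b"oem-signing-key"   # illustrative stand-in for the OEM key

def sign(image: bytes) -> bytes:
    """Stand-in for signing a boot image with the platform key."""
    return hmac.new(PLATFORM_KEY, image, hashlib.sha256).digest()

def boot(image: bytes, signature: bytes,
         policy_disables_checks: bool = False) -> str:
    # The leaked "supplemental policy" effectively set this flag for everyone.
    if policy_disables_checks:
        return "booted (unchecked!)"
    if hmac.compare_digest(sign(image), signature):
        return "booted (verified)"
    return "refused"

linux = b"some-unsigned-os-image"
print(boot(linux, b"\x00" * 32))                               # refused
print(boot(linux, b"\x00" * 32, policy_disables_checks=True))  # booted (unchecked!)
```

Because the debug policy itself carries a valid Microsoft signature, firmware trusts it, and once loaded it switches every device into the “checks disabled” branch – which is why revocation is so hard after the leak.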

Yesterday, Microsoft released its August Patch Tuesday updates, which include a security patch for the design flaw in Secure Boot – the second in two months – but unfortunately, the patch is not complete.

Researchers crack open unusually advanced malware that hid for 5 years

Espionage platform with more than 50 modules was almost certainly state sponsored.

The name “Project Sauron” came from code contained in one of the malware’s configuration files.

Security experts have discovered a malware platform that’s so advanced in its design and execution that it could probably have been developed only with the active support of a nation-state.

The malware—known alternatively as “ProjectSauron” by researchers from Kaspersky Lab and “Remsec” by their counterparts from Symantec—has been active since at least 2011 and has been discovered on 30 or so targets. Its ability to operate undetected for five years is a testament to its creators, who clearly studied other state-sponsored hacking groups in an attempt to replicate their advances and avoid their mistakes. State-sponsored groups have been responsible for malware like the Stuxnet- or National Security Agency-linked Flame, Duqu, and Regin. Much of ProjectSauron resides solely in computer memory and was written in the form of Binary Large Objects, making it hard to detect using antivirus.

Because of the way the software was written, clues left behind by ProjectSauron in so-called software artifacts are unique to each of its targets. That means that clues collected from one infection don’t help researchers uncover new infections. Unlike many malware operations that reuse servers, domain names, or IP addresses for command and control channels, the people behind ProjectSauron chose a different one for almost every target.

“The attackers clearly understand that we as researchers are always looking for patterns,” Kaspersky researchers wrote in a report published Monday. “Remove the patterns and the operation will be harder to discover. We are aware of more than 30 organizations attacked, but we are sure that this is just a tiny tip of the iceberg.” Symantec researchers, in a report of their own, said they were aware of seven organizations infected.

Jumping air gaps

Part of what makes ProjectSauron so impressive is its ability to collect data from computers considered so sensitive by their operators that they have no Internet connection. To do this, the malware uses specially prepared USB storage drives that have a virtual file system that isn’t viewable by the Windows operating system. To infected computers, the removable drives appear to be approved devices, but behind the scenes are several hundred megabytes reserved for storing data that is kept on the “air-gapped” machines. The arrangement works even against computers in which data-loss prevention software blocks the use of unknown USB drives.

Kaspersky researchers still aren’t sure precisely how the USB-enabled exfiltration works. The presence of the invisible storage area doesn’t in itself allow attackers to seize control of air-gapped computers. The researchers suspect the capability is used only in rare cases and requires use of a zero-day exploit that has yet to be discovered. In all, Project Sauron is made up of at least 50 modules that can be mixed and matched to suit the objectives of each individual infection.

“Once installed, the main Project Sauron modules start working as ‘sleeper cells,’ displaying no activity of their own and waiting for ‘wake-up’ commands in the incoming network traffic,” Kaspersky researchers wrote in a separate blog post. “This method of operation ensures Project Sauron’s extended persistence on the servers of targeted organizations.”

Kaspersky researchers said they discovered the malware last September after a customer at an unidentified government organization hired them to investigate anomalous network traffic. They eventually unearthed a “strange” executable program library that was loaded into the memory of one of the customer’s domain controller servers. The library was masquerading as a Windows password filter, which is something administrators typically use to ensure passwords match specific requirements for length and complexity. The module started every time a network or local user logged in or changed a password, and it was able to view passcodes in plaintext.

The main purpose of the malware platform was to obtain passwords, cryptographic keys, configuration files, and IP addresses of the key servers related to any encryption software that was in use. Infected groups include government agencies, scientific research centers, military organizations, telecommunication providers, and financial institutions in Russia, Iran, Rwanda, China, Sweden, Belgium, and possibly in Italian-speaking countries.

Kaspersky researchers estimate that development and operation of the Sauron malware is likely to have required several specialist teams and a budget in the millions of dollars. The researchers went on to speculate that the project was funded by a nation-state, but they stopped short of saying which one.
