
USW win Cyber University of the Year – 2019

USW are the winners!

 

USW won the title of Cyber University of the Year – 2019.

USW’s Cyber team picked up a national award at the National Cyber Awards 2019 in April, being named the Best University for Cyber Education in the country.

The awards aim to reward those who are committed to cyber innovation, cyber crime reduction and protecting the citizen online.

Andrew Bellamy, course leader in Computer Forensics, said: “It’s great that Cyber at USW is being recognised nationally for the work we do around outreach, innovative teaching and our engagement with industry.

“This award really does validate all the hard work and dedication of the team who teach on our cyber courses.

 

WOOHOO!  That’s my iconic Uni!

 

USW link:

https://www.southwales.ac.uk/study/subjects/computing/?content=3152

Finalists and winners are listed here:

https://thenationalcyberawards.org/2019-winners/

 

 

Welsh Government – South Wales Cyber Cluster

 

BACKGROUND TO THE FORMATION OF CYBER SECURITY CLUSTERS

In 2011, the UK Government’s updated National Security Strategy classed “cyber security” as one of the top priorities for national security, alongside international terrorism, international military crises and natural disasters.

In response to this classification, the Government published the UK Cyber Security Strategy in November 2011. It sets out how the UK will support economic prosperity and protect national security by building a more trusted and resilient digital environment.

Four Strategy Objectives were published in the UK Cyber Security Strategy:

  • Make the UK one of the most secure places in the world to do business in cyberspace
  • Make the UK more resilient to cyber attack and better able to protect our interests in cyberspace
  • Help shape an open, vibrant and stable cyberspace that supports open societies
  • Build the UK’s cyber security knowledge, skills and capability

Each year, Cabinet Office Minister Francis Maude has provided a Progress Report explaining what has been implemented to prevent cyber crime and make the UK a safer place to do business. For example:

  • Setting up a National Cyber Crime Unit within the National Crime Agency, bringing together the Police Central e-Crime Unit and the Serious Organised Crime Agency (SOCA)
  • Building the Cyber Security Information Sharing Partnership (CISP) with businesses to allow the exchange of information on cyber threats in a trusted environment
  • Providing cyber security advice to businesses, such as the 10 Steps to Cyber Security booklet, already downloaded by more than 7,000 businesses.
  • Working with industry on cyber security standards such as IASME and the Cyber Essentials Scheme to give organisations a clear baseline to aim for to protect themselves against the most common cyber security threats.
  • Making more than £500,000 available to UK SMEs via Cyber Security Innovation Vouchers to improve their cyber security and protect their business ideas, administered by the Technology Strategy Board.

In addition to these Government initiatives, indirect support is also provided for individuals, organisations and groups who want to make a contribution to the combined effort of achieving the UK Cyber Security Strategy Objectives.

A good example of this is the formation of Cyber Security Clusters.

A number of Clusters have been formed that are centred around universities. There are 12 centres for excellence in academic research in this field, where the body of knowledge on cyber crime and cyber security is being expanded by students and faculties. These Clusters increasingly involve input from specialist cyber security companies and organisations whose “real-world” experience helps to accelerate the learning process and act as a catalyst for ideas and a proving ground for innovation.

https://southwalescyber.net/south-wales-cluster-formed/


KALI – How to configure the root MySQL password using phpMyAdmin for Mutillidae web penetration testing

In Kali, the XAMPP and Mutillidae frameworks are used for web pen testing.  However, the php.ini.config file needs to be edited to include the Mutillidae password.

Browse to /opt, open the phpmyadmin folder, then edit php.ini.config. In the cookie section, set:

user = 'root'
password = 'mutillidae'

 

 

Here’s a video of this:

how to set root password on mysql for mutillidae

KALI Linux – How to install Kali 2019 using VirtualBox – Visual Guide

Recently I found this blog, which is a nice visual guide on how to install Kali Linux using an OVA file, where all the work has been done for us.

One word of caution: if you import the OVA without the VirtualBox Extension Pack, the VM will fail, as there’s no USB 2.0 support.  Install the Extension Pack and the VM will launch fine.

How to install Kali Linux using VirtualBox – Visual Guide

WireGuard VPN – pdf on ChaCha20 encryption

I’ve been asked to beta-test the new WireGuard VPN on Android, and I have to say that over the last three months the results have been impressive.

WireGuard uses modern ciphers such as ChaCha20 and Poly1305.

Here’s a PDF on the ChaCha20 cipher.

ChaCha20 Cipher
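To give a flavour of how simple the ChaCha20 core is, here is its quarter-round in Python, checked against the test vector from RFC 8439 §2.1.1 (a minimal sketch for illustration, not a usable cipher):

```python
# ChaCha20 quarter-round (RFC 8439): mixes four 32-bit words using only
# additions, XORs and rotations -- cheap on CPUs without AES hardware.
MASK = 0xFFFFFFFF  # keep arithmetic within 32 bits

def rotl(x, n):
    """Rotate the 32-bit word x left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    a = (a + b) & MASK; d = rotl(d ^ a, 16)
    c = (c + d) & MASK; b = rotl(b ^ c, 12)
    a = (a + b) & MASK; d = rotl(d ^ a, 8)
    c = (c + d) & MASK; b = rotl(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 8439 §2.1.1
assert quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567) == \
    (0xEA2A92F4, 0xCB1CF8CE, 0x4581472E, 0x5881C4BB)
```

The full cipher just applies this quarter-round repeatedly over a 16-word state, which is why ChaCha20 is so fast in software.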

 

Info on Poly1305

Poly1305

 

As WireGuard is still in testing, it’s not recommended where security is critical, but watch out for this system, as it’s looking fast and robust in connection terms.

 

Logging Concerns of WireGuard.

Many VPN providers point out that WireGuard’s design keeps peer state on the server (such as the client’s last endpoint IP), which sits uneasily with no-logging policies.  No VPN provider should keep logs of any kind – and if they do, then don’t use their services.

It will be interesting to see how VPN providers restrict all logging, as this is fundamental to privacy.

WireGuard Logging issues and Concerns

 

How to Exploit OpenID Connect

Here’s how to exploit any failures in security.

exploit openidconnect

Reference:

Preventing mix up attacks against OpenID Connect

Broadly, the attacks consist of using dynamic client registration, or the compromise of an OpenID Provider (OP), to trick the Relying Party (RP) into sending an authorization code to the attacker’s Token Endpoint. Once a code is stolen, an attack that involves cutting and pasting values and state in authorization requests and responses can be used to confuse the relying party into binding an authorization to the wrong user.

Many deployments of OpenID Connect (and OAuth) in which the configuration is static, and the OPs are trusted, are at greatly reduced risk of these attacks. Despite that, these suggestions are best current practices that we recommend to all deployments to improve security, with a particular emphasis on more dynamic environments.

The full research papers on these attacks can be read here: A Comprehensive Formal Security Analysis of OAuth 2.0, and On the security of modern Single Sign-On Protocols: Second-Order Vulnerabilities in OpenID Connect.

 

Using the Hybrid Flow to mitigate attacks by a bad OP

Fortunately, the Hybrid flow of OpenID Connect is already hardened against these attacks, as the ID Token cryptographically binds the issuer to the code, and the user’s session, and through doing dynamic discovery on the issuer, the token endpoint. In fact, any OpenID Connect flow that returns an ID Token from the Authorization Endpoint already contains the same information returned by the OAuth 2.0 Mix-Up Mitigation draft specification, the Issuer (as the iss claim) and the Client ID (as the aud claim), enabling the RP to verify it, and thus prevent mix-up attacks.

To protect against the Mix-Up attack, RPs that allow user-driven dynamic OP discovery and client registration should:

Use the hybrid code id_token flow, and verify in the authorization response that:

  1. The response contains tokens required for the response type that you requested (code id_token).
  2. The ID Token is valid (signature validates, aud is correct).
  3. The issuer (iss value) matches the OP that the request was made to, and the token endpoint you will exchange the code at is the one listed in the issuer’s discovery document.
  4. The nonce value matches the nonce associated with the user session that initiated the authorization request.
  5. The c_hash value verifies correctly.
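A minimal Python sketch of checks 2–5 above, assuming the ID Token’s signature has already been validated by a JOSE library and that the token is RS256-signed (so c_hash uses the left half of a SHA-256 hash); all names here are illustrative:

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as used for c_hash."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def verify_hybrid_response(claims, code, expected_issuer, client_id, session_nonce):
    """Verify ID Token claims from a 'code id_token' authorization response."""
    if claims["iss"] != expected_issuer:   # check 3: issuer matches the OP we asked
        return False
    if claims["aud"] != client_id:         # check 2: token is meant for this client
        return False
    if claims["nonce"] != session_nonce:   # check 4: bound to this user session
        return False
    # check 5: c_hash is the base64url of the left half of SHA-256(code)
    digest = hashlib.sha256(code.encode("ascii")).digest()
    if claims["c_hash"] != b64url(digest[: len(digest) // 2]):
        return False
    return True
```

The code exchange itself must additionally go to the token endpoint listed in the issuer’s discovery document, as noted in check 3.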

To aid the implementation of the best practice, we recommend that OPs consider supporting OAuth 2.0 Form Post Response Mode, as it makes it simpler for clients doing code id_token to get both the code and the ID Token on the backend for verification.

OPs MUST also follow the OpenID Connect requirement for exact matching of a pre-registered redirect URI, to protect against open redirector attacks.

Using the Code Flow to mitigate attacks involving a compromised OP

Environments with statically registered OPs are not susceptible to dynamic registration attacks (by definition); however, it is still possible for a whitelisted OP to attack other OPs, and for malicious users to bind stolen codes to their own sessions. This may sound far-fetched (why would your trusted OPs attack each other, after all?), but if one OP were compromised, for example, it could be used to attack the other OPs, which is not ideal. To protect against such attacks, RPs using the “code” flow with statically registered OPs should:

  1. Register a different redirect URI for each OP, and record the redirect URI used in the outgoing authorization request in the user’s session together with state and nonce. On receiving the authorization code in the response, verify that the user’s session contains the state value of the response, and that the redirect URI of the response matches the one used in the request.
  2. Always use nonce with the code flow (even though that parameter is optional to use). After performing the code exchange, compare the nonce in the returned id token to the nonce associated to the user’s session from when the request was made, and don’t accept the authorization if they don’t match.
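A sketch of both mitigations in Python (the session storage and names are assumptions, not a prescribed API):

```python
import secrets

# Mitigation 1: a different, pre-registered redirect URI per OP (illustrative values)
REDIRECT_URIS = {
    "https://op-a.example.com": "https://rp.example.com/cb/op-a",
    "https://op-b.example.com": "https://rp.example.com/cb/op-b",
}

def start_auth(session, issuer):
    """Record issuer, state, nonce and redirect URI in the user's session."""
    session["issuer"] = issuer
    session["state"] = secrets.token_urlsafe(16)
    session["nonce"] = secrets.token_urlsafe(16)  # mitigation 2: always send nonce
    session["redirect_uri"] = REDIRECT_URIS[issuer]
    return session

def verify_callback(session, received_state, callback_uri):
    """On receiving the code, check state and the redirect URI actually used."""
    return (received_state == session.get("state")
            and callback_uri == session.get("redirect_uri"))
```

After exchanging the code, the nonce in the returned ID Token must also equal the session’s stored nonce, or the authorization is rejected.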

Reference

https://openid.net/2016/07/16/preventing-mix-up-attacks-with-openid-connect/

OAUTH2 – Common Attack Vectors

There are few OAuth2 or OpenID Connect resources that *actually* make sense on first read, but luckily there is one blog that can be recommended.

https://leastprivilege.com/2013/03/15/common-oauth2-vulnerabilities-and-mitigation-techniques/

Step 1 – Check the Flow Type.

Authorisation Code is the most secure flow type, whereas the Implicit Flow is much less secure, transmitting tokens via the browser (yikes).
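To make the difference concrete, here’s a sketch of the authorization request each flow sends (the endpoint and client values are made up for illustration):

```python
from urllib.parse import urlencode

AUTHZ_ENDPOINT = "https://op.example.com/authorize"  # hypothetical OP

def authz_url(response_type):
    """Build an OAuth2 authorization request for the given flow type."""
    params = {
        "response_type": response_type,  # "code" = authz code flow, "token" = implicit
        "client_id": "demo-client",
        "redirect_uri": "https://rp.example.com/cb",
        "scope": "openid",
        "state": "af0ifjsldkj",
    }
    return AUTHZ_ENDPOINT + "?" + urlencode(params)

# Code flow: only a short-lived code crosses the browser; the token is fetched
# server-to-server.  Implicit flow: the access token itself comes back in the
# browser fragment, which is why it is the weaker choice.
code_url = authz_url("code")
implicit_url = authz_url("token")
```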

Step 2 – Redirect URI – does it use SSL?

Check if SSL is used (the author probably means TLS).


Step 3 – Change Response Type from Code to Token

Beautiful if this works!  Buy a lottery ticket, as your luck is in.


Step 4 – Bearer Tokens need to be protected by SSL/TLS, as there’s no message-level encryption as in Shibboleth

SAML (i.e. Shibboleth) uses public-key encryption (separate encryption and signing keys exist).

In OAuth2, bearer tokens are not encrypted – therefore, if the software developer didn’t use TLS to protect the token in transit, then, as InfoSec, you’ll need to take a closer look to close that attack vector.
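A trivial review check along those lines (a sketch; the endpoint URLs are illustrative):

```python
from urllib.parse import urlparse

def carried_over_tls(endpoint: str) -> bool:
    """Bearer tokens have no protection of their own, so any endpoint that
    receives one must be reached over HTTPS."""
    return urlparse(endpoint).scheme == "https"

assert carried_over_tls("https://op.example.com/token")
assert not carried_over_tls("http://op.example.com/token")  # flag for review
```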

 

 

 

How to harden TLS and SSH – Linux Journal

At the moment, my task is to select secure ciphers for SSH.  There is an excellent Linux Journal article that assisted me – and confirmed my bias against 3DES.

Here we go:

Argument 1 – Strong Ciphers in TLS

The Transport Layer Security (TLS) protocols emerged from the older Secure Sockets Layer (SSL) that originated in the Netscape browser and server software.

It should come as no surprise that SSL must not be used in any context for secure communications. The last version, SSLv3, was rendered completely insecure by the recent POODLE exploit. No version of SSL is safe for secure communications of any kind—the design of the protocol is fatally flawed, and no implementation of it can be secure.

TLS version 1.0 is also no longer safe. The immediate preference for secure communication is the modern TLS version 1.2 protocol, which, unfortunately, is not (yet) widely used. Despite the lack of popularity, prefer 1.2 if you value security.

Yet, even with TLS version 1.2, there still are a number of important weaknesses that must be addressed to meet current best practice as specified in RFC 7525:

  • “Implementations MUST NOT negotiate RC4 cipher suites.” The RC4 cipher is enabled by default in many versions of TLS, and it must be disabled explicitly. This specific issue was previously addressed in RFC 7465.
  • “Implementations MUST NOT negotiate cipher suites offering less than 112 bits of security, including so-called ‘export-level’ encryption (which provide 40 or 56 bits of security).” In the days of SSL, the US government forced weak ciphers to be used in encryption products sold or given to foreign nationals. These weak “export” ciphers were created to be easily broken (with sufficient resources). They should have been removed long ago, and they recently have been used in new exploits against TLS.
  • “Implementations MUST NOT negotiate SSL version 3.” This formalizes our distaste for the entire SSL suite.
  • “Implementations SHOULD NOT negotiate TLS version 1.0 (or) 1.1.” Prefer TLS 1.2 whenever possible.

Summary – get rid of TLS 1.0, TLS 1.1, SSLv3, export ciphers, and RC4.
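As a sketch, that summary maps onto Apache mod_ssl directives along these lines (exact behaviour depends on your Apache and OpenSSL versions; the cipher string is illustrative):

```apache
# Allow only TLS 1.2: drop SSLv2/SSLv3 and TLS 1.0/1.1
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
# Exclude anonymous, RC4 and export-grade cipher suites
SSLCipherSuite HIGH:!aNULL:!RC4:!EXPORT
```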

 

Argument 2 – Compression OFF

If possible under your release of Apache, also issue an SSLCompression Off directive. Compression should not be used with TLS because of the CRIME attack.
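For reference, the directive is a single line (available in recent Apache releases):

```apache
# Disable TLS-level compression (mitigates the CRIME attack)
SSLCompression off
```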

 

Argument 3 – Restart web server daily for perfect forward secrecy

It is also important to restart your TLS Web server for key regeneration every day, as is mentioned in the Apache changelog:

Session ticket creation uses a random key created during web server startup and recreated during restarts. No other key recreation mechanism is available currently. Therefore using session tickets without restarting the web server with an appropriate frequency (e.g. daily) compromises perfect forward secrecy.

This information is not well known, and has been met with some surprise and dismay in the security community: “You see, it turns out that generating fresh RSA keys is a bit costly. So modern web servers don’t do it for every single connection. In fact, Apache mod_ssl by default will generate a single export-grade RSA key when the server starts up, and will simply re-use that key for the lifetime of that server” (from http://blog.cryptographyengineering.com/2015/03/attack-of-week-freak-or-factoring-nsa.html).
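As a sketch, a nightly restart can be scheduled via cron (the time, service name and /etc/cron.d style are assumptions to adapt to your distribution):

```text
# /etc/cron.d/restart-httpd -- hypothetical entry: restart Apache nightly so
# mod_ssl generates a fresh session ticket key (preserving forward secrecy)
17 4 * * * root systemctl restart httpd
```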

 

Argument 4 – Strong Ciphers in SSH

It is now well-known that (some) SSH sessions can be decrypted (potentially in real time) by an adversary with sufficient resources. SSH best practice has changed in the years since the protocols were developed, and what was reasonably secure in the past is now entirely unsafe.

The first concern for an SSH administrator is to disable protocol 1 as it is thoroughly broken. Despite a stream of vendor updates, older Linux releases maintain this flawed configuration, requiring the system manager to remove it by hand. Do so by ensuring “Protocol 2” appears in your sshd_config, and all reference to “Protocol 2,1” is deleted. Encouragement also is offered to remove it from client SSH applications as well, in case a server is inaccessible or otherwise overlooked.
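In sshd_config terms, the check above is a single line (path shown for a typical Linux layout):

```text
# /etc/ssh/sshd_config -- SSHv1 is thoroughly broken; allow protocol 2 only
Protocol 2
```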


 

Argument 5 – PuTTY and HMACs

For older versions of SSH, I turn to the Stribika Legacy SSH Guide, which contains relevant configuration details for Oracle Linux 5, 6 and 7.

There are only two recommended sshd_config changes for Oracle Linux 5:


Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-ripemd160

Unfortunately, the PuTTY suite of SSH client programs for Win32 is incompatible with the MACs hmac-ripemd160 setting and will not connect to a V5 server when this configuration is implemented. As PuTTY has quietly become a corporate standard, this is likely an insurmountable incompatibility, so most enterprise deployments will implement only the Ciphers directive.

Version 0.58 of PuTTY also does not implement the strong AES-CTR ciphers (these appear to have been introduced in the 0.60 release) and likewise will not connect to an SSH server where they are used exclusively. It is strongly recommended that you implement the Ciphers directive, as it removes RC4 (arcfour), which is totally inappropriate for modern SSH. It is not unreasonable to expect corporate clients to run the latest version of PuTTY, as new releases are trivially easy to install.

 

Argument 6 – We disable 3DES in SSH

The Stribika Guide immediately dismisses the 3DES cipher, which is likely reasonable as it is slow and relatively weak, but also goes to some length to criticize the influence of NIST and the NSA. In the long view, this is not entirely fair, as the US government’s influence over the field of cryptography has been largely productive. To quote cryptographer Bruce Schneier, “It took the academic community two decades to figure out that the NSA ‘tweaks’ actually improved the security of DES….DES did more to galvanize the field of cryptanalysis than anything else.” Despite unfortunate recent events, modern secure communication has much to owe to the Data Encryption Standard and those who were involved in its introduction.

Stribika levels specific criticism:

…advising against the use of NIST elliptic curves because they are notoriously hard to implement correctly. So much so, that I wonder if it’s intentional. Any simple implementation will seem to work but leak secrets through side channels. Disabling them doesn’t seem to cause a problem; clients either have Curve25519 too, or they have good enough DH support. And ECDSA (or regular DSA for that matter) is just not a good algorithm, no matter what curve you use.

In any case, there is technical justification for leaving 3DES in TLS, but removing it from SSH—there is a greater financial cost when browsers and customers cannot reach you than when your administrators are inconvenienced by a software standards upgrade.

 

Summary

As I have to take decisions on which ciphers to block/disable in SSH, this blog post summarises the actions to be taken across hundreds of servers.  I dislike 3DES; however, this also means that I reluctantly allow it in TLS, where outdated browsers may be used.  For SSH, 3DES must go!  Yay!  Me? Biased?

Reference:

Linux Journal

https://www.linuxjournal.com/content/cipher-security-how-harden-tls-and-ssh
