
How to use HashCheck on Windows 10 for SHA-256 and SHA-512 file hashing.

The most robust way to *prove* that a file has not been tampered with is to use a hashing algorithm, which generates a fixed-length number that represents the file.  The length of the hash depends on the hashing type, so an MD5 hash will be much shorter than a SHA-256 or SHA-512 hash.

To be “secure” in hashing terms means that it is computationally infeasible for two files to generate the same number.  If two files do have the same hash, this is called a collision, and it means the algorithm is unsafe. Avoid the legacy MD5 and SHA-1 algorithms, and go for SHA-256 or SHA-512, as these are secure.

If you use Windows 10, there is a free, open-source hashing tool that works in a Windows properties tab, and can generate evidence of the hash.

 

Step 1 – Download HashCheck 2.4.0

HashCheck can be downloaded here:

https://www.neowin.net/news/hashcheck-240

Download: HashCheck 2.4.0 | 497 KB (Open Source)

This new version supports SHA-256 as the default, and SHA-512 if selected.

 

Step 2 – Right-click on any file

HashCheck introduces a new tab into Windows, called File Hashes. Click on this tab to view the hash value of the file.

Right-click on any file > select Properties > File Hashes tab

Here I selected a file called “broadcast storm commands for cpu usage.png”.

file hashes tab

Step 3 – Generate a record of a file hash – in a new file

Sometimes, you may need to permanently record the hash value of a file.

HashCheck allows us to do this, using the Windows right-click menu.

Right-click on a file > Create Checksum File

create checksum file

The default hash is SHA-256.  Use the drop-down box to change to SHA-512, if needed.

file hashes drop down list

A new file will be created that contains both the hash and the name of the selected file.  The new file will have a large red tick on it.

red tick

Double-click on the red tick file – it opens up and checks that the hash matches. A match proves the data has not been tampered with.

red tick file open

This is especially useful for checking archived files that hold read-only data or must not be altered in any way.  Where data integrity is critical, this automatically checks the hash to ensure the integrity is proven.
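If you need to reproduce the same check in a script, or verify a hash on a machine without HashCheck installed, the hash can be recomputed with a few lines of Python. This is a minimal sketch: the file name is the example from this article, and the expected value would come from the checksum file HashCheck created.

import hashlib

def file_hash(path, algorithm="sha256"):
    # Hash the file in chunks so large files do not exhaust memory.
    h = hashlib.new(algorithm)  # "sha256" or "sha512"
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

digest = file_hash("broadcast storm commands for cpu usage.png")
print(digest)

# Paste the hash recorded in the checksum file here to verify it.
expected = "..."  # value from the checksum file created by HashCheck
print("MATCH" if digest == expected else "MISMATCH")

A match proves the file is byte-for-byte identical to the one that was originally hashed.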

 


AMD Security Flaws – AMD only given 24 hours advance notice

Today, CTS-Labs, a security company based in Israel, has published a whitepaper identifying four classes of potential vulnerabilities of the Ryzen, EPYC, Ryzen Pro, and Ryzen Mobile processor lines. AMD is in the process of responding to the claims, but was only given 24 hours of notice rather than the typical 90 days for standard vulnerability disclosure. No official reason was given for the shortened time.

As of 3/13 at 5:40pm ET, AMD has since opened a section on its website to respond to these issues. At present, the statement says:

“We have just received a report from a company called CTS Labs claiming there are potential security vulnerabilities related to certain of our processors. We are actively investigating and analyzing its findings. This company was previously unknown to AMD and we find it unusual for a security firm to publish its research to the press without providing a reasonable amount of time for the company to investigate and address its findings. At AMD, security is a top priority and we are continually working to ensure the safety of our users as potential new risks arise. We will update this blog as news develops.”

At this point AMD has not confirmed any of the issues brought forth in the CTS-Labs whitepaper, so we cannot confirm if the findings are accurate. It has been brought to our attention that some press were pre-briefed on the issue, perhaps before AMD was notified, and that the website that CTS-Labs has set up for the issue was registered on February 22nd, several weeks ago. Given the level of graphics on the site, it does look like a planned ‘announcement’ has been in the works for a little while, seemingly with little regard for AMD’s response on the issue. This contrasts with Meltdown and Spectre, which were shared among the affected companies several months before a planned public disclosure. CTS-Labs has also hired a PR firm to deal with incoming requests for information, which is also an interesting avenue to the story, as this is normally not the route these security companies take. CTS-Labs is a security-focused research firm, but does not disclose its customers or the research leading to this disclosure. CTS-Labs was started in 2017, and this is their first public report.

CTS-Labs’ claims revolve around AMD’s Secure Processor and Promontory Chipset, and fall into four main categories, which CTS-Labs has named for maximum effect. Each category has sub-sections within.

MasterKey 1, 2, and 3

MasterKey is an exploit that allows for arbitrary code execution within the secure processor of the CPU, but requires the attacker to re-flash the BIOS with an update that attacks the Arm Cortex A5 at the heart of the secure processor. In one version of MasterKey, the BIOS update uses metadata to exploit the vulnerability, but the goal is to bypass AMD’s Hardware Validated Boot (HVB). The impact of MasterKey would allow security features to be disabled, such as the Firmware Trusted Platform Module or Secure Encrypted Virtualization. This could lead to hardware-based ransomware attacks. CTS-Labs cites that American Megatrends, a common BIOS provider for Ryzen systems, makes a BIOS re-flash very easy, assuming the attacker has a compatible BIOS.

All three MasterKey variants share the same impact: disabling security features within the AMD Secure Processor.

              EPYC    Ryzen   Ryzen Pro   Ryzen Mobile
MasterKey-1   Yes     Yes     Maybe       Maybe
MasterKey-2   Yes     Yes     Maybe       Maybe
MasterKey-3   Yes     Yes     Maybe       Maybe

CTS-Labs state that MasterKey-1 and MasterKey-2 have been successfully exploited on EPYC and Ryzen, but are only theorized on Ryzen Pro and Ryzen Mobile from examining the code. MasterKey-3 has not been attempted. Protection comes via preventing unauthorized BIOS updates, although a system compromised by Ryzenfall may bypass this.

Chimera HW and Chimera SW

The Chimera exploit focuses on the Promontory chipset, and hidden manufacturer backdoors that allow for remote code execution. CTS-Labs cites that ASMedia, the company behind the chipset, has fallen foul of the FTC due to security vulnerabilities in its hardware.

Both Chimera variants share the same impact: chipset code execution.

             EPYC    Ryzen   Ryzen Pro   Ryzen Mobile
Chimera HW   No      Yes     Yes         No
Chimera SW   No      Yes     Yes         No

A successful exploit allows malicious code that can attack any device attached through the chipset, such as SATA, USB, PCIe, and networking. This would allow for loggers, or memory protection bypasses, to be put in place. It is cited that malware could also be installed and abuse the Direct Memory Access (DMA) engine of the chipset, leading to an operating system attack. CTS-Labs has said that they have successfully exploited Chimera on Ryzen and Ryzen Pro, by using malware running on a local machine with elevated administrator privileges and a digitally signed driver. It was stated that a successful firmware attack would be ‘notoriously difficult to detect or remove’.

Ryzenfall 1, 2, 3, and 4

The Ryzenfall exploit revolves around AMD Secure OS, the operating system for the secure processor. As the secure processor is an Arm Cortex A5, it leverages ARM TrustZone, and is typically responsible for most of the security on the chip, including passwords and cryptography.

              Impact                                          EPYC   Ryzen   Ryzen Pro   Ryzen Mobile
Ryzenfall-1   VTL-1 Memory Write                              No     Yes     Yes         Yes
Ryzenfall-2   Disable SMM Protection                          No     Yes     Yes         No
Ryzenfall-3   VTL-1 Memory Read / SMM Memory Read (req R-2)   No     Yes     Yes         No
Ryzenfall-4   Code Execution on SP                            No     Yes     Maybe       No

CTS-Labs states that the Ryzenfall exploit allows the attacker to access protected memory regions that are typically sealed off from hardware, such as the Windows Isolated User Mode and Isolated Kernel Mode, the Secure Management RAM, and AMD Secure Processor Fenced DRAM. A successful attack, via elevated admin privileges and a vendor-supplied driver, is stated to allow protected memory reads and writes, disabling of secure memory protection, or arbitrary code execution.

Fallout 1, 2, and 3

Fallout applies to EPYC processors only, and is similar to Ryzenfall. In fact, in the way that CTS-Labs describes the vulnerability, the results are identical to Ryzenfall, but it relies on compromising the Boot Loader in the secure processor. Again, this is another attack that requires elevated administrator access and goes through a signed driver, and like Ryzenfall it allows access to protected memory regions.

            Impact                                          EPYC   Ryzen   Ryzen Pro   Ryzen Mobile
Fallout-1   VTL-1 Memory Write                              Yes    No      No          No
Fallout-2   Disable SMM Protection                          Yes    No      No          No
Fallout-3   VTL-1 Memory Read / SMM Memory Read (req F-2)   Yes    No      No          No

CTS-Labs gives this a separate name on the basis that it can bypass Microsoft Virtualization-based security, open up the BIOS to flashing, and allow malware to be injected into protected memory that is outside the scope of most security solutions.

What Happens Now

As this news went live, we got in contact with AMD, who told us they have an internal team working on the claims of CTS-Labs. The general feeling is that they have been somewhat blindsided by all of this, given the limited time from notice to disclosure, and are using the internal team to validate the claims made. CTS-Labs states that it has shared the specific methods it used to identify and exploit the processors with AMD, as well as sharing the details with select security companies and US regulators.

All of the exploits require elevated administrator access, with MasterKey going as far as a BIOS reflash on top of that. CTS-Labs goes on the offensive, however, stating that it ‘raises concerning questions regarding security practices, auditing, and quality controls at AMD’, as well as saying that the ‘vulnerabilities amount to complete disregard of fundamental security principles’. This is very strong wording indeed, and one might have expected them to wait for an official response. The other angle is that, given Spectre/Meltdown, the ‘1-day’ disclosure was designed for maximum impact. Just enough time to develop a website, anyway.

CTS-Labs is very forthright with its statement, having seemingly pre-briefed some press at the same time it was notifying AMD, and directs questions to its PR firm. The full whitepaper can be seen here, at safefirmware.com, a website registered on 6/9 with no home page and seemingly no link to CTS-Labs. Something doesn’t quite add up here.

Reference

https://www.anandtech.com/show/12525/security-researchers-publish-ryzen-flaws-gave-amd-24-hours-to-respond

Kali available for Windows 10

Kali joins the Linux distros that use Windows 10’s built-in Windows Subsystem for Linux (WSL) capability, which permits Linux operating systems to run on top of Windows.

https://www.kali.org/news/kali-linux-in-the-windows-app-store/

No, really…this isn’t clickbait. For the past few weeks, we’ve been working with the Microsoft WSL team to get Kali Linux introduced into the Microsoft App Store as an official WSL distribution and today we’re happy to announce the availability of the “Kali Linux” Windows application. For Windows 10 users, this means you can simply enable WSL, search for Kali in the Windows store, and install it with a single click. This is especially exciting news for penetration testers and security professionals who have limited toolsets due to enterprise compliance standards.

While running Kali on Windows has a few drawbacks to running it natively (such as the lack of raw socket support), it does bring in some very interesting possibilities, such as extending your security toolkit to include a whole bunch of command line tools that are present in Kali. We will update our blog with more news and updates regarding the development of this app as it’s released.

We’d like to take this opportunity to thank the WSL team at Microsoft, and specifically @tara_msft and @benhillis for all the assistance and guidance with which this feat would not be possible. We hope you enjoy WSL’d Kali on Windows 10!

And now, a quick guide on getting Kali installed from the Microsoft App Store:

Getting Kali Linux Installed on WSL

Here’s a quick description of the setup and installation process. For an easier copy / paste operation, these are the basic steps taken:

1. Update your Windows 10 machine. Open an administrative PowerShell window and install the Windows Subsystem with this one-liner. A reboot will be required once finished.

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

2. Once rebooted, open the Windows App store and search for the “Kali Linux” application, or alternatively click here to go there directly. Install the app and enjoy Kali!

Updating Kali Linux on WSL

Updating Kali Linux on WSL is no different from any other instance of Kali:

apt-get update
apt-get dist-upgrade


Installing Penetration Testing tools on Kali

Installing tools from the Kali Linux repository is usually done via apt commands. For example, to install the Metasploit Framework, you can simply:

apt-get update
apt-get install metasploit-framework

Note: Some Kali tools are identified by antivirus software as malware. One way to deal with this situation is to allow antivirus exceptions on the directory in which the Kali chroot resides.

Recovering from a failed Kali WSL instance

Sometimes, you can inadvertently kill your Kali WSL instance, due to an overzealous command, an unintentional action, or even due to Kali or WSL bugs. If this happens, here is a quick recovery guide to get back on top of things. Note: this process will wipe your Kali WSL chroot, and re-extract a new copy. Any changes made to the filesystem will be gone, and reset to default.

Kali Linux is maintained by Offensive Security, a provider of security penetration testing training, and the maintainer of the Exploit Database repository of known software exploits.

Reference

https://mcpmag.com/articles/2018/03/08/kali-debian-linux-windows-store.aspx

https://www.kali.org/news/kali-linux-in-the-windows-app-store/

Search Encrypt – Privacy Search Engine

To my delight, more private search engines become available each year.  The newest is Search Encrypt.

https://www.searchencrypt.com/

https://choosetoencrypt.com/news/search-encrypt-eliminates-need-clear-history/

This joins

Startpage.com – EU verified as safe

Duckduckgo.com

 

The results are a little thin, which suggests this is a new site.  Startpage is EU-verified and samples results from Google, with your IP removed.  Duckduckgo.com is a well-established search engine that has a different algorithm to Google, and often finds info that Google has suppressed.

All private search engines have to start somewhere, and I wish them the very best.

GDPR – The Practical guide

The rights of the user/client (referred to as “data subject” in the regulation) that I think are relevant for developers are: the right to erasure (the right to be forgotten/deleted from the system), right to restriction of processing (you still keep the data, but mark it as “restricted” and don’t touch it without further consent by the user), the right to data portability (the ability to export one’s data in a machine-readable format), the right to rectification (the ability to get personal data fixed), the right to be informed (getting human-readable information, rather than long terms and conditions), the right of access (the user should be able to see all the data you have about them).

Additionally, the relevant basic principles are: data minimization (one should not collect more data than necessary), integrity and confidentiality (all security measures to protect data that you can think of + measures to guarantee that the data has not been inappropriately modified).

Even further, the regulation requires certain processes to be in place within an organization (of more than 250 employees or if a significant amount of data is processed), and those include keeping a record of all types of processing activities carried out, including transfers to processors (3rd parties), which includes cloud service providers. None of the other requirements of the regulation have an exception depending on the organization size, so “I’m small, GDPR does not concern me” is a myth.

It is important to know what “personal data” is. Basically, it’s every piece of data that can be used to uniquely identify a person or data that is about an already identified person. It’s data that the user has explicitly provided, but also data that you have collected about them from either 3rd parties or based on their activities on the site (what they’ve been looking at, what they’ve purchased, etc.)

Having said that, I’ll list a number of features that will have to be implemented and some hints on how to do that, followed by some do’s and don’t’s.

  • “Forget me” – you should have a method that takes a userId and deletes all personal data about that user (in case they have been collected on the basis of consent or based on the legitimate interests of the controller (see more below), and not due to contract enforcement or legal obligation). It is actually useful for integration tests to have that feature (to clean up after the test), but it may be hard to implement depending on the data model. In a regular data model, deleting a record may be easy, but some foreign keys may be violated. That means you have two options – either make sure you allow nullable foreign keys (for example an order usually has a reference to the user that made it, but when the user requests his data be deleted, you can set the userId to null), or make sure you delete all related data (e.g. via cascades). This may not be desirable, e.g. if the order is used to track available quantities or for accounting purposes. It’s a bit trickier for event-sourcing data models, or in extreme cases, ones that include some sort of blockchain/hash chain/tamper-evident data structure. With event sourcing you should be able to remove a past event and re-generate intermediate snapshots. For blockchain-like structures – be careful what you put in there and avoid putting personal data of users. There is an option to use a chameleon hash function, but that’s suboptimal. Overall, you must constantly think of how you can delete the personal data. And “our data model doesn’t allow it” isn’t an excuse. What about backups? Ideally, you should keep a separate table of forgotten user IDs, so that each time you restore a backup, you re-forget the forgotten users. This means the table should be in a separate database or have a separate backup/restore process. (A minimal sketch of this feature appears after this list.)
  • Notify 3rd parties for erasure – deleting things from your system may be one thing, but you are also obligated to inform all third parties that you have pushed that data to. So if you have sent personal data to, say, Salesforce, Hubspot, twitter, or any cloud service provider, you should call an API of theirs that allows for the deletion of personal data. If you are such a provider, obviously, your “forget me” endpoint should be exposed. Calling the 3rd party APIs to remove data is not the full story, though. You also have to make sure the information does not appear in search results. Now, that’s tricky, as Google doesn’t have an API for removal, only a manual process. Fortunately, it’s only about public profile pages that are crawlable by Google (and other search engines, okay…), but you still have to take measures. Ideally, you should make the personal data page return a 404 HTTP status, so that it can be removed.
  • Restrict processing – in your admin panel where there’s a list of users, there should be a button “restrict processing”. The user settings page should also have that button. When clicked (after reading the appropriate information), it should mark the profile as restricted. That means it should no longer be visible to the backoffice staff, or publicly. You can implement that with a simple “restricted” flag in the users table and a few if-clauses here and there.
  • Export data – there should be another button – “export data”. When clicked, the user should receive all the data that you hold about them. What exactly is that data – depends on the particular use case. Usually it’s at least the data that you delete with the “forget me” functionality, but may include additional data (e.g. the orders the user has made may not be deleted, but should be included in the dump). The structure of the dump is not strictly defined, but my recommendation would be to reuse schema.org definitions as much as possible, for either JSON or XML. If the data is simple enough, a CSV/XLS export would also be fine. Sometimes data export can take a long time, so the button can trigger a background process, which would then notify the user via email when his data is ready (Twitter, for example, does that already – you can request all your tweets and you get them after a while). You don’t need to implement an automated export, although it would be nice. It’s sufficient to have a process in place to allow users to request their data, which can be a manual database-querying process.
  • Allow users to edit their profile – this seems an obvious rule, but it isn’t always followed. Users must be able to fix all data about them, including data that you have collected from other sources (e.g. using a “login with facebook” you may have fetched their name and address). Rule of thumb – all the fields in your “users” table should be editable via the UI. Technically, rectification can be done via a manual support process, but that’s normally more expensive for a business than just having the form to do it. There is one other scenario, however, when you’ve obtained the data from other sources (i.e. the user hasn’t provided their details to you directly). In that case there should still be a page where they can identify somehow (via email and/or sms confirmation) and get access to the data about them.
  • Consent checkboxes – “I accept the terms and conditions” would no longer be sufficient to claim that the user has given their consent for processing their data. So, for each particular processing activity there should be a separate checkbox on the registration (or user profile) screen. You should keep these consent checkboxes in separate columns in the database, and let the users withdraw their consent (by unchecking these checkboxes from their profile page – see the previous point). Ideally, these checkboxes should come directly from the register of processing activities (if you keep one). Note that the checkboxes should not be preselected, as this does not count as “consent”. Another important thing here is machine learning/AI. If you are going to use the user’s data to train your ML models, you should get consent for that as well (unless it’s for scientific purposes, which have special treatment in the regulation). Note here the so called “legitimate interest”. It is for the legal team to decide what a legitimate interest is, but direct marketing is included in that category, as well as any common sense processing relating to the business activity – e.g. if you collect addresses for shipping, it’s obviously a legitimate interest. So not all processing activities need consent checkboxes.
  • Re-request consent – if the consent users have given was not clear (e.g. if they simply agreed to terms & conditions), you’d have to re-obtain that consent. So prepare a functionality for mass-emailing your users to ask them to go to their profile page and check all the checkboxes for the personal data processing activities that you have.
  • “See all my data” – this is very similar to the “Export” button, except data should be displayed in the regular UI of the application rather than an XML/JSON format. I wouldn’t say this is mandatory, and you can leave it as a “desirable” feature – for example, Google Maps shows you your location history – all the places that you’ve been to. It is a good implementation of the right to access. (Though Google is very far from perfect when privacy is concerned). This is not all about the right to access – you have to let unregistered users ask whether you have data about them, but that would be a more manual process. The ideal minimum would be to have a feature “check by email”, where you check if you have data about a particular email. You also need to tell the user in what ways you are processing their data. You can simply print all the records in your data processing register to which the user has consented.
  • Age checks – you should ask for the user’s age, and if the user is a child (below 16), you should ask for parent permission. There’s no clear way how to do that, but my suggestion is to introduce a flow, where the child should specify the email of a parent, who can then confirm. Obviously, children will just cheat with their birthdate, or provide a fake parent email, but you will most likely have done your job according to the regulation (this is one of the “wishful thinking” aspects of the regulation).
  • Keeping data for no longer than necessary – if you’ve collected the data for a specific purpose (e.g. shipping a product), you have to delete it/anonymize it as soon as you don’t need it. Many e-commerce sites offer “purchase without registration”, in which case the consent goes only for the particular order. So you need a scheduled job/cron to periodically go through the data and anonymize it (delete names and addresses), but only after a certain condition is met – e.g. the product is confirmed as delivered. You can have a database field for storing the deadline after which the data should be gone, and that deadline can be extended in case of a delivery problem.
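Here is the minimal “forget me” sketch promised above, using SQLite. The table and column names are hypothetical, and a real implementation would also handle cascades, 3rd-party notifications, and the separate backup/restore process for the forgotten-IDs table.

import sqlite3

conn = sqlite3.connect("app.db")  # hypothetical application database
conn.executescript("""
CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, email TEXT);
CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
CREATE TABLE IF NOT EXISTS forgotten_users (user_id INTEGER);
""")

def forget_user(user_id):
    with conn:  # one transaction
        # Nullable-foreign-key approach: keep the order for accounting,
        # but detach it from the person.
        conn.execute("UPDATE orders SET user_id = NULL WHERE user_id = ?", (user_id,))
        # Delete the personal data itself.
        conn.execute("DELETE FROM users WHERE id = ?", (user_id,))
        # Record the ID so restored backups can be re-forgotten; this table
        # belongs in a separate database or backup/restore process.
        conn.execute("INSERT INTO forgotten_users (user_id) VALUES (?)", (user_id,))

forget_user(42)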

Now some “do’s”, which are mostly about the technical measures needed to protect personal data (outlined in article 32). They may be more “ops” than “dev”, but often the application also has to be extended to support them. I’ve listed most of what I could think of in a previous post. An important note here is that this is not mandated by the regulation, but it’s a good practice anyway and helps with protecting personal data.

  • Encrypt the data in transit. That means that communication between your application layer and your database (or your message queue, or whatever component you have) should be over TLS. The certificates could be self-signed (and possibly pinned), or you could have an internal CA. Different databases have different configurations, just google “X encrypted connections”. Some databases need gossiping among the nodes – that should also be configured to use encryption.
  • Encrypt the data at rest – this again depends on the database (some offer table-level encryption), but can also be done on machine-level. E.g. using LUKS. The private key can be stored in your infrastructure, or in some cloud service like AWS KMS.
  • Encrypt your backups – kind of obvious
  • Implement pseudonymisation – the most obvious use case is when you want to use production data for the test/staging servers. You should change the personal data to some “pseudonym”, so that the people cannot be identified. When you push data for machine learning purposes (to third parties or not), you can also do that. Technically, that could mean that your User object can have a “pseudonymize” method which applies hash+salt/bcrypt/PBKDF2 for some of the data that can be used to identify a person. Pseudonyms could be reversible or not, depending on the use case (the definition in the regulation implies reversibility based on secret information, but in the case of test/staging data it might not be). Some databases have such features built-in, e.g. Oracle. (See the sketch after this list.)
  • Protect data integrity – this is a very broad thing, and could simply mean “have authentication mechanisms for modifying data”. But you can do something more, even as simple as a checksum, or a more complicated solution (like the one I’m working on). It depends on the stakes, on the way data is accessed, on the particular system, etc. The checksum can be in the form of a hash of all the data in a given database record, which should be updated each time the record is updated through the application. It isn’t a strong guarantee, but it is at least something.
  • Have your GDPR register of processing activities in something other than Excel – Article 30 says that you should keep a record of all the types of activities that you use personal data for. That sounds like bureaucracy, but it may be useful – you will be able to link certain aspects of your application with that register (e.g. the consent checkboxes, or your audit trail records). It wouldn’t take much time to implement a simple register, but the business requirements for that should come from whoever is responsible for the GDPR compliance. But you can advise them that having it in Excel won’t make it easy for you as a developer (imagine having to fetch the excel file internally, so that you can parse it and implement a feature). Such a register could be a microservice/small application deployed separately in your infrastructure.
  • Log access to personal data – every read operation on a personal data record should be logged, so that you know who accessed what and for what purpose. This does not follow directly from the provisions of the regulation, but it is kinda implied from the accountability principles. What about search results (or lists) that contain personal data about multiple subjects? My hunch is that simply logging “user X did a search for criteria Y” would suffice. But don’t display too much personal data in lists – for example see how Facebook makes you go through some hoops to get a person’s birthday. Note: some have treated article 30 as a requirement to keep an audit log. I don’t think it is saying that – instead it requires 250+ companies (or companies processing data regularly) to keep a register of the types of processing activities (i.e. what you use the data for). There are other articles in the regulation that imply that keeping an audit log is a best practice (for protecting the integrity of the data as well as to make sure it hasn’t been processed without a valid reason).
  • Register all API consumers – you shouldn’t allow anonymous API access to personal data. I’d say you should request the organization name and contact person for each API user upon registration, and add those to the data processing register.

Finally, some “don’t’s”.

  • Don’t use data for purposes that the user hasn’t agreed with – that’s supposed to be the spirit of the regulation. If you want to expose a new API to a new type of clients, or you want to use the data for some machine learning, or you decide to add ads to your site based on users’ behaviour, or sell your database to a 3rd party – think twice. I would imagine your register of processing activities could have a button to send notification emails to users to ask them for permission when a new processing activity is added (or if you use a 3rd party register, it should probably give you an API). So upon adding a new processing activity (and adding that to your register), mass email all users from whom you’d like consent. Note here that additional legitimate interests of the controller might be added dynamically.
  • Don’t log personal data – getting rid of the personal data from log files (especially if they are shipped to a 3rd party service) can be tedious or even impossible. So log just identifiers if needed. And make sure old log files are cleaned up, just in case. (See the redaction sketch after this list.)
  • Don’t put fields on the registration/profile form that you don’t need – it’s always tempting to just throw as many fields as the usability person/designer agrees on, but unless you absolutely need the data for delivering your service, you shouldn’t collect it. Names you should probably always collect, but unless you are delivering something, a home address or phone is unnecessary.
  • Don’t assume 3rd parties are compliant – you are responsible if there’s a data breach in one of the 3rd parties (e.g. “processors”) to which you send personal data. So before you send data via an API to another service, make sure they have at least a basic level of data protection. If they don’t, raise a flag with management.
  • Don’t assume having ISO XXX makes you compliant – information security standards and even personal data standards are a good start and they will probably cover 70% of what the regulation requires, but they are not sufficient – most of the things listed above are not covered in any of those standards.
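As a sketch of the “don’t log personal data” point, here is one way to redact anything that looks like an email address using Python’s standard logging module. The regular expression is deliberately simple and illustrative; real systems need redaction tuned to the data they actually handle.

import logging
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

class RedactEmails(logging.Filter):
    # Replace email addresses in log messages with a placeholder
    # before any handler writes the record out.
    def filter(self, record):
        record.msg = EMAIL_RE.sub("[REDACTED]", str(record.msg))
        return True

logger = logging.getLogger("app")
logging.basicConfig(level=logging.INFO)
logger.addFilter(RedactEmails())
logger.info("Password reset requested by jane@example.com")  # logged as [REDACTED]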

Overall, the purpose of the regulation is to make you take conscious decisions when processing personal data. It imposes best practices in a legal way. If you follow the above advice and design your data model, storage, data flow, and API calls with data protection in mind, then you shouldn’t worry about the huge fines that the regulation prescribes – they are for extreme cases, like Equifax for example. Regulators (data protection authorities) will most likely have some checklists into which you’d have to somehow fit, but if you follow best practices, that shouldn’t be an issue.

I think all of the above features can be implemented in a few weeks by a small team. Be suspicious when a big vendor offers you a generic plug-and-play “GDPR compliance” solution. GDPR is not just about the technical aspects listed above – it does have organizational/process implications. But also be suspicious if a consultant claims GDPR is complicated. It’s not – it relies on a few basic principles that are in fact best practices anyway. Just don’t ignore them.

Reference:

https://techblog.bozho.net/gdpr-practical-guide-developers/

Buckhacker – Search Amazon Server Data

A search tool that can look for specific files on Amazon Web Service servers has been released by a group whose identity is unknown.

https://www.buckhacker.com/

The tool, Buckhacker, gets its name from the fact that AWS Simple Storage Service (S3) storage instances are known as buckets.

It will make searches for data leaks much easier than in the past.

Buckhacker released an alpha version of the search engine on Wednesday which was noticed by UK security researcher Kevin Beaumont.

He tweeted: “In case you missed it, for the very first time there’s now a Google for Amazon S3 buckets – a full search engine called Buckhacker. This is page 400 of results for *.sql in S3. This is a game changer.”

The search engine has now been taken offline, with the people behind Buckhacker saying on Twitter: “Sorry guys, we are going offline for maintenance. We went online with the alpha version too early.”

Apparently, there were some cache issues in the alpha release, according to the Buckhacker Twitter feed.

Plenty of sensitive data has been found lying unsecured in S3 buckets, with the security firm UpGuard finding such stashes quite often.

UpGuard releases details of its finds on the Web regularly. It has found misconfigured S3 buckets leaking data from Paris-based brand marketing company Octoly, California data analytics firm Alteryx, credit repair service National Credit Federation, the NSA, the Pentagon, global corporate consulting and management firm Accenture, publisher Dow Jones, a Chicago voter database, a North Carolina security firm, and a contractor for the US Republican National Committee.
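Exposures like these are possible because a bucket that is misconfigured as public will return its object listing to anyone over plain HTTPS, which is what lets tools like Buckhacker index them. A minimal sketch of testing a single bucket follows; the bucket name is hypothetical, and you should only probe buckets you own or are authorised to assess.

import urllib.request
import urllib.error

def bucket_is_listable(bucket):
    # A publicly listable bucket answers GET / with XML containing <ListBucketResult>.
    url = "https://" + bucket + ".s3.amazonaws.com/"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return b"<ListBucketResult" in resp.read(2048)
    except urllib.error.HTTPError:
        # 403 = bucket exists but is private; 404 = no such bucket.
        return False

print(bucket_is_listable("example-bucket-name"))  # hypothetical bucket name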

Reference

https://www.itwire.com/security/81762-group-releases-search-tool-for-amazon-s3-buckets.html

Court Rules Facebook Violates Users’ Rights With Illegal Default Privacy Settings and Data Sharing

A German court has ruled that Facebook is breaching data protection rules with privacy settings that “over-share” by default and by requiring its users to give their real names, a consumer rights organization said, AFP reported.

According to German law, one’s own personal information can only be stored and used by a company that has an agreement from the individual.

However, Berlin judges ruled Facebook leaves many of its settings that may be seen as “privacy invasive” switched on by default, failing to offer users an essential choice about how their data is used by the company, plaintiffs for the Federation of German Consumer Organisations (VZBV) said.

“Facebook hides default settings that are not privacy-friendly in its privacy centre and does not provide sufficient information about this when users register,” VZBV legal expert Heiko Duenkel said.

The judges found that at least five different default privacy settings for Facebook were illegal, including sharing location data with its chat partners WhatsApp and Instagram or making user profiles available to external search engines, allowing anyone to search and find information on a person.

Facebook’s partners and subsidiaries collect data to enable hyper-targeted advertising on its users.

Additionally, the court ruled that eight paragraphs of Facebook’s terms of service were invalid; one of the most significant requires people to use their real names on the social network, which the court deemed illegal.

The VZBV further stated that users were already paying to use Facebook—but with access to their data, instead of with cash.

Facebook could face fines of up to 250,000 euros ($306,000) per infraction if it does not fix its terms and conditions in Germany; however, Facebook said it would appeal the ruling.

“Our products and terms of service have changed a lot since the beginning of the case, and we are making further changes this year to our terms of use and data protection guidelines, with a view to upcoming legal changes,” a spokeswoman told AFP.

Then there is the fact that Facebook plans to unveil a new facial recognition technology across the site, which will use artificial intelligence to scan uploaded photos to analyze and recognize faces based on images previously uploaded to the site. This wasn’t even ruled on by the court; given that the judges ruled against requiring users to use their real names and against over-sharing default settings, I can just imagine what the court decision on facial recognition would be.

Besides the huge privacy concerns of Facebook, the company has admitted to deliberately manipulating its users’ emotions.

Reference:

https://www.activistpost.com/2018/02/court-facebook-violates-users-illegal-default-privacy-settings-data-sharing.html

SANS Top 20 Critical Security Controls – 2017

Every year, SANS details the Top 20 critical security controls to be implemented.  GCHQ has listed the controls here:

GCHQ Top 20 Controls

GCHQ details a list of quick wins to automate a lot of the top control measures, available as a PDF download:

https://www.ncsc.gov.uk/content/files/protected_files/guidance_files/2014-04-11-critical-security-controls.pdf

GCHQ’s 10 Steps to Cyber Security

https://www.ncsc.gov.uk/guidance/10-steps-cyber-security

gchq 10 steps

NIST 800-53 version 4 – Security Controls for NIST Cyber Security Framework.

NIST offers a list of technical security controls to implement on a system.  These are “generic” controls that can be implemented on multiple systems.  For NIST, they are classified as low, medium, or high impact.

nist 800-53

https://nvd.nist.gov/800-53/Rev4/impact/low

Just to be contrary to the order of importance of each control, we could start with a checklist of those controls considered “low” – which is shown below.  NIST gives a code to each control, so access control is AC plus a number: AC-19 details access control on mobile devices.

 

nist security controls low

 

Access control 7 or AC-7 relates to unsuccessful logon attempts.

The priority is P2. What does P2 mean?

NIST priorities run from P0 to P3, with P1 being the highest priority.  Generally, 1 to 3 dictates the order in which the controls should be implemented.

There is also a P0, which means the control has not been assigned a priority.

We can see that NIST considers access control policies and account management top priority, or P1, so these should be the first technical controls to be implemented.  Account lockouts are a P2 control, which means they are important and next to be implemented – after all the P1 controls are in place.

High Level controls

Under high-level controls we see policies for AC-5 Separation of Duties and AC-6 Least Privilege.  It’s important to separate out duties, so the person who raises a purchase order is not the person who pays the invoice.  This is to prevent internal fraud.  Likewise, you always grant the least access rights needed to do the job; this is the concept behind “Least Privilege”.

nist controls high

These controls are well worth reading.

CRISC Maturity Models

There are 3 main Maturity Models.

CMMI – Capability Maturity Model Integration – Levels 1 to 5

IDEAL – Initiating, Diagnosing, Establishing, Acting and Learning

PAM – Process Assessment Model

Why Use a Maturity model?

  1. They are proven, industry best practice.
  2. They set targets for improvement.
  3. They let you compare the organisation against the Maturity Model, to see where it stands, based on evidence.
  4. They allow the organisation to be audited for contractual reasons.

 

CMMI – 5 Levels

CMMI – Most organisations will be around Level 2 to 3.

http://cmmiinstitute.com/capability-maturity-model-integration

CRISC CMMI level 5

https://en.wikipedia.org/wiki/Capability_Maturity_Model

crisc maturity level 1

Level 1 = Ad hoc, chaotic.

Level 2 = The process is documented and repeatable.

Level 3 = Standard business process.

Level 4 = Capable – managed to agreed metrics.

Level 5 = Deliberate process optimisation / improvement.

crisc cmmi wiki

https://en.wikipedia.org/wiki/Capability_Maturity_Model_Integration

CRISC Questions.

  1. Why would an organisation use a model?

crisc maturity level 3 part of business

 
