
Multiformats / Multihash – from MIT

Multihash is a protocol for differentiating outputs from various well-established hash functions, addressing size + encoding considerations. It is useful to write applications that future-proof their use of hashes, and allow multiple hash functions to coexist.

Safer, easier cryptographic hash function upgrades

Multihash is particularly important in systems which depend on cryptographically secure hash functions. Attacks may break the cryptographic properties of secure hash functions. These cryptographic breaks are particularly painful in large tool ecosystems, where tools may have made assumptions about hash values, such as function and digest size. Upgrading becomes a nightmare, as all tools which make those assumptions would have to be upgraded to use the new hash function and new hash digest length. Tools may face serious interoperability problems or error-prone special casing.

How many programs out there assume a git hash is a sha1 hash?

How many scripts assume the hash value digest is exactly 160 bits?

How many tools will break when these values change?

How many programs will fail silently when these values change?

This is precisely where Multihash shines. It was designed for upgrading.

When using Multihash, a system warns the consumers of its hash values that these may have to be upgraded in case of a break. Even though the system may still only use a single hash function at a time, the use of multihash makes it clear to applications that hash values may use different hash functions or be longer in the future. Tooling, applications, and scripts can avoid making assumptions about the length, and read it from the multihash value instead. This way, the vast majority of tooling – which may not do any checking of hashes – would not have to be upgraded at all. This vastly simplifies the upgrade process, avoiding the waste of hundreds or thousands of software engineering hours, deep frustrations, and high blood pressure.

The Multihash Format

A multihash follows the TLV (type-length-value) pattern.

  • the type <hash-func-type> is an unsigned variable integer identifying the hash function. There is a default table, and it is configurable. The default table is the multihash table.
  • the length <digest-length> is an unsigned variable integer counting the length of the digest, in bytes
  • the value <digest-value> is the hash function digest, with a length of exactly <digest-length> bytes.
<hash-func-type><digest-length><digest-value>
  <hash-func-type> – unsigned varint code of the hash function being used
  <digest-length>  – unsigned varint digest length, in bytes
  <digest-value>   – hash function output value, with length matching the prefixed length value

For example:

122041dd7b6443542e75701aa98a0c235951a28a0d851b11564d20022ab11d2589a8
Hashing function: sha2-256 (code in hex: 0x12)
Length: 32 (in hex: 0x20)
Digest: 41dd7b6443542e75701aa98a0c235951a28a0d851b11564d20022ab11d2589a8
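
To make the format concrete, here is a minimal sketch in Python (not an official implementation; the helper names are illustrative) that builds the <hash-func-type><digest-length><digest-value> prefix using the unsigned varint encoding described above. Given the raw sha2-256 digest, it reproduces the example multihash byte for byte:

    def encode_varint(n: int) -> bytes:
        """Encode an unsigned integer as a multiformats unsigned varint."""
        out = bytearray()
        while True:
            byte = n & 0x7F
            n >>= 7
            if n:
                out.append(byte | 0x80)  # more bytes follow
            else:
                out.append(byte)
                return bytes(out)

    def encode_multihash(code: int, digest: bytes) -> bytes:
        """Prefix a raw digest with its hash function code and digest length."""
        return encode_varint(code) + encode_varint(len(digest)) + digest

    SHA2_256 = 0x12  # code from the multihash table
    digest = bytes.fromhex(
        "41dd7b6443542e75701aa98a0c235951a28a0d851b11564d20022ab11d2589a8"
    )
    print(encode_multihash(SHA2_256, digest).hex())
    # 122041dd7b6443542e75701aa98a0c235951a28a0d851b11564d20022ab11d2589a8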


Examples

The following multihash examples are different hash function outputs of the same exact input, the string:

Merkle–Damgård

The examples are chosen to show different hash functions and different hash digest lengths at play.

sha1 – 160 bits

11148a173fd3e32c0fa78b90fe42d305f202244e2739
Hashing function: sha1 (code in hex: 0x11)
Length: 20 (in hex: 0x14)
Digest: 8a173fd3e32c0fa78b90fe42d305f202244e2739

sha2-256 – 256 bits (aka sha256)

122041dd7b6443542e75701aa98a0c235951a28a0d851b11564d20022ab11d2589a8
Hashing function: sha2-256 (code in hex: 0x12)
Length: 32 (in hex: 0x20)
Digest: 41dd7b6443542e75701aa98a0c235951a28a0d851b11564d20022ab11d2589a8

sha2-512 – 256 bits

132052eb4dd19f1ec522859e12d89706156570f8fbab1824870bc6f8c7d235eef5f4
Hashing function: sha2-512 (code in hex: 0x13)
Length: 32 (in hex: 0x20)
Digest: 52eb4dd19f1ec522859e12d89706156570f8fbab1824870bc6f8c7d235eef5f4

sha2-512 – 512 bits (aka sha512)

134052eb4dd19f1ec522859e12d89706156570f8fbab1824870bc6f8c7d235eef5f4c2cbbafd365f96fb12b1d98a0334870c2ce90355da25e6a1108a6e17c4aaebb0
Hashing function: sha2-512 (code in hex: 0x13)
Length: 64 (in hex: 0x40)
Digest: 52eb4dd19f1ec522859e12d89706156570f8fbab1824870bc6f8c7d235eef5f4c2cbbafd365f96fb12b1d98a0334870c2ce90355da25e6a1108a6e17c4aaebb0

blake2b-512 – 512 bits

b24040d91ae0cb0e48022053ab0f8f0dc78d28593d0f1c13ae39c9b169c136a779f21a0496337b6f776a73c1742805c1cc15e792ddb3c92ee1fe300389456ef3dc97e2
Hashing function: blake2b-512 (code in hex: 0xb240)
Length: 64 (in hex: 0x40)
Digest: d91ae0cb0e48022053ab0f8f0dc78d28593d0f1c13ae39c9b169c136a779f21a0496337b6f776a73c1742805c1cc15e792ddb3c92ee1fe300389456ef3dc97e2

blake2b-256 – 256 bits

b220207d0a1371550f3306532ff44520b649f8be05b72674e46fc24468ff74323ab030
Hashing function: blake2b-256 (code in hex: 0xb220)
Length: 32 (in hex: 0x20)
Digest: 7d0a1371550f3306532ff44520b649f8be05b72674e46fc24468ff74323ab030

blake2s-256 – 256 bits

b26020a96953281f3fd944a3206219fad61a40b992611b7580f1fa091935db3f7ca13d
Hashing function: blake2s-256 (code in hex: 0xb260)
Length: 32 (in hex: 0x20)
Digest: a96953281f3fd944a3206219fad61a40b992611b7580f1fa091935db3f7ca13d

blake2s-128 – 128 bits

b250100a4ec6f1629e49262d7093e2f82a3278
Hashing function: blake2s-128 (code in hex: 0xb250)
Length: 16 (in hex: 0x10)
Digest: 0a4ec6f1629e49262d7093e2f82a3278
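
Reading a multihash back is the reverse operation: decode the two varint header fields, then take that many digest bytes. Below is a minimal decoding sketch in Python (assuming the unsigned varint encoding described in the format section; the helper names are illustrative). It parses the sha1 example above and recovers the function code, the declared length, and the digest without hard-coding either:

    def decode_varint(buf: bytes, offset: int = 0):
        """Return (value, next_offset) for an unsigned varint in buf."""
        value, shift = 0, 0
        while True:
            byte = buf[offset]
            offset += 1
            value |= (byte & 0x7F) << shift
            if not byte & 0x80:
                return value, offset
            shift += 7

    def decode_multihash(mh: bytes):
        """Split a multihash into (function code, digest), checking the length prefix."""
        code, offset = decode_varint(mh)
        length, offset = decode_varint(mh, offset)
        digest = mh[offset:]
        assert len(digest) == length, "digest length does not match the prefix"
        return code, digest

    code, digest = decode_multihash(
        bytes.fromhex("11148a173fd3e32c0fa78b90fe42d305f202244e2739")
    )
    print(hex(code), len(digest), digest.hex())
    # 0x11 20 8a173fd3e32c0fa78b90fe42d305f202244e2739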

F.A.Q.

Q: Why have digest length as a separate number?

Because combining the hash function code and the digest length turns the function code into a “function-and-digest-size” code. That makes custom digest sizes annoying and much less flexible, and we would need hundreds of codes for all the combinations people would want to use.

Q: Why varints (variable integers)?

So that we have no limitation on functions or lengths.

Q: What kind of varints?

An unsigned varint in which the most significant bit of each byte flags whether more bytes follow, as defined by the multiformats/unsigned-varint doc.
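
For example, the sha2-256 code 0x12 fits in a single byte (0x12), while a larger value such as 300 (binary 100101100) is encoded as the two bytes 0xAC 0x02: the low seven bits with the continuation bit set, followed by the remaining bits.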

Q: Don’t we have to agree on a table of functions?

Yes, but we already have to agree on functions, so this is not hard. The table even leaves some room for custom function codes.

Q: Why not use "sha256:<digest>"?

For three reasons:

  • (1) Multihash and all other multiformats endeavor to make the values “in-band”, treated as the original value. The construction <string-prefix>:<hex-digest> is human readable and tuned for some outputs, but hashes are stored compactly in their binary representation, and forcing applications to always convert is cumbersome (split on the :, turn the right-hand side into binary, and concatenate).
  • (2) Multihash and all other multiformats endeavor to be as compact as possible, which means a binary packed representation will help save a lot of space in systems that use millions or billions of hashes. For example, a 100 TB file in IPFS may have as many as 400 million subobjects, which would mean 400 million hashes.
    400,000,000 hashes * (7 - 2) bytes = 2 GB (a 7-byte “sha256:” string prefix versus a 2-byte multihash code-and-length prefix)
    
  • (3) The length is extremely useful when hashes are truncated. This is a type of choice that should be expressed in-band. It is also useful when hashes are concatenated or kept in lists, and when scanning a stream quickly.
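
As a quick illustration of in-band truncation, the short Python sketch below (assuming single-byte varints, which holds for any code below 0x80) truncates the sha2-512 digest from the examples to 32 bytes and re-prefixes it, reproducing the “sha2-512 – 256 bits” multihash shown earlier:

    full_digest = bytes.fromhex(
        "52eb4dd19f1ec522859e12d89706156570f8fbab1824870bc6f8c7d235eef5f4"
        "c2cbbafd365f96fb12b1d98a0334870c2ce90355da25e6a1108a6e17c4aaebb0"
    )
    truncated = full_digest[:32]
    multihash = bytes([0x13, len(truncated)]) + truncated  # sha2-512 code, length 0x20
    print(multihash.hex())
    # 132052eb4dd19f1ec522859e12d89706156570f8fbab1824870bc6f8c7d235eef5f4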

Q: Is Multihash only for cryptographic hashes?

What about non-cryptographic hashes like murmur3, cityhash, etc?

We decided to make Multihash work for all hash functions, not just cryptographic hash functions.

We wanted to be able to include MD5 and SHA1, as they are widely used even now, despite no longer being secure. Ultimately, we could consider these cryptographic hash functions that have transitioned into non-cryptographic hash functions. Perhaps all of them eventually do.

Q: How do I add hash functions to the table?

Three options to add custom hash functions:

  • (1) If other applications would benefit from this hash function, propose it at the multihash repo
  • (2) If your function is only for your application, you can add a hash function to the table in a range reserved specially for this purpose. See the table.
  • (3) If you need to use a completely custom table, most implementations support loading a separate hash function table.

Q. I want to upgrade a large system to use Multihash. Could you help me figure out how?

Sure, ask for help on IRC, GitHub, or other forums. See the Multiformats Community listing.

Q. I wish Multihash would _______. I really hate _______.

Those are not questions. But please leave any and all feedback over in the Multihash repo. It will help us improve the project and make sure it addresses our users’ needs. Thanks!

About

Specification

There is a spec in progress, which we hope to submit to the IETF. It is being worked on at this pull-request.

Credits

The Multihash format was invented by @jbenet, and refined by the IPFS Team. It is now maintained by the Multiformats community. The Multihash implementations are written by a variety of authors, whose hard work has made future-proofing and upgrading hash functions much easier. Thank you!

Open Source

The Multihash format (this documentation and the specification) is Open Source software, licensed under the MIT License and patent-free. The multihash implementations listed here are also Open Source software. Please contribute to make them great! Your bug reports, new features, and documentation improvements will benefit everyone.

Part of the Multiformats Project

Multihash is part of the Multiformats Project, a collection of protocols which aim to future-proof systems, today. Check out the other multiformats. It is also maintained and sponsored by Protocol Labs.

Reference:

http://multiformats.io/multihash/#open-source

Group Policy – How to map network drives using Group Policy on Windows Server

When you need to create a mapped drive for an AD group, use Group Policy Preferences. The second video is more realistic for everyday use than the first.

Group Policy Management > Preferences > AD group policy (or Default Domain Policy) >

Right-click > Edit > User Configuration > Preferences > Windows Settings > Drive Maps > right-click > New > Mapped Drive > set the action to Create, paste in the path to the share, label the drive, tick Reconnect, choose a drive letter, and select Show this drive > OK

Finally, run gpupdate /force to force the Group Policy update:

gpupdate /force

How Spanning Tree Protocol prevents loops – retro video

This is a hand-drawn retro video on STP, which explains STP in great yet simple detail, in a practical way.

 

 

How to stop a switching loop

 

  1. Discover the topology (scope) of the loop.

    Once it has been established that the reason for the network outage is a forwarding loop, the highest priority is to stop the loop and restore the network operation. In order to stop the loop, you must know which ports are involved in the loop: look at the ports with the highest link utilization (packets per second). The show interface Cisco IOS software command displays the utilization for each interface.

    In order to display only the utilization information and the interface name (for a quick analysis), you might use Cisco IOS software regular expression output filtering. Issue the show interface | include line|\/sec command to display only the packet per second statistics and the interface name:

    cat# show interface | include line|\/sec
    
    GigabitEthernet2/1 is up, line protocol is down
      5 minute input rate 0 bits/sec, 0 packets/sec
      5 minute output rate 0 bits/sec, 0 packets/sec
    GigabitEthernet2/2 is up, line protocol is down
      5 minute input rate 0 bits/sec, 0 packets/sec
      5 minute output rate 0 bits/sec, 0 packets/sec
    GigabitEthernet2/3 is up, line protocol is up
      5 minute input rate 99765230 bits/sec, 24912 packets/sec
      5 minute output rate 0 bits/sec, 0 packets/sec
    GigabitEthernet2/4 is up, line protocol is up
      5 minute input rate 1000 bits/sec, 27 packets/sec
      5 minute output rate 101002134 bits/sec, 25043 packets/sec
    GigabitEthernet2/5 is administratively down, line protocol is down
      5 minute input rate 0 bits/sec, 0 packets/sec
      5 minute output rate 0 bits/sec, 0 packets/sec
    GigabitEthernet2/6 is administratively down, line protocol is down
      5 minute input rate 0 bits/sec, 0 packets/sec
      5 minute output rate 0 bits/sec, 0 packets/sec
    GigabitEthernet2/7 is up, line protocol is down
      5 minute input rate 0 bits/sec, 0 packets/sec
      5 minute output rate 0 bits/sec, 0 packets/sec
    GigabitEthernet2/8 is up, line protocol is up
      5 minute input rate 2000 bits/sec, 41 packets/sec
      5 minute output rate 99552940 bits/sec, 24892 packets/sec
    

    Pay particular attention to the interfaces with the highest link utilization. In this example, these are interfaces g2/3, g2/4, and g2/8; they are probably the ports that are involved in the loop. (A short parsing sketch after this list shows one way to rank interfaces from saved output.)

  2. If you see maxed-out input traffic on a trunk (meaning the loop traffic is coming into that device from somewhere else), go to the device on the other end of the trunk and issue a “show interfaces” command there. Keep doing this until you reach a device that only has maxed-out output traffic on its trunks. This means the culprit is directly connected to the device you are currently logged into, and the loop traffic originates on that device.
  3. Break the loop.

    To break the loop, you must shut down or disconnect the involved ports.

    It is very important not only to stop the loop but also to find and fix its root cause. Breaking the loop by shutting down the involved ports is relatively easy; identifying and correcting the underlying cause usually takes more work.
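
To speed up step 1 when the show interface output has been captured to a text file, a small script can rank interfaces by their packets-per-second rates. This is a rough sketch only: the file name is hypothetical and the parsing assumes output in the format shown above.

    import re

    def rank_interfaces(text: str):
        """Rank interfaces by their highest 5-minute packets/sec rate (input or output)."""
        rates = {}
        current = None
        for line in text.splitlines():
            name = re.match(r"^(\S+) is ", line)
            if name:
                current = name.group(1)
                continue
            pps = re.search(r"(\d+) packets/sec", line)
            if pps and current:
                rates[current] = max(rates.get(current, 0), int(pps.group(1)))
        return sorted(rates.items(), key=lambda item: item[1], reverse=True)

    with open("show_interface.txt") as f:  # hypothetical capture of the command output
        for name, pps in rank_interfaces(f.read())[:5]:
            print(f"{name}: {pps} packets/sec")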

The OpenVPN Audit Begins February 15th 2017

The OpenVPN audit will be carried out as planned by QuarksLab’s Gabriel Campana and Jean-Baptiste Bedrune, starting on February 15th 2017. The audit comprises about 90 man-days of work and is expected to take approximately 45 days to complete.


During this time period, we will work with the OpenVPN team to address issues as they are found on the fly. Once we are confident that the issues have been addressed, we will release the full results of the audit to the public. We expect to have final results released to the public by April 7th 2017.

 

The details about the value of this effort and the power of OSTIF are below:

Why OpenVPN?

OpenVPN is a cross platform and well supported Virtual Private Network application that is free to use for everyone around the world. It is available for all versions of Windows, OSX, Android, iOS, Chrome OS, most versions of Linux, and BSD. It is used worldwide by businesses to protect information that goes over insecure networks, and increasingly it is used by private users who want to protect their information from cybercrime and surveillance. OpenVPN is powerful, free, and used by millions of people. An audit of OpenVPN will add further integrity to the software, allowing users around the world to trust that the software is strong and resistant to intrusion.

Why now?

OpenVPN version 2.4 has just entered beta, with a full release to soon follow. As the first major revision of the software to be released in years, it has a huge number of bug fixes and feature changes under the hood, giving an audit a great opportunity to review these new areas of code that will likely persist for years with small changes until OpenVPN 2.5 is ready to go. It is the most effective time for an audit to take place.

Why OSTIF?

We are an independent advocate for free and open software that improves the security of users around the world. We are efficient, open, and ready to tackle the problems that face free software. We have just completed our audit of VeraCrypt 1.18 a few months ago, and we are striking while the iron is hot to continue making harder and more powerful security software for all of us.

Early credits for groups that are contributing to the effort:

Leading the effort, the groups that have made significant contributions to the cause:

  • iPredator has contributed 10 BTC, or about $7700
  • OpenVPN Technologies Inc. has contributed $5000
  • Perfect Privacy has contributed $3500
  • nVpn.net has contributed $2650
  • ExpressVPN has contributed $2500
  • SmartDNSProxy has contributed $2500
  • iVPN has contributed $2100
  • VikingVPN has contributed $2000, making their total contribution to OSTIF $2000
  • ZorroVPN has contributed $1600
  • SecureVPN.to has contributed $1500
  • VPN.ac has contributed $1500
  • GetFlix has contributed $1350
  • CryptoStorm has contributed $1337
  • TrickByte has contributed $1150
  • NordVPN has contributed $1000
  • BlackVPN has contributed $1000

Also contributing to the effort:

  • FatDisco has contributed $650
  • BestSmartDNS has contributed $600
  • StrongVPN has contributed $500
  • Windscribe has contributed $500
  • Celo.net has contributed $400
  • BolehVPN has contributed $300
  • ThatOnePrivacySite has contributed $200. They have also agreed to add a field to their famous VPN Comparison Chart showing which VPN services contribute back to the privacy community (under activism – gives back to privacy causes)
  • VPNcompare.co.uk has contributed $201
  • BestVPN has contributed $300 and plans to write an article about the fundraiser, helping us reach out to the privacy community
  • InvizBox has contributed $100
  • Ender Informatics GmbH has contributed $100

 

Comment:

OpenVPN is the industry standard for private VPNs. I wish both OpenVPN and the audit the utmost success. We all need OpenVPN to continue to be our flag-bearing privacy software.

Reference:

https://ostif.org/the-openvpn-audit-begins-february-15th-2017/

WARNING: Parents told DESTROY talking doll as paedos could use it to watch YOUR children

PARENTS in Germany have been told to destroy a ‘Big Brother’ child’s doll because it can be used for spying.

 


Parents told to destroy talking doll because it could be used for hacking

Officials fear the doll called ‘My Friend Cayla’ – which is linked to the Internet – can be easily hijacked by hackers or paedophiles.

Consumer watchdog officials say My Friend Cayla is equipped with an insecure Bluetooth device, which cyber-criminals could take over in order to steal personal data and listen and talk to the child playing with it.

Germany’s Federal Network Agency, the regulatory office for electricity, gas, telecommunications, post and railway markets, now tells parents: “Destroy it immediately.”

 


A spokesman for the agency said My Friend Cayla was a “concealed transmitting device” – illegal in Germany.

The doll answers questions by accessing the Internet but also asks for sensitive personal information, such as the user’s name, school, parents’ names and hometown.

Manufacturer Genesis Toys of the USA has not yet commented on the warning.

 

Comment:

Congratulations to the German regulators for this sound advice to parents.

It beggars belief that the toy vendor considered connecting to the internet a doll that is clearly designed for children.

 

Reference:

http://www.express.co.uk/news/world/768748/german-parents-destroy-my-friend-cayla-dolls-over-hacking-fears

Will You Graduate? Ask Big Data – AI predicts those who will graduate

“You could get a C or an A in that first nursing class and still be successful,” said Timothy M. Renick, the vice provost. “But if you got a low grade in your math courses, by the time you were in your junior and senior years, you were doing very poorly.”

The analysis showed that fewer than 10 percent of nursing students with a C in math graduated, compared with about 80 percent of students with at least a B+. Algebra and statistics, it seems, were providing an essential foundation for later classes in biology, microbiology, physiology and pharmacology.

 

A little less than half of the nation’s students graduate in four years; given two more years to get the job done, the percentage rises to only about 60 percent. That’s no small concern for families shouldering the additional tuition or student debt (an average of more than $28,000 on graduation, according to a 2016 College Board report). Students who drop out are in even worse shape. Such outcomes have led parents and politicians to demand colleges do better. Big data is one experiment in how to do that.

 

Different courses at different universities have proved to be predictors of success, or failure. The most significant seem to be foundational courses that prepare students for higher-level work in a particular major. Across a dozen of its clients, the data analysts Civitas Learning found that the probability of graduating dropped precipitously if students got less than an A or a B in a foundational course in their major, like management for a business major or elementary education for an education major. El Paso Community College’s nursing hot spot was a foundational biology course. Anyone who got an A had a 71 percent chance of graduating in six years; those with a B had only a 53 percent chance.

At the University of Arizona, a high grade in English comp proved to be crucial to graduation. Only 41 percent of students who got a C in freshman writing ended up with a degree, compared with 61 percent of the B students and 72 percent of A students.

“We always figured that if a student got a C, she was fine,” said Melissa Vito, a senior vice provost. “It turns out, a C in a foundation course like freshman composition can be an indicator that the student is not going to succeed.” The university now knows it needs to throw more resources at writing, specifically at those C students.

At Middle Tennessee State University, History 2020, an American history course required for most students, has been a powerful predictor. The most consistent feature for those who did not graduate was that they received a D in it. “History is a heavy reading course,” said Richard D. Sluder, vice provost for student success, “so it signifies a need for reading comprehension.”

Before predictive analytics, Dr. Sluder said, many of the D’s went unnoticed. That’s because advisers were mainly monitoring grade-point averages, not grades in specific courses. “You take a student who’s getting A’s and B’s and you see a C in this one class,” he said, “and you’d say, ‘He looks fine.’ But, really, he was at risk.”

Such insight may revolutionize the way student advising works.

One woman who was planning to major in psychology had taken the introductory psychology course and three other courses as a freshman in the fall of 2014. She earned three A’s and a C.

“It was a pretty decent start,” Ms. Mercer said. “But guess what? The C was in Psych 1100.” In the spring of 2015, the student signed up for five classes. She withdrew from one. The next semester she withdrew from three of her five classes. This fall she took four classes and withdrew from all of them.

“It was just what the analytics had predicted,” Ms. Mercer said. “I tend to be a little skeptical. It wasn’t until I dove into the records and I saw, ‘Yes, indeed, this is a problem.’ ”

 

Reference:

https://mobile.nytimes.com/2017/02/02/education/edlife/will-you-graduate-ask-big-data.html

What is the Legal status of Kodi users in the UK – BBC Radio 5

In recent weeks, the legality or otherwise of so-called fully-loaded Kodi boxes has become a big topic. The devices are massively popular in the UK but are people going to get busted for using them? Almost certainly not, an intellectual property lawyer told the BBC this morning.

 

Unlike most other kinds of unauthorized online sharing, the way content is delivered through Kodi has exposed a whole new legal gray area. While it’s definitely illegal in Europe and the US to share copyrighted content without permission using BitTorrent, no one is really clear whether streaming content via Kodi has the same status.

In recent weeks, this has led to the publication of dozens of articles which claim to answer that question. Upon review, none of them actually do, so the topic remains hot in the UK.

To that end, BBC 5 Live ran a pretty long feature this morning which had host Adrian Chiles discussing the topic with FACT chief Kieron Sharp, intellectual property lawyer Steve Kuncewicz and technical guy Tom Cheesewright who really knew what he was talking about.

The start of the interview was marked by Chiles noting that when he found out what a Kodi device could do, he immediately wanted one.

 

“I’d never heard of them,” he said. “I heard what they were and then I wanted one. And then someone told me that they’re probably illegal, so I better not get one.”

Chiles’ reaction is probably held in common with millions of others who’ve learned about what Kodi devices can do. There’s a clear and totally understandable attraction, and it was helpful for the broadcaster to acknowledge that.

After a brief technological description from Cheesewright, Chiles turned to IP lawyer Steve Kuncewicz, who was asked where the law stands. His response was fairly lengthy but clearly focused on the people supplying the devices.

“You’ve got big content producers like HBO that are used to producing premium content that people pay for,” Kuncewicz said.

“Where they are directing their attention is on the people who sell these boxes loaded with software that lets you get around paying a subscription.”

 

The lawyer acknowledged that there are some ongoing cases in the UK which involve suppliers of devices which effectively allow users to get around copyright protection.

“That’s been the focus of the strategy and it’s a big, big, big issue,” he said.

But for those who know Chiles’ down-to-earth style, it was always obvious that he would want to know how the law views the man in the street.

“From the punter’s point of view, if you’re watching something made by HBO that Netflix would hope that you’d be paying them to watch, but you’re watching it for free via your Kodi stick, then are you going to get a knock on the door?” Chiles asked.

Chiles didn’t get a straight answer about the law, but after a breath, Kuncewicz offered the reality.

“In all likelihood, no,” the lawyer responded.

Noting that there have been cases against file-sharers, the IP expert said that there is a difference – a legal gray area – when it comes to streaming versus file-sharing.

 

“What tends to happen is that the content providers go after the ISPs, they go after platforms [offering pirate content], not the individual people,” he said, adding that getting a knock on the door at home would be fairly unlikely.

 

Reference:

https://torrentfreak.com/unlikely-pirate-kodi-users-will-get-in-trouble-experts-suggest-170105/

Trump elected by Big Data – the impact of Cambridge Analytica

Imagine the influence of a London-based company that acted as the catalyst powering both Brexit and President Trump’s campaign to success.

What if I told you that Trump was elected by Big Data analysis, carried out by a British company, and that this company can swing elections? You probably already know the strength of Cambridge Analytica in winning elections, but the video below is for those who may not have realised what was happening.

Here’s the Trump campaign video:

Every single adult in America has been analysed by Cambridge Analytica. They then tailored the campaign message to each individual’s personality.

So before you get bored: I have always warned you about the dangers of Big Data. This is one of the side effects – one British company can make presidents.

 

In Europe there are several elections in the next year. How much would you pay Cambridge Analytica to win an election for you?

By the way, Alexander Nix is a great name. I wonder if it’s real, or provided as a result of data analysis?

 

Reference:

https://motherboard.vice.com/en_us/article/how-our-likes-helped-trump-win

The Kali Linux Certified Professional

Introducing the KLCP Certification

After almost two years in the making, it is with great pride that we announce today our new Kali Linux Professional certification – the first and only official certification program that validates one’s proficiency with the Kali Linux distribution.

If you’re new to the information security field, or are looking to take your first steps towards a new career in InfoSec, the KLCP is a “must have” foundational certification. Built on the philosophy that “you’ve got to walk before you can run,” the KLCP will give you direct experience with your working environment and a solid foundation toward a future with any professional InfoSec work. As we continually see, those entering the Offensive Security PWK program with previous working experience with Kali, and a general familiarity with Linux, tend to do better in the real world OSCP exam.

For those of you who already have some experience in the field, the KLCP provides a solid and thorough study of the Kali Linux Distribution – learning how to build custom packages, host repositories, manage and orchestrate multiple instances, build custom ISOs, and much, much, more. The KLCP will allow you to take that ambiguous bullet point at the end of your resume – the one that reads “Additional Skills – familiarity with Kali Linux”, and properly quantify it. Possession of the KLCP certification means that you have truly mastered the Kali penetration testing distribution and are ready to take your information security skills to the next level.

The KLCP exam will be available via Pearson VUE exam centres worldwide after the Black Hat USA 2017 event in Las Vegas.

 

“Kali Linux Revealed” Class at Black Hat USA, 2017

This year, we are fortunate enough to debut our first official Kali Linux training at the Black Hat conference in Las Vegas, 2017. This in-depth, four-day course will focus on the Kali Linux platform itself (as opposed to the tools or penetration testing techniques) and help you understand and maximize your usage of Kali from the ground up. Delivered by Mati Aharoni and Johnny Long, this four-day class will teach you how to:

  • Gain confidence in basic Linux proficiency, fundamentals, and the command line.

  • Install and verify Kali Linux as a primary OS or virtual machine, including full disk encryption and preseeding.

  • Use Kali as a portable USB distribution including options for encryption, persistence, and “self-destruction”.

  • Install, remove, customize, and troubleshoot software via the Debian package manager.

  • Thoroughly administer, customize, and configure Kali Linux for a streamlined experience.

  • Troubleshoot Kali and diagnose common problems in an optimal way.

  • Secure and monitor Kali at the network and filesystem levels.

  • Create your own packages and host your own custom package repositories.

  • Roll your own completely customized Kali implementation and preseed your installations.

  • Customize, optimize, and build your own kernel.

  • Scale and deploy Kali Linux in the enterprise.

  • Manage and orchestrate multiple installations of Kali Linux.

Reference:

https://www.kali.org/news/introducing-kali-linux-certified-professional/

Linux Performance

This page links to various Linux performance material I’ve created, including the tools maps. The first is a hi-res version combining observability, static performance tuning, and perf-tools/bcc. The remainder were designed for use in slide decks and have larger fonts and arrows, and show: Linux observability tools, Linux benchmarking tools, Linux tuning tools, and Linux sar. For even more diagrams, see my slide decks below.



Talks

In rough order of recommended viewing or difficulty, intro to more advanced:

1. Linux Systems Performance (PerconaLive 2016)

This is my summary of Linux systems performance in 50 minutes, covering six facets: observability, methodologies, benchmarking, profiling, tracing, and tuning. It’s intended for people who have limited appetite for this topic.

A video of the talk is on percona.com, and the slides are on slideshare or as a PDF.

For a lot more information on observability tools, profiling, and tracing, see the talks that follow.

2. Linux Performance Tools (Velocity 2015)

At Velocity 2015, I gave a 90 minute tutorial on Linux performance tools, summarizing performance observability, benchmarking, tuning, static performance tuning, and tracing tools. I also covered performance methodology, and included some live demos. This should be useful for everyone working on Linux systems. If you just saw my PerconaLive 2016 talk, then some content should be familiar, but with many extras: I focus a lot more on the tools in this talk.

A video of the talk is on youtube (playlist; part 1, part 2) and the slides are on slideshare or as a PDF.

This was similar to my SCaLE11x and LinuxCon talks, however, with 90 minutes I was able to cover more tools and methodologies, making it the most complete tour of the topic I’ve done. I also posted about it on the Netflix Tech Blog.

3. Broken Linux Performance Tools (SCaLE14x, 2016)

At the Southern California Linux Expo (SCaLE 14x), I gave a talk on Broken Linux Performance Tools. This was a follow-on to my earlier Linux Performance Tools talk originally at SCaLE11x (and more recently at Velocity as a tutorial). This broken tools talk was a tour of common problems with Linux system tools, metrics, statistics, visualizations, measurement overhead, and benchmarks. It also includes advice on how to cope (the green “What You Can Do” slides).

A video of the talk is on youtube and the slides are on slideshare or as a PDF.

4. Linux Profiling at Netflix (SCaLE13x, 2015)

At (SCaLE 13x), I gave a talk on Linux Profiling at Netflix using perf_events (aka “perf”), covering CPU profiling and a tour of other features. This talk covered gotchas, such as fixing stack traces and symbols when profiling Java and Node.js.

A video of the talk is on youtube, and the slides are on slideshare or as a PDF.

In a post about this talk, I included the interactive CPU flame graph SVG I was demonstrating.

5. Linux Performance Analysis: New Tools and Old Secrets (ftrace) (LISA 2014)

At USENIX LISA 2014, I gave a talk on the new ftrace and perf_events tools I’ve been developing: the perf-tools collection on github, which mostly uses ftrace: a tracer that has been built into the Linux kernel for many years, but few have discovered (practically a secret).

A video of the talk is on youtube, and the slides are on slideshare or as a PDF.

The perf-tools collection, inspired by my earlier DTraceToolkit, provides advanced system performance analysis tools for Linux. Each tool has a man page and example file. They are unstable and unsupported, and they currently use shell scripting, hacks, and the ftrace and perf_events kernel tracing frameworks. They should be developed (and improved) as the Linux kernel acquires more tracing capabilities (eg, eBPF).

In a post about this talk, I included some more screenshots of these tools in action.

6. Give me 15 minutes and I’ll change your view of Linux tracing (LISA, 2016)

I gave this demo at USENIX/LISA 2016, showing ftrace, perf, and bcc/BPF. A video is on youtube.

7. BPF: Tracing and More (LCA, 2017)

This talk introduces enhanced BPF (aka eBPF) for Linux, summarizes different use cases, and then covers tracing/observability in detail. I’ve given BPF tracing talks before; this time I covered some other topics and included an extended live BPF demo.

A video of the talk is on youtube, and the slides are on slideshare or as a PDF.

For another talk on BPF that discusses off-CPU analysis in detail, see my earlier talk Linux 4.x Performance: Using BPF Superpowers from Performance@Scale 2016.

8. Performance Checklists for SREs (SREcon, 2016)

At SREcon 2016 Santa Clara, I gave the closing talk on performance checklists for SREs (Site Reliability Engineers). The latter half of this talk included Linux checklists for incident performance response. These may be useful whether you’re analyzing Linux performance in a hurry or not.

A video of the talk is on youtube and usenix, and the slides are on slideshare and as a PDF. I included the checklists in a blog post.

9. What Linux can learn from Solaris performance and vice-versa (SCaLE12x, 2014)

At SCaLE 12x, I gave the keynote on What Linux can learn from Solaris performance and vice-versa. This drew on my prior experiences doing head to head comparisons, and work for the Systems Performance book. I’d never seen a good talk comparing performance features of both, I suspect in part because it’s hard to know them both in enough depth, and also hard to choose from the many differences which should be highlighted.

A video of the talk is on youtube, and the slides are on slideshare and as a PDF.

This presentation also contains ponies. Lots of ponies. These are the unofficial mascots for DTrace, perf_events, SystemTap, ktap, and LTTng, and were designed by the same person (Deirdré) who designed the original (and popular) DTrace ponycorn.


Reference:

http://www.brendangregg.com/linuxperf.html
