How Spanning Tree Protocol prevents loops – retro video

This is a hand-drawn retro video on STP, which explains STP in great yet simple detail, in a practical way.



How to stop a switching loop


  1. Discover the topology (scope) of the loop.

    Once it has been established that the reason for the network outage is a forwarding loop, the highest priority is to stop the loop and restore network operation. In order to stop the loop, you must know which ports are involved in it: look at the ports with the highest link utilization (packets per second). The show interface Cisco IOS software command displays the utilization for each interface.

    In order to display only the utilization information and the interface name (for a quick analysis), you might use Cisco IOS software regular expression output filtering. Issue the show interface | include line|\/sec command to display only the packets-per-second statistics and the interface name:

    cat# show interface | include line|\/sec
    GigabitEthernet2/1 is up, line protocol is down
      5 minute input rate 0 bits/sec, 0 packets/sec
      5 minute output rate 0 bits/sec, 0 packets/sec
    GigabitEthernet2/2 is up, line protocol is down
      5 minute input rate 0 bits/sec, 0 packets/sec
      5 minute output rate 0 bits/sec, 0 packets/sec
    GigabitEthernet2/3 is up, line protocol is up
      5 minute input rate 99765230 bits/sec, 24912 packets/sec
      5 minute output rate 0 bits/sec, 0 packets/sec
    GigabitEthernet2/4 is up, line protocol is up
      5 minute input rate 1000 bits/sec, 27 packets/sec
      5 minute output rate 101002134 bits/sec, 25043 packets/sec
    GigabitEthernet2/5 is administratively down, line protocol is down
      5 minute input rate 0 bits/sec, 0 packets/sec
      5 minute output rate 0 bits/sec, 0 packets/sec
    GigabitEthernet2/6 is administratively down, line protocol is down
      5 minute input rate 0 bits/sec, 0 packets/sec
      5 minute output rate 0 bits/sec, 0 packets/sec
    GigabitEthernet2/7 is up, line protocol is down
      5 minute input rate 0 bits/sec, 0 packets/sec
      5 minute output rate 0 bits/sec, 0 packets/sec
    GigabitEthernet2/8 is up, line protocol is up
      5 minute input rate 2000 bits/sec, 41 packets/sec
      5 minute output rate 99552940 bits/sec, 24892 packets/sec

    Pay particular attention to the interfaces with the highest link utilization. In this example, these are interfaces g2/3, g2/4, and g2/8; they are probably the ports that are involved in the loop.

  2. If you see maxed-out input traffic on a trunk (meaning the loop traffic is coming into that device from somewhere else), go to the device on the other end of the trunk and issue a “show interfaces” command there. Repeat this until you reach a device that has only maxed-out output traffic on its trunks. This means the loop traffic is originating on the device you are currently logged into, or on a device directly connected to it.
  3. Break the loop.

    To break the loop, you must shut down or disconnect the involved ports.

    It is very important to not only stop the loop but also to find and fix its root cause. Breaking the loop is the relatively easy part; identifying why it formed usually takes longer.
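The triage described in steps 1 and 2 can be sketched as a short script that picks out high-rate interfaces from `show interface` output. The sample text below is taken from the output above, and the 20,000 pps threshold is an illustrative value for this sketch, not a Cisco default:

```python
import re

# Two of the busy interfaces from the `show interface` output above.
SAMPLE = """\
GigabitEthernet2/3 is up, line protocol is up
  5 minute input rate 99765230 bits/sec, 24912 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
GigabitEthernet2/4 is up, line protocol is up
  5 minute input rate 1000 bits/sec, 27 packets/sec
  5 minute output rate 101002134 bits/sec, 25043 packets/sec
"""

def busy_ports(show_output: str, threshold_pps: int = 20000):
    """Return interfaces whose input or output pps rate exceeds the threshold."""
    busy, current = [], None
    for line in show_output.splitlines():
        m = re.match(r"^(\S+) is ", line)
        if m:                      # a new interface header line
            current = m.group(1)
            continue
        m = re.search(r"(\d+) packets/sec", line)
        if m and current and int(m.group(1)) >= threshold_pps:
            busy.append(current)
            current = None         # report each interface only once
    return busy

print(busy_ports(SAMPLE))  # ['GigabitEthernet2/3', 'GigabitEthernet2/4']
```

In practice you would paste in the full filtered output from step 1; any interface the script flags is a candidate for the trunk-by-trunk walk in step 2.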

The OpenVPN Audit Begins February 15th 2017

The OpenVPN audit is going to be carried out as planned by QuarksLab’s Gabriel Campana and Jean-Baptiste Bedrune on February 15th 2017. There will be 90 man-days of work completed throughout this audit and it will take approximately 45 days to complete.


During this time period, we will work with the OpenVPN team to address issues as they are found on the fly. Once we are confident that the issues have been addressed, we will release the full results of the audit to the public. We expect to have final results released to the public by April 7th 2017.


The details about the value of this effort and the power of OSTIF are below:

Why OpenVPN?

OpenVPN is a cross platform and well supported Virtual Private Network application that is free to use for everyone around the world. It is available for all versions of Windows, OSX, Android, iOS, Chrome OS, most versions of Linux, and BSD. It is used worldwide by businesses to protect information that goes over insecure networks, and increasingly it is used by private users who want to protect their information from cybercrime and surveillance. OpenVPN is powerful, free, and used by millions of people. An audit of OpenVPN will add further integrity to the software, allowing users around the world to trust that the software is strong and resistant to intrusion.

Why now?

OpenVPN version 2.4 has just entered beta, with a full release to soon follow. As the first major revision of the software to be released in years, it has a huge number of bug fixes and feature changes under the hood, giving an audit a great opportunity to review these new areas of code that will likely persist for years with small changes until OpenVPN 2.5 is ready to go. It is the most effective time for an audit to take place.


We are an independent advocate for free and open software that improves the security of users around the world. We are efficient, open, and ready to tackle the problems that face free software. We have just completed our audit of VeraCrypt 1.18 a few months ago, and we are striking while the iron is hot to continue making harder and more powerful security software for all of us.

Early credits for groups that are contributing to the effort:

Leading the effort, the groups that have made significant contributions to the cause:
iPredator has contributed 10 BTC or about $7700
OpenVPN Technologies Inc. has contributed $5000
Perfect Privacy has contributed $3500
ExpressVPN has contributed $2500
SmartDNSProxy has contributed $2500
iVPN has contributed $2100
VikingVPN has contributed $2000
ZorroVPN has contributed $1600
GetFlix has contributed $1350
CryptoStorm has contributed $1337
TrickByte has contributed $1150
NordVPN has contributed $1000
BlackVPN has contributed $1000

Also contributing to the effort:
FatDisco has contributed $650
BestSmartDNS has contributed $600
StrongVPN has contributed $500
Windscribe has contributed $500
BolehVPN has contributed $300
ThatOnePrivacySite has contributed $200. They have also agreed to add a field to their famous VPN Comparison Chart showing which VPN services contribute back to the privacy community (under activism – gives back to privacy causes).
BestVPN has contributed $300 and plans to write an article about the fundraiser, helping us reach out to the privacy community!
InvizBox has contributed $100
Ender Informatics GmbH has contributed $100



OpenVPN is the industry standard for private VPNs. I wish both OpenVPN and the audit the utmost success. We all need OpenVPN to continue to be our flag-bearing privacy software.


WARNING: Parents told DESTROY talking doll as paedos could use it to watch YOUR children

PARENTS in Germany have been told to destroy a ‘Big Brother’ child’s doll because it can be used for spying.



Parents told to destroy talking doll because it could be used for hacking

Officials fear the doll called ‘My Friend Cayla’ – which is linked to the Internet – can be easily hijacked by hackers or paedophiles.

Consumer watchdog officials say My Friend Cayla is equipped with an insecure Bluetooth device, which cyber-criminals could take over in order to steal personal data and listen and talk to the child playing with it.

Germany’s Federal Network Agency, the regulatory office for electricity, gas, telecommunications, post and railway markets, now tells parents: “Destroy it immediately.”



Officials fear the doll can be easily hijacked by hackers or paedophiles.

A spokesman for the agency said My Friend Cayla was a “concealed transmitting device” – illegal in Germany.

The doll answers questions by accessing the Internet but also asks for sensitive personal information, such as the user’s name, school, parents’ names and hometown.

Manufacturer Genesis Toys of the USA has not yet commented on the warning.



Congratulations to the German regulators for this sound advice to parents.

It beggars belief that the toy vendor considered connecting a doll to the internet, when the toy is clearly designed for children.



Will You Graduate? Ask Big Data – AI predicts those who will graduate

“You could get a C or an A in that first nursing class and still be successful,” said Timothy M. Renick, the vice provost. “But if you got a low grade in your math courses, by the time you were in your junior and senior years, you were doing very poorly.”

The analysis showed that fewer than 10 percent of nursing students with a C in math graduated, compared with about 80 percent of students with at least a B+. Algebra and statistics, it seems, were providing an essential foundation for later classes in biology, microbiology, physiology and pharmacology.


A little less than half of the nation’s students graduate in four years; given two more years to get the job done, the percentage rises to only about 60 percent. That’s no small concern for families shouldering the additional tuition or student debt (an average of more than $28,000 on graduation, according to a 2016 College Board report). Students who drop out are in even worse shape. Such outcomes have led parents and politicians to demand colleges do better. Big data is one experiment in how to do that.


Different courses at different universities have proved to be predictors of success, or failure. The most significant seem to be foundational courses that prepare students for higher-level work in a particular major. Across a dozen of its clients, the data-analytics company Civitas Learning found that the probability of graduating dropped precipitously if students got less than an A or a B in a foundational course in their major, like management for a business major or elementary education for an education major. El Paso Community College’s nursing hot spot was a foundational biology course. Anyone who got an A had a 71 percent chance of graduating in six years; those with a B had only a 53 percent chance.

At the University of Arizona, a high grade in English comp proved to be crucial to graduation. Only 41 percent of students who got a C in freshman writing ended up with a degree, compared with 61 percent of the B students and 72 percent of A students.

“We always figured that if a student got a C, she was fine,” said Melissa Vito, a senior vice provost. “It turns out, a C in a foundation course like freshman composition can be an indicator that the student is not going to succeed.” The university now knows it needs to throw more resources at writing, specifically at those C students.

At Middle Tennessee State University, History 2020, an American history course required for most students, has been a powerful predictor. The most consistent feature for those who did not graduate was that they received a D in it. “History is a heavy reading course,” said Richard D. Sluder, vice provost for student success, “so it signifies a need for reading comprehension.”

Before predictive analytics, Dr. Sluder said, many of the D’s went unnoticed. That’s because advisers were mainly monitoring grade-point averages, not grades in specific courses. “You take a student who’s getting A’s and B’s and you see a C in this one class,” he said, “and you’d say, ‘He looks fine.’ But, really, he was at risk.”
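The advising blind spot Dr. Sluder describes can be illustrated with a tiny sketch. The course names, grades, and the below-a-B cutoff here are invented for illustration, not taken from any university's actual model:

```python
# Hypothetical: two students with identical GPAs, but only one has a weak
# grade in a foundational course. A GPA-only view cannot tell them apart.
FOUNDATIONAL = {"Psych 1100"}
AT_RISK_CUTOFF = 3.0  # below a B in a foundational course (illustrative)

students = {
    "student_a": {"Psych 1100": 4.0, "Elective 1": 2.0, "Elective 2": 3.0},
    "student_b": {"Psych 1100": 2.0, "Elective 1": 4.0, "Elective 2": 3.0},
}

def gpa(grades):
    return sum(grades.values()) / len(grades)

def at_risk(grades):
    # The course-level check separates the students where GPA cannot.
    return any(g < AT_RISK_CUTOFF
               for course, g in grades.items() if course in FOUNDATIONAL)

for name, grades in students.items():
    print(f"{name}: GPA {gpa(grades):.2f}, at risk: {at_risk(grades)}")
```

Both students show a 3.00 GPA, yet only the one with a C in the foundational course is flagged, which is exactly the signal an adviser watching GPAs alone would miss.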

Such insight may revolutionize the way student advising works.

One woman who was planning to major in psychology had taken introductory psychology and three other courses as a freshman in the fall of 2014. She earned three A’s and a C.

“It was a pretty decent start,” Ms. Mercer said. “But guess what? The C was in Psych 1100.” In the spring of 2015, the student signed up for five classes. She withdrew from one. The next semester she withdrew from three of her five classes. This fall she took four classes and withdrew from all of them.

“It was just what the analytics had predicted,” Ms. Mercer said. “I tend to be a little skeptical. It wasn’t until I dove into the records and I saw, ‘Yes, indeed, this is a problem.’ ”



What is the Legal status of Kodi users in the UK – BBC Radio 5

In recent weeks, the legality or otherwise of so-called fully-loaded Kodi boxes has become a big topic. The devices are massively popular in the UK but are people going to get busted for using them? Almost certainly not, an intellectual property lawyer told the BBC this morning.


Unlike most other kinds of unauthorized online sharing, the way content is delivered through Kodi has exposed a whole new legal gray area. While it’s definitely illegal in Europe and the US to share copyrighted content without permission using BitTorrent, no one is really clear whether streaming content via Kodi has the same status.

In recent weeks, this has led to the publication of dozens of articles which claim to answer that question. Upon review, none of them actually do, so the topic remains hot in the UK.

To that end, BBC 5 Live ran a pretty long feature this morning which had host Adrian Chiles discussing the topic with FACT chief Kieron Sharp, intellectual property lawyer Steve Kuncewicz and technical guy Tom Cheesewright who really knew what he was talking about.

The start of the interview was marked by Chiles noting that when he found out what a Kodi device could do, he immediately wanted one.


“I’d never heard of them,” he said. “I heard what they were and then I wanted one. And then someone told me that they’re probably illegal, so I better not get one.”

Chiles’ reaction is probably held in common with millions of others who’ve learned about what Kodi devices can do. There’s a clear and totally understandable attraction, and it was helpful for the broadcaster to acknowledge that.

After a brief technological description from Cheesewright, Chiles turned to IP lawyer Steve Kuncewicz, who was asked where the law stands. His response was fairly lengthy but clearly focused on the people supplying the devices.

“You’ve got big content producers like HBO that are used to producing premium content that people pay for,” Kuncewicz said.

“Where they are directing their attention is on the people who sell these boxes loaded with software that lets you get around paying a subscription.”


The lawyer acknowledged that there are some ongoing cases in the UK which involve suppliers of devices which effectively allow users to get around copyright protection.

“That’s been the focus of the strategy and it’s a big, big, big issue,” he said.

But for those who know Chiles’ down-to-earth style, it was always obvious that he would want to know how the law views the man in the street.

“From the punter’s point of view, if you’re watching something made by HBO that Netflix would hope that you’d be paying them to watch, but you’re watching it for free via your Kodi stick, then are you going to get a knock on the door?” Chiles asked.

Chiles didn’t get a straight answer about the law, but after a breath, Kuncewicz offered the reality.

“In all likelihood, no,” the lawyer responded.

Noting that there have been cases against file-sharers, the IP expert said that there is a difference – a legal gray area – when it comes to streaming versus file-sharing.


“What tends to happen is that the content providers go after the ISPs, they go after platforms [offering pirate content], not the individual people,” he said, adding that getting a knock on the door at home would be fairly unlikely.



Trump elected by Big Data – the impact of Cambridge Analytica

Imagine the influence of a London-based company which acted as the catalyst that powered both BREXIT and President Trump’s campaign to success.

What if I told you that Trump was elected by Big Data analysis, carried out by a British company, and that this company can swing elections.  You probably already know the strength of Cambridge Analytica in winning elections, but the video below is for those who may not have realised what was happening.

Here’s the Trump campaign video:

Every single adult in America has been analysed by Cambridge Analytica. The campaign message was then tailored to each individual’s personality.

So before you get bored: I always warned you about the dangers of Big Data. This is one of the side effects – one British company can make Presidents.


In Europe there are several elections in the next year.  How much would you pay Cambridge Analytica to win an election for you….

By the way, Alexander Nix is a great name.  I wonder if it’s real, or provided as a result of data analysis?



The Kali Linux Certified Professional

Introducing the KLCP Certification

After almost two years in the making, it is with great pride that we announce today our new Kali Linux Professional certification – the first and only official certification program that validates one’s proficiency with the Kali Linux distribution.

If you’re new to the information security field, or are looking to take your first steps towards a new career in InfoSec, the KLCP is a “must have” foundational certification. Built on the philosophy that “you’ve got to walk before you can run,” the KLCP will give you direct experience with your working environment and a solid foundation toward a future with any professional InfoSec work. As we continually see, those entering the Offensive Security PWK program with previous working experience with Kali, and a general familiarity with Linux, tend to do better in the real world OSCP exam.

For those of you who already have some experience in the field, the KLCP provides a solid and thorough study of the Kali Linux Distribution – learning how to build custom packages, host repositories, manage and orchestrate multiple instances, build custom ISOs, and much, much, more. The KLCP will allow you to take that ambiguous bullet point at the end of your resume – the one that reads “Additional Skills – familiarity with Kali Linux”, and properly quantify it. Possession of the KLCP certification means that you have truly mastered the Kali penetration testing distribution and are ready to take your information security skills to the next level.

The KLCP exam will be available via Pearson VUE exam centres worldwide after the Black Hat USA 2017 event in Las Vegas.


“Kali Linux Revealed” Class at Black Hat USA, 2017

This year, we are fortunate enough to debut our first official Kali Linux training at the Black Hat conference in Las Vegas, 2017. This in-depth, four-day course will focus on the Kali Linux platform itself (as opposed to the tools, or penetration testing techniques), and help you understand and maximize the usage of Kali from the ground up. Delivered by Mati Aharoni and Johnny Long, this four-day class will teach you how to:

  • Gain confidence in basic Linux proficiency, fundamentals, and the command line.

  • Install and verify Kali Linux as a primary OS or virtual machine, including full disk encryption and preseeding.

  • Use Kali as a portable USB distribution including options for encryption, persistence, and “self-destruction”.

  • Install, remove, customize, and troubleshoot software via the Debian package manager.

  • Thoroughly administer, customize, and configure Kali Linux for a streamlined experience.

  • Troubleshoot Kali and diagnose common problems in an optimal way.

  • Secure and monitor Kali at the network and filesystem levels.

  • Create your own packages and host your own custom package repositories.

  • Roll your own completely customized Kali implementation and preseed your installations.

  • Customize, optimize, and build your own kernel.

  • Scale and deploy Kali Linux in the enterprise.

  • Manage and orchestrate multiple installations of Kali Linux.


Linux Performance

This page links to various Linux performance material I’ve created, including the tools maps on the right. The first is a hi-res version combining observability, static performance tuning, and perf-tools/bcc (see discussion). The remainder were designed for use in slide decks and have larger fonts and arrows, and show: Linux observability tools, Linux benchmarking tools, Linux tuning tools, and Linux sar. For even more diagrams, see my slide decks below.





In rough order of recommended viewing or difficulty, intro to more advanced:

1. Linux Systems Performance (PerconaLive 2016)

This is my summary of Linux systems performance in 50 minutes, covering six facets: observability, methodologies, benchmarking, profiling, tracing, and tuning. It’s intended for people who have limited appetite for this topic.

A video of the talk is on youtube, and the slides are on slideshare or as a PDF.

For a lot more information on observability tools, profiling, and tracing, see the talks that follow.

2. Linux Performance Tools (Velocity 2015)

At Velocity 2015, I gave a 90 minute tutorial on Linux performance tools, summarizing performance observability, benchmarking, tuning, static performance tuning, and tracing tools. I also covered performance methodology, and included some live demos. This should be useful for everyone working on Linux systems. If you just saw my PerconaLive2016 talk, then some content should be familiar, but with many extras: I focus a lot more on the tools in this talk.

A video of the talk is on youtube (playlist; part 1, part 2) and the slides are on slideshare or as a PDF.

This was similar to my SCaLE11x and LinuxCon talks, however, with 90 minutes I was able to cover more tools and methodologies, making it the most complete tour of the topic I’ve done. I also posted about it on the Netflix Tech Blog.

3. Broken Linux Performance Tools (SCaLE14x, 2016)

At the Southern California Linux Expo (SCaLE 14x), I gave a talk on Broken Linux Performance Tools. This was a follow-on to my earlier Linux Performance Tools talk originally at SCaLE11x (and more recently at Velocity as a tutorial). This broken tools talk was a tour of common problems with Linux system tools, metrics, statistics, visualizations, measurement overhead, and benchmarks. It also includes advice on how to cope (the green “What You Can Do” slides).

A video of the talk is on youtube and the slides are on slideshare or as a PDF.

4. Linux Profiling at Netflix (SCaLE13x, 2015)

At SCaLE 13x, I gave a talk on Linux Profiling at Netflix using perf_events (aka “perf”), covering CPU profiling and a tour of other features. This talk covered gotchas, such as fixing stack traces and symbols when profiling Java and Node.js.

A video of the talk is on youtube, and the slides are on slideshare or as a PDF.

In a post about this talk, I included the interactive CPU flame graph SVG I was demonstrating.

5. Linux Performance Analysis: New Tools and Old Secrets (ftrace) (LISA 2014)

At USENIX LISA 2014, I gave a talk on the new ftrace and perf_events tools I’ve been developing: the perf-tools collection on github, which mostly uses ftrace: a tracer that has been built into the Linux kernel for many years, but few have discovered (practically a secret).

A video of the talk is on youtube, and the slides are on slideshare or as a PDF.

The perf-tools collection, inspired by my earlier DTraceToolkit, provides advanced system performance analysis tools for Linux. Each tool has a man page and example file. They are unstable and unsupported, and they currently use shell scripting, hacks, and the ftrace and perf_events kernel tracing frameworks. They should be developed (and improved) as the Linux kernel acquires more tracing capabilities (eg, eBPF).

In a post about this talk, I included some more screenshots of these tools in action.

6. Give me 15 minutes and I’ll change your view of Linux tracing (LISA, 2016)

I gave this demo at USENIX/LISA 2016, showing ftrace, perf, and bcc/BPF. A video is on youtube.

7. BPF: Tracing and More (LCA, 2017)

This talk introduces enhanced BPF (aka eBPF) for Linux, summarizes different use cases, and then covers tracing/observability in detail. I’ve given BPF tracing talks before; this time I covered some other topics and included an extended live BPF demo.

A video of the talk is on youtube, and the slides are on slideshare or as a PDF.

For another talk on BPF that discusses off-CPU analysis in detail, see my earlier talk Linux 4.x Performance: Using BPF Superpowers from Performance@Scale 2016.

8. Performance Checklists for SREs (SREcon, 2016)

At SREcon 2016 Santa Clara, I gave the closing talk on performance checklists for SREs (Site Reliability Engineers). The latter half of this talk included Linux checklists for incident performance response. These may be useful whether you’re analyzing Linux performance in a hurry or not.

A video of the talk is on youtube and usenix, and the slides are on slideshare and as a PDF. I included the checklists in a blog post.

9. What Linux can learn from Solaris performance and vice-versa (SCaLE12x, 2014)

At SCaLE 12x, I gave the keynote on What Linux can learn from Solaris performance and vice-versa. This drew on my prior experiences doing head-to-head comparisons, and work for the Systems Performance book. I’d never seen a good talk comparing the performance features of both; I suspect that’s in part because it’s hard to know them both in enough depth, and also hard to choose which of the many differences should be highlighted.

A video of the talk is on youtube, and the slides are on slideshare and as a PDF.

This presentation also contains ponies. Lots of ponies. These are the unofficial mascots for DTrace, perf_events, SystemTap, ktap, and LTTng, and were designed by the same person (Deirdré) who designed the original (and popular) DTrace ponycorn.





DuckDuckGo Hits Milestone 14 Million Searches in a Single Day

DuckDuckGo revealed it has hit a milestone of 14 million searches in a single day. In addition, the search engine is celebrating a combined total of 10 billion searches performed, with 4 billion searches conducted in 2016 alone.


For a niche search engine that many people don’t know exists, that’s some notable year-over-year growth. Around this same time last year, DuckDuckGo was serving 8–9 million searches per day on average.

DuckDuckGo is a search engine which has built its reputation on privacy and transparency. All searches are performed anonymously, meaning the company doesn’t track or record data about its users. It’s also one of the most transparent search engines in the sense that it makes its own data publicly available for everyone to see.

According to the company, it is growing faster than ever, which could be credited to the fact that people are actively looking for ways to reduce their digital footprint. DuckDuckGo cites a study from Pew Research which states: “40% think that their search engine provider shouldn’t retain information about their activity.”


Staying true to its mission, the company donated $225,000 to nine organizations that are also dedicated to raising the standard of trust online. DuckDuckGo is on the hunt for privacy-focused organizations to donate to this year, so if you have any in mind give them a shout.



Corruption of Crypto by the NSA

Here’s an interesting blog article, on how the NSA succeeded in weakening encryption.

Speaking as someone who followed the IPSEC IETF standards committee
pretty closely, while leading a group that tried to implement it and
make it so usable that it would be used by default throughout the
Internet, I noticed some things:
  *  NSA employees participated throughout, and occupied leadership roles
     in the committee and among the editors of the documents

  *  Every once in a while, someone not an NSA employee, but who had
     longstanding ties to NSA, would make a suggestion that reduced
     privacy or security, but which seemed to make sense when viewed
     by people who didn't know much about crypto.  For example, 
     using the same IV (initialization vector) throughout a session,
     rather than making a new one for each packet.  Or, retaining a
     way for this encryption protocol to specify that no encryption
     is to be applied.

  *  The resulting standard was incredibly complicated -- so complex
     that every real cryptographer who tried to analyze it threw up
     their hands and said, "We can't even begin to evaluate its
     security unless you simplify it radically".

     That simplification never happened.

     The IPSEC standards also mandated support for the "null"
     encryption option (plaintext hiding in supposedly-encrypted
     packets), for 56-bit Single DES, and for the use of a 768-bit
     Diffie-Hellman group, all of which are insecure and each of which
     renders the protocol subject to downgrade attacks.

  *  The protocol had major deployment problems, largely resulting from
     changing the maximum segment size that could be passed through an
     IPSEC tunnel between end-nodes that did not know anything about
     IPSEC.  This made it unusable as a "drop-in" privacy improvement.

  *  Our team (FreeS/WAN) built the Linux implementation of IPSEC, but
     at least while I was involved in it, the packet processing code
     never became a default part of the Linux kernel, because of
     bullheadedness in the maintainer who managed that part of the
     kernel.  Instead he built a half-baked implementation that never
     worked.  I have no idea whether that bullheadedness was natural,
     or was enhanced or inspired by NSA or its stooges.
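The IV-reuse suggestion mentioned above is not a subtle weakness. A toy sketch (pure Python, with a random byte string standing in for a stream cipher's keystream under one fixed IV; this is not IPSEC code) shows how reuse lets an eavesdropper cancel the keystream entirely:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Stands in for the cipher output produced under a single, reused IV.
keystream = os.urandom(32)

p1 = b"ATTACK AT DAWN".ljust(32, b".")
p2 = b"RETREAT AT ONCE".ljust(32, b".")

c1 = xor(p1, keystream)   # "packet 1"
c2 = xor(p2, keystream)   # "packet 2" -- same keystream reused

# The attacker never sees the keystream, yet XORing the two ciphertexts
# cancels it and leaks the XOR of the plaintexts:
assert xor(c1, c2) == xor(p1, p2)

# Wherever one plaintext is known or guessable (headers, padding),
# the other is exposed outright:
recovered_p2 = xor(xor(c1, c2), p1)
assert recovered_p2 == p2
```

The same failure applies to the cellphone scheme described at the end of the email: XORing every voice packet with the same bit string is exactly this scenario, applied to every packet in the call.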

In other circumstances I also found situations where NSA employees
explicitly lied to standards committees, such as that for cellphone
encryption, telling them that if they merely debated an
actually-secure protocol, they would be violating the export control
laws unless they excluded all foreigners from the room (in an
international standards committee!).  The resulting paralysis is how
we ended up with encryption designed by a clueless Motorola employee
-- and kept secret for years, again due to bad NSA export control
advice, in order to hide its obvious flaws -- that basically XOR'd
each voice packet with the same bit string!  Their "encryption"
scheme for the control channel, CMEA, was almost as bad, being
breakable with 2^24 effort and small numbers of ciphertexts.


To this day, no mobile telephone standards committee has considered
or adopted any end-to-end (phone-to-phone) privacy protocols.  This is
because the big companies involved, huge telcos, are all in bed with
NSA to make damn sure that working end-to-end encryption never becomes
the default on mobile phones.

        John Gilmore