“Wind extinguishes a candle and energizes fire. Likewise with randomness, uncertainty, chaos: you want to use them, not hide from them. You want to be like the fire, and wish for the wind.” – Nassim Taleb, Antifragile
Yahoo, eBay, JP Morgan Chase, Home Depot, Target, Adobe, Sony's PlayStation Network: all of these organizations have suffered data breaches affecting millions of people, exposing names, emails, addresses, phone numbers and passwords.
With the explosion of personal information collection and the growing prevalence of "know your customer" frameworks, the consequences of bad security have never been greater. But in a world where organizations are spending billions on more and more layers of shielding around their servers, Satoshi Nakamoto did the unthinkable: he built a system that purposefully broadcasts people's transactional data to everyone who cares to listen.
The significance of this move has been under-considered as a result of the persistent misconception that Bitcoin is fundamentally about anonymity. In this piece I want to dive deeper into the philosophy behind the security of the Bitcoin network, and how it’s turning traditional understanding of security on its head.
On Security By Correctness
“Security by correctness” is a security paradigm that, simply put, tries to avoid vulnerability exploits by making sure that there are no vulnerabilities in the first place.
The problem with security by correctness is the increasing marginal cost of flaw identification. As bugs become more and more rare, any security system will increasingly have to contend with the paradox of the false positive, and the cost of security will increase exponentially with the cost of handling false positives. Cory Doctorow explains this phenomenon in Little Brother as follows:
“Terrorists are really rare. In a city of twenty million like New York, there might be one or two terrorists. Maybe ten of them at the outside. Assuming New York has twenty million people, a 99 percent accurate test will identify two hundred thousand people as being terrorists. But only 10 of them are terrorists. To catch ten bad guys, you have to haul in and investigate two hundred thousand innocent people.”
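The arithmetic behind Doctorow's paradox of the false positive is worth making explicit. A short sketch, using the hypothetical numbers from the quote above (twenty million people, ten terrorists, a 99 percent accurate test):

```python
# Base-rate fallacy: even a highly accurate test produces a flood of
# false positives when the condition being tested for is rare.
# All numbers are the illustrative ones from the Doctorow quote.

population = 20_000_000   # people in the city
terrorists = 10           # actual bad actors
accuracy = 0.99           # test correctly classifies 99% of people

# Innocent people wrongly flagged (false positives):
false_positives = (population - terrorists) * (1 - accuracy)

# Actual terrorists correctly flagged (true positives):
true_positives = terrorists * accuracy

# Probability that a flagged person really is a terrorist:
precision = true_positives / (true_positives + false_positives)

print(f"False positives: {false_positives:,.0f}")  # roughly 200,000 innocents
print(f"Precision: {precision:.6f}")               # well under 0.01%
```

Roughly two hundred thousand innocent people get flagged to catch ten, so the chance that any given flagged person is guilty is about one in twenty thousand. That is the exponentially rising cost of handling false positives that security by correctness runs into.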
Coding is difficult. It should be fair to assume that no matter what level of governance procedures a development team uses, vulnerabilities will always be present in software. This is why other paradigms of security don’t focus on eliminating bugs, but rather on the question of how to handle the fact that there will be bugs.
Security By Isolation
“Security by isolation” is a paradigm that assumes that software will have bugs, and instead of trying to remove them, it tries to isolate them. As the old adage goes “The only way to secure a server is to turn it off”.
Of course, servers can’t actually be turned off. But they can be isolated from the broader internet using intranets, firewalls and, in the most recent trend to hit isolationists, by using virtualization. Or at least that’s the theory.
Systems designed to be isolated are probably the most vulnerable of all to the biggest point of failure of any system: people. Isolated systems require that everyone accessing them enters their bubble of protection ‘clean’, does what they need to do, and then exits again without taking anything with them.
Systems that require this level of human competence fail all the time. For example, the US Centers for Disease Control and Prevention has regular exposure incidents with deadly bacteria and viruses. A famous computer virus example is Stuxnet, which is usually injected into networks using a flash drive. Studies have since shown that about 50% of people will plug in a flash drive that they find in a parking lot.
This has led some people to start making an even stronger assumption about security: what happens if we assume that all systems have vulnerabilities and that those vulnerabilities will necessarily be exploited? Following this line of reasoning leads to one of the most important insights about security:
“The important thing about security systems isn’t how they work. It’s how they fail.” – Cory Doctorow, Little Brother
“It does not matter how often something succeeds if failure is too costly to bear” – Nassim Taleb
Security By Exposure
Andreas Antonopoulos introduces the idea of Bitcoin security by analogy with soap. People in the developing world have fewer allergies than people in the developed world. The reason is simple: they’re exposed to viruses and bacteria all the time. They don’t disinfect everything that their kids play with. As a result, people develop an immune system.
Bitcoin is likewise exposed to attacks all the time. Thousands of nodes are sitting with open ports listening all day. Servers, websites and exchanges live under a constant barrage of DDoS attacks. The media is relentlessly proclaiming that Bitcoin is doomed to fail (as hilariously detailed in this Bitcoin Obituaries Song), and governments are increasingly cracking down on the network.
And yet, despite its unprecedented exposure to attack, the Bitcoin network hasn’t had a minute of downtime in almost a decade of existence. And despite the price “crashing” dozens of times, the digital currency refuses to die.
Andreas calls Bitcoin the software “sewer rat”. It’s walking around carrying six different strains of the bubonic plague, and it just brushes them off like it’s nothing. Because that’s all it’s ever known.
At its core, this is an expression of antifragility: Nassim Taleb’s term for systems that aren’t merely fragile or even robust in the face of chaos, but actually grow stronger when exposed to stress. Bitcoin exhibits this property because of its decentralization. If you want to rob a bank, there’s a single point of failure that you can attack. If you want to steal Bitcoins, you need to go after multiple points of failure, because there is no trusted centralized third party. While these attacks can succeed against individuals, the network as a whole grows stronger in response to them.
One part of this defense mechanism has been the curious extent to which the general public is now taking a serious interest in security. Suddenly everyone and their parents are checking for HTTPS, using two-factor authentication, checking URLs for phishing, et cetera.
Andreas explains that “banking systems have bugs, and our systems have bugs, but the difference is that we find out about ours immediately and respond to them. But when a banking system’s exploits are found, it’s bad”.
Indeed, the question of severity of failure is at the core of Taleb’s writings on antifragility. To him, stability anywhere is always a “ticking time bomb”. For example, he explains how the absence of fire leads to the accumulation of flammable material, leading to catastrophic devastation when a spark finally catches. If the area had suffered frequent, smaller fires, the flammable material would have been cleared with minimal damage.
Likewise, when software systems are isolated, their flaws accumulate over time, leading to spectacular failure when the security bubble is eventually breached.
The point being made here is deeper than it might first appear. What is really being argued is that security measures designed only to influence the frequency of breaches will inevitably also impact the severity of breaches. And, often, the severity increase is not linear but exponential.
Of course, along the way, some people are being burned. For the system as a whole to grow stronger, individuals will inevitably lose out. Whether this is worth it overall is very much open for debate. As for me, I like the idea of being with team sewer rat.