Security through Obscurity. Why is open source software safer than closed source software?

Right off the bat – yes, an open system is more secure than a closed system, no doubt about it: the most secure piece of software in the world is OpenBSD, where not a single line of closed code is allowed. Now let’s get to the question of why.

For people who are not in the IT industry, and often even for people who are, it’s better to talk in analogies, because IT is a very abstract field.

So now we’ll take a very simple real-world example that everyone can understand. That example is a prison building, which by its very nature has to be very well secured. We don’t want prisoners escaping like rats from a sinking ship. Logic dictates that everything should be hidden, concealed, covered up. Including the prison plan. So that no one knows or learns anything, and therefore can’t use the knowledge to escape. But prisoners have plenty of time to plan. Just like the so-called Black Hat hackers have. Real-life crackers. People who break into systems and do damage for their own gain. Their main advantages are their invisibility and the fact that they often have the tons of time it usually takes to breach security.

Now, back to the prison plans. As the architects and future builders of the prison, we will surely bring in a security consultant to work with us. But we show the plans to only a handful of consultants, who, depending on their time and experience, will help expose some of the flaws. We’ll then paint all the plans black. In software, this is the equivalent of compiling the code, obfuscating it, and making the source unavailable. However, this does not get rid of the bugs and security flaws. They’re still there, just as dangerous as they were before – they’re just harder to find. And it’s the prisoners who have a lot of time on their hands, so they expose and share them, for fun and for an escape plan. We did not actually make our prison any safer by making the plan unavailable; we just mistakenly thought we did.
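
To translate the painted-black plans into code: below is a minimal sketch (the function and the password are invented for illustration) of the kind of flaw that compiling and obfuscating merely hides.

```c
#include <stdio.h>
#include <string.h>

/* Invented example, not taken from any real project. The
 * fixed-size buffer can be overflowed by a long input, and that
 * flaw ships in the compiled binary whether or not the source is
 * ever published. Obfuscation hides the source, not the bug. */
static int check_password(const char *input) {
    char buf[16];
    strcpy(buf, input);                 /* no bounds check: classic overflow */
    return strcmp(buf, "s3cret") == 0;
}

int main(void) {
    char line[256];
    if (fgets(line, sizeof line, stdin)) {
        line[strcspn(line, "\n")] = '\0';   /* strip trailing newline */
        puts(check_password(line) ? "access granted" : "access denied");
    }
    return 0;
}
```

Anyone fuzzing the compiled binary will hit the crash on a long input without ever seeing a line of source – hiding the code slows down the honest reviewers far more than it slows down the prisoners.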

Now let’s do it the other way around. Let’s put our prison plans publicly on the internet. There, the broad professional community of prison architects has the opportunity to comment on the dangers lurking in the plans. And they can keep commenting on the modifications and structural changes to our prison, because we make them frequently – both to improve security and to act on the community’s advice. And strangely enough, that community can include ex-convicts from other prisons, who can also advise us on what to change and where, how they would attack the system, and where they see the weakest point. None of this happens if we keep the plans secret and hidden.

Okay – a prison is not an ideal demonstration, but it should be clear that painting a leaky box black does not plug the holes. The human brain says that locking a system down makes it safe. It’s exactly the opposite. There are dozens of articles on the Internet that explain this issue in more detail, but the principal idea is that potential security holes are found and exposed much more quickly in open source code than in closed source code.

There is probably no better example than the aforementioned OpenBSD, which has had only two remote holes in the default install in its entire existence, compared to Windows and other systems. The philosophy of Theo de Raadt is that absolutely everything must be open in order for the code to be auditable. That is why OpenBSD is the most secure system in the world, and that is an irrefutable fact. And based on this assumption, open systems can be classified as much more secure than closed systems that rely on security through obscurity. The more cryptically I hide a security flaw in a system, the less likely an attacker is to find it. This may hold for a while: EternalBlue, an exploit developed by the NSA, was unknown to the public for years, and if a hacking group had not leaked it, it might still be undiscovered to this day. However, the flaw still exists, and unlike in an open system – where even an auditing tool run by a security expert on the other side of the planet can find it in the code and report it – such a flaw never gets fixed. Exploiting a flaw that has gone undiscovered for so long can have devastating consequences: EternalBlue gave rise to the WannaCry ransomware, which caused an estimated $4 billion in damage and remains one of the worst pieces of malware ever created.
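
To make the “audit from the other side of the planet” point concrete: here is a minimal sketch (the file, the function and the bug are invented for illustration) of a flaw that off-the-shelf static analyzers can report straight from published source code, with no insider access.

```c
/* vuln.c -- invented snippet, only to show the kind of bug an
 * automated audit can catch once the source is public, e.g. with
 *
 *     gcc -fanalyzer -c vuln.c     (GCC 10 or newer)
 *     clang --analyze vuln.c       (Clang static analyzer)
 */
#include <stdlib.h>
#include <string.h>

char *make_greeting(const char *name) {
    char *msg = malloc(64);
    if (msg == NULL)
        return NULL;
    strcpy(msg, "hello ");
    free(msg);              /* freed too early...                     */
    strcat(msg, name);      /* ...use-after-free: analyzers flag this */
    return msg;
}
```

Against a closed binary, finding the same bug means reverse engineering – exactly the slow, patient work that attackers, unlike most honest reviewers, are willing to invest.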

In software, open is safer than closed, even though intuition might say otherwise.