It’s the eggheads who let us down

A professor from SMU in Dallas tried to make a name for himself this week. Unfortunately, he did so by lying about the state of the industry in a way that the general public will believe. Worse for all of us, he did it in a mainstream outlet with credibility outside the tech world.

When I first read the article, I had to check to make sure this was not The Onion. It had all of the markings of lazily crafted satire, but alas it was a serious outlet.

Security professionals (and most Information Technology professionals) understand why this article was a bunch of straw-man arguments (a logical fallacy in which one rebuts an argument that was never made in the first place), but the public does not. Further, he layers these fallacies in an effort to deflect attention from massive executive incompetence at Colonial Pipeline.

Gwinn starts with a massive argument from a false premise:

The hacking at Colonial Pipeline is the latest in a series of breaches that have impacted a long-and-growing list of other businesses — all ambushed by some individual or group that managed to hack through cyber security “industry best practices.”

He claims that Colonial Pipeline implemented industry best practices, which he later takes issue with. Evidence suggests that this couldn’t be further from the truth. The Associated Press reported last week on an external audit from 3 years ago that describes “atrocious” information management practices. They even secured a quote from the consultant hired to produce that report: “I mean an eighth-grader could have hacked into that system.”

We don’t know the scope of this audit, but AP reports that it cost about $50,000, “was not directly focused on cybersecurity” and identified “Colonial’s inability to locate a particular maintenance document.” From this, we can infer that the audit was likely a broadly scoped, limited-depth compliance audit. Compliance is not security. This type of audit is unlikely to identify any but the most glaring security problems in a firm.

We do know that one of the recommendations was hiring a CISO with sufficient independence to do the job. Instead, Colonial hired somebody who reported to the CIO, and who thus lacked the independence necessary to implement changes. This independence is critical to avoid conflicts of interest between those who built a system and those who evaluate it. The apparent conflicts of interest at Colonial don’t stop there, with their CIO sitting on the advisory board of a security firm they later hired to evaluate their protections.

When asked about other recommendations, including Data Loss Prevention (DLP) and security awareness training, Colonial has given answers indicating insufficient implementation and, likely, insufficient resourcing. These are decisions made at the executive level and should thus fall on executive shoulders.

So now, we get to the crux of the issue. And why The Hill saw fit to publish such irresponsible claptrap. The core of Gwinn’s article is an attempt to distract from the impact of massively negligent executive decisions, preferring instead to blame the incident on the security eggheads that those same executives a) never hired, b) didn’t listen to and c) didn’t appropriately fund.

If the article only served to misdirect blame with respect to this incident, we could safely ignore it. But it doesn’t. It goes on to establish or reinforce massive misconceptions about Information Security, which will harm the industry well into the future.

These impressively credentialed professionals are skilled in the art of tedium. They know all about audits. They can absolutely push paper.

Information Systems are massively complex beasts. The enterprises they live in are also incredibly complex. Formalising the approach to security controls is critical to ensure that we have addressed risks. Audits are one part of that formalisation. They allow for verification that appropriate controls are actually in place. Smart audits (and smart auditors) are not built on a paper-pushing checkbox effort. They take a risk-focussed approach and evaluate a particular framework of controls as a whole against the risk landscape of an organisation and also verify the implementation of each individual control. They are also regularly repeated, because the risks faced by an enterprise change, the environment changes and the perceived effectiveness of controls changes over time.

They can painfully examine endless lists of accounts and identify exactly who does, and who does not, need system or service access… dictate that network administrators should be boxed in administratively… Server administrators, likewise, should be administratively restricted from being able to monitor network information or anything else that is not directly related to one specific niche job function… network engineer, for instance, does not have the tools or access to investigate the activity occurring on an innocuous sales department workstation at 3 a.m. A server administrator lacks the access to explore why the network throughput seems painfully slow while trying to copy files… The “good guys” are administratively prevented from having a holistic view of systems, networks, applications, workstations and other resources — when this holistic view is exactly what is needed to prevent cyber attacks.

This is Gwinn’s most pernicious line of argument, spread through the article. It directly attacks the Principle of Least Privilege – a core concept in Information Security. And it does so in an intentionally dishonest manner. Ultimately, decisions about who is granted what access are an executive prerogative, made with the advice of security professionals on the basis of risk assessment.

The Principle of Least Privilege is an information security concept that holds that an individual should have the minimum level of access required to perform their duties. This is absolutely critical in most attack scenarios, where an attacker is impersonating a “good guy” – the “system” can’t distinguish between the good guy and the bad guy. Allowing the “good guy” full access allows the attacker to pivot from impersonating a low-value target (like an entry-level server admin responsible for a static website) to attacking a high-value target (by intercepting communications that might include passwords). Instead, we limit what that particular “good guy” can do, so that a “bad guy” pretending to be him is similarly limited.
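To make the idea concrete, here is a minimal sketch of least privilege expressed as a role-to-permission mapping. The role names, actions and resources are hypothetical, chosen purely for illustration – real environments implement this through directory services, IAM policies or product-specific RBAC.

```python
# Minimal sketch of the Principle of Least Privilege as a role-to-permission map.
# Role names, actions and resources below are hypothetical, for illustration only.
ROLE_PERMISSIONS = {
    "web_server_admin": {"restart:web01", "deploy:static-site"},
    "network_engineer": {"read:netflow", "configure:switches"},
    "soc_analyst":      {"read:netflow", "read:endpoint-logs", "isolate:workstation"},
}

def is_allowed(role: str, action: str) -> bool:
    """An account may perform only the actions its role explicitly grants."""
    return action in ROLE_PERMISSIONS.get(role, set())

# An attacker who compromises the web server admin's account inherits only that
# narrow set of permissions - not the ability to read network traffic or pull
# endpoint logs across the estate.
assert is_allowed("web_server_admin", "deploy:static-site")
assert not is_allowed("web_server_admin", "read:netflow")
```

The value is in the shape of the control: a compromised account is only worth the narrow set of actions its role grants, which is exactly the limiting effect described above.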

The examples provided in this argument are themselves fallacies. Nobody has claimed that organisations structured this way are examples of “industry best practices.” Mature organisations have Security Information and Event Management (SIEM) solutions, which allow for a controlled view of security relevant events on the corporate network. Even stronger organisations make use of Security Orchestration, Automation and Response (SOAR) solutions that ensure that this information finds its way to the people that need it and can act on it.

So in the examples above, the network engineer can identify and flag anomalous activity without himself digging into the workstation. And that anomaly can be followed up on by somebody better suited to the task – a security engineer or a Security Operations Centre (SOC) analyst, for example.
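As a hedged illustration of that hand-off, the sketch below has the engineer (or an automated monitor) raise a structured event towards a SIEM collector instead of poking at the workstation directly. The hostnames, field names and syslog-style UDP transport are assumptions made for the example, not any particular product’s API.

```python
import json
import socket
from datetime import datetime, timezone

def raise_anomaly_event(source_host: str, description: str,
                        siem_host: str, siem_port: int = 514) -> None:
    """Send a JSON-formatted anomaly event to a syslog-style SIEM collector."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": source_host,
        "category": "network-anomaly",
        "description": description,
        "severity": "medium",
    }
    # UDP delivery keeps the example short; real SIEMs also offer HTTPS or
    # agent-based ingestion, and SOAR tooling routes the event onwards.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(json.dumps(event).encode("utf-8"), (siem_host, siem_port))

# Example usage with a placeholder collector address: the engineer records what
# was observed, and a SOC analyst - with the right access, and only the right
# access - picks the event up from the SIEM and investigates.
# raise_anomaly_event("sales-ws-042", "Unexpected outbound traffic at 03:00", "siem.example.internal")
```

The engineer’s job ends at recording what was seen; the follow-up lands with whoever holds the access needed to investigate it.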

Gwinn makes the “lone wolf” assumption. In his examples, the core problem isn’t that the network admin or server admin or developer lacks the permission to investigate; it’s the assumption that they are the only person who can investigate. Instead, we should ensure that the network admin, server admin and developer know they have a support team they can call on – one capable of following up and investigating.

The security approaches that existed before “industry best practices” really do work. Ask the next hacker who breaches security.

Nonsense. Sheer, unadulterated nonsense.

In 1988, the Morris worm brought down a large part of the Internet. Kevin Mitnick is known to have first breached a network in 1979, and he kept at it until his arrest in 1995 and conviction in 1999. The stakes just weren’t as high. Payment systems, electrical grids and pipelines weren’t connected to the Internet. Breaches were still happening. You just didn’t hear about them.

This argument is made all the richer by the fact that most process control systems (like the one running Colonial’s pipeline) are dragging along legacy code and systems developed in Gwinn’s mythical time before industry best practices. Process control systems move slowly and are resistant to change precisely because of the risk to life and safety they pose. As a result, industry best practices haven’t found their way into these systems. Risk used to be managed by not connecting these systems to the Internet, reducing their attack surface. But now these systems are being connected to the Internet (either directly, or by connecting them to corporate networks which are themselves connected). Code written and systems built during this “golden age” before security best practices are a huge part of the problem. The industry is only now starting to untangle the mess created by legacy code written without any real understanding or consideration of security.

Implement a “one strike and you are out” hiring policy for information security employees. When they fail, do not let it happen twice.

Also, never hire an information security employee who has ever worked for a firm that has had a security incident. Their “industry best practices” did not work for the previous employer, why would they work better for the next victim? These former employees bring disaster. 

Then we have this. After being raked over the coals, Gwinn has disavowed this statement, saying “I regret how I worded the sentence.” But he misses the mark even there. The entire concept is irretrievably wrong, and it is difficult to understand how, after nearly four decades in Information Technology, he could come even remotely close to this idea.

Information Security incidents happen. New attack techniques are developed all the time. New vulnerabilities are discovered all the time. Systems and code are used in new contexts with different risks. We learn from these, and build new protective controls. We decompose and segregate systems to limit the impact of a breach – to hem in attackers so they can’t move from a low-value target to a higher-value target. We limit risk because we can’t completely eliminate risk.

I’d be much more reluctant to hire an Information Security employee who claims his firms have never had a security incident. He’s either lying, or he lacked the visibility to know.

Ultimately, the goal of Gwinn’s article was to whitewash the poor decisions made by Colonial executives and pin blame for this incident on people who had no ability or authority to make any change. Security decisions are business decisions. They need to be evaluated with the same seriousness and rigour as risk management, corporate structuring and finance decisions. The result of not doing so is significant business continuity risk, and in this case substantial economic continuity risk.

The uncertainty created by Gwinn muddying the waters here will only serve to diffuse the legal responsibility of the executives who made these decisions. They’ll point to articles like this and say “How could we make a decision when even the industry doesn’t agree on the way forward?” The industry does agree on the way forward – it’s the path directly away from practitioners like Gwinn.