Patching the Iron Tail Is Easier Said Than Done
Cyber Defense Magazine, August 13, 2019, by Willy Leichter, Vice President of Marketing, Virsec
Challenges with Patching Industrial Control Systems Leave Significant Risk
Everyone knows that you should patch your application servers as often as possible. You should also brush your teeth, eat your broccoli and call your mother. But all good intentions aside, we know that in practice, server patching falls woefully behind in many organizations, even ones with efficient and security-minded IT. There are good reasons why patching gets put off – it’s often difficult, time-consuming, disruptive, or even impossible.
Given the substantial number of known and unknown vulnerabilities affecting users, applications and critical infrastructure, conventional wisdom is that patching vulnerabilities should be at the top of your security to-do list. But in reality, there is a disconnect between security strategies and practical reality. According to Gartner, “the lofty goal of ‘patch everything, all the time, everywhere’ is not only rarely fulfilled, but it is also causing friction between IT security and IT operations.”
The Hidden Costs of “Doing the Right Thing”
The risks of falling behind on patching are highly publicized. For example, the recent WannaCry attacks exploited the Windows SMBv1 vulnerability, using the EternalBlue tools originally created by the NSA. This vulnerability affected Windows XP systems which, Microsoft would have you believe, have all long since been retired and no longer receive patches. Yet this attack and others like it painfully exposed the fact that millions of Windows XP systems are still running legacy, mission-critical applications.
This caused lots of soap-box lecturing that unpatched servers were the culprit, and organizations need to take security more seriously. But this kind of finger-pointing ignores the practical decision making and security tradeoffs that many businesses face. While no organization wants to be the victim of the next cyberattack, the abstract security fear can easily take a back seat to the more immediate labor and disruption costs of “doing the right thing”. Faced with this, even the most diligent teams find it easy to kick the can down the road and deal with more immediate day-to-day priorities.
In fact, in many cases, patching is viewed more as a liability than a best practice. In areas like industrial control systems (ICS) and healthcare, the risk of unexpected results from patches, unpredictable downtime, or even forced system reboots can be enormous, and patching is avoided if at all possible. In many industries where equipment is supposed to be “built to last” for 20+ years, the use of out-of-date and un-patchable operating systems (such as Windows XP) is widespread, and these legacy, embedded applications are difficult or impossible to upgrade.
Frankly, it’s fair to question the premise that effective security should be dependent on constant patching. Despite decades of investment in security and patch management tools, the overall security situation seems to be getting worse – not better. Security based on best practices that routinely get ignored seems at best impractical, and at worst, delusional.
How Much Really Gets Patched?
According to the 2019 Verizon Data Breach Investigation Report, within 30 days of finding a new vulnerability the average enterprise will have patched fewer than 40% of the systems affected. Within 100 days, the average only goes up to about 75%. Effectively, this leaves a huge window of exposure, with a significant long tail that may never get patched. And these figures don’t account for vulnerabilities that have not yet been discovered or zero-day exploits that bypass security controls entirely.
These numbers also don’t reflect more complex environments with entangled dependencies between systems, where a patch to one system might cause significant ripples of disruption downstream. In the ICS industry, estimates of the average time to patch systems are around 120 days, although exact numbers are hard to find. These numbers are sobering for an industry that manages complex systems for critical infrastructure such as power plants – a growing target for cyberattacks.
Another disconnect is that most automated patching is focused on end-user devices, while business-critical servers often get left behind. As Gartner states, “organizations have had good success patching endpoints, but successfully patching servers and applications has been much more elusive.”
Who Wants to Rule the Iron Tail?
The Iron Tail may sound like a location in Game of Thrones, but it refers to a major challenge that many industries face, running a wide range of applications, connected to a long line of industrial controls that have been assembled over decades. The challenges of applying timely patches to this long iron tail of legacy apps can be daunting for a number of reasons:
- Many critical control systems require 100% uptime. Simply rebooting an app is problematic, especially if it’s connected to a nuclear power plant or electrical grid. Installing, validating, and testing system updates for unpredictable periods of time can be a non-starter.
- Security for older systems often depended on an “air gap” from the outside world. While security-by-isolation was easy in the ’70s, it’s less practical now. Today’s air-gapped systems can’t be automatically patched or receive virus signature updates, and even the most isolated system is usually only a desktop away from a connected, potentially malicious insider.
- Older apps often run on operating systems that are out-of-date or no longer get patched. Many critical functions run on platforms that may be 20 to 30 years old, and basic compatibility between modern 64-bit systems and older 32- or 16-bit applications can be very problematic.
- Legacy apps were often created by staff no longer there, using tools no longer supported. “If it ain’t broke,” there is a strong incentive not to touch older purpose-built applications. Just keep your fingers crossed and hope for the best…
The Race to Claim Victory over Malware
Whenever there is a major cybersecurity incident (about every week these days) the race begins for the security and software industries: name the malware (ideally with a cool, threatening-sounding name), create signatures, patch the newly discovered vulnerabilities, and push the patches out to customers as quickly as possible. At that point, security and software vendors like to pat themselves on the back and announce publicly how sophisticated their defenses are because “we caught this one…”.
But the reality is that most malware damage happens in the days or weeks before this public frenzy begins, and when a patch is finally released, it may take weeks or months – if it happens at all – before most customers implement it. As we saw with WannaCry, months after Microsoft had released a patch for its SMBv1 vulnerability, a shocking number of servers globally were unpatched and exposed.
Even more troubling is that the NSA had known about this vulnerability since at least 2013 (when the EternalBlue toolkit was put together to exploit it), and other nation-state attackers may have been exploiting it since 2001, when Windows XP was first released with the vulnerability.
Protecting Unpatched Systems in the Real World
Rather than continuing to focus on “best practices” that in reality are often avoided or viewed as a liability, it’s time to look for security solutions that accept the piecemeal nature of complex networks and legacy systems, but still apply effective security across the board. The holy grail for many security professionals is protection that can be applied to systems as they are – old or new, patched or unpatched. But for this to happen, there has to be a paradigm shift in security thinking.
For the past 25+ years, most security has been built around a perimeter mindset. The old security adage has been “keep the good stuff in, and keep the bad guys out”. The primary tools for this battle were gateway security devices, like firewalls (including IDS/IPS, next-gen firewalls, and web application firewalls), and growing lists of known vulnerabilities used for virus signatures and pattern matching to detect recurring malware. These gateway and list approaches may have eliminated repetitive, static threats, but they have not kept up with innovative and resourceful hackers who are continually devising new ways to elude conventional defenses.
The latest wave of fileless memory-based attacks is effectively invisible to conventional security controls. They manipulate legitimate application processes to corrupt memory, and hijack control over systems to steal or ransom data, or merely cause painful disruption. Even in a mythical world where all servers were immediately patched, this new class of threats would fly under the radar of most security tools.
Because it’s impossible to anticipate and prepare for the infinite number of unknown and future threats, and patching is always slow and reactive, a new approach is garnering interest, especially in the ICS space, where legacy systems are a fact of life. Rather than focusing on external threats, or holding together the disappearing network perimeter, a new class of security products monitors the run-time activity of applications, mapping correct application behavior and taking immediate action if the application goes off the rails.
Applications should be predictable. Whether it’s a legacy, purpose-built app or a modern interconnected system, the path an application takes follows predetermined programming. A good analogy is a Google map: if you are driving from Los Angeles to San Francisco, there are only a few acceptable, pre-determined routes. If you start heading toward Las Vegas or Mexico, something is seriously wrong, and your car has likely been hijacked.
This deterministic process has the advantage of limiting the scope of security and focusing on what matters – the application and associated data. It also accepts the fact that many applications won’t be patched with the latest security updates, and need to be protected as is. According to a white paper from security vendor Virsec, “this approach differs from legacy security solutions by focusing on application execution integrity – ensuring they run as designed by their original coding.”
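The deterministic idea above can be illustrated with a toy sketch. This is a hypothetical simplification for illustration only – not Virsec’s actual technology – in which legitimate application behavior is pre-mapped as a set of allowed state transitions, and any execution trace that steps outside the map is flagged immediately, regardless of whether the underlying system is patched:

```python
# Toy illustration of deterministic run-time monitoring (hypothetical,
# not any vendor's actual implementation): correct application behavior
# is mapped ahead of time as allowed state transitions, and any trace
# that deviates from the map is flagged as a possible hijack.

ALLOWED_TRANSITIONS = {
    "start":         {"read_config"},
    "read_config":   {"open_db"},
    "open_db":       {"serve_request"},
    "serve_request": {"serve_request", "shutdown"},
}

def first_deviation(trace):
    """Return the first off-map transition in a trace, or None if clean."""
    for prev, nxt in zip(trace, trace[1:]):
        if nxt not in ALLOWED_TRANSITIONS.get(prev, set()):
            return (prev, nxt)
    return None

# A normal run follows the pre-determined route end to end.
normal = ["start", "read_config", "open_db", "serve_request", "shutdown"]

# A hijacked run jumps somewhere the program was never meant to go.
hijacked = ["start", "read_config", "spawn_shell"]

print(first_deviation(normal))    # None
print(first_deviation(hijacked))  # ('read_config', 'spawn_shell')
```

Note that the monitor needs no signature for "spawn_shell" and no knowledge of the exploit that caused it; anything outside the map is treated as wrong, which is why this model works equally well for unpatched legacy systems.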
Regardless of the specific approach, it’s clear that cybersecurity needs to be pragmatic to be effective. The current over-dependency on patching as the security panacea will continue to fail because it ignores the challenges that legitimately hinder timely updates to legacy systems. Until we can shift to a new mindset and secure applications as they really exist, the hackers will continue to stay ahead, find holes and wreak havoc.
About the Author
Willy Leichter, Vice President of Marketing, Virsec. Willy Leichter has over twenty years of experience helping global enterprises meet emerging cybersecurity and compliance challenges.
With extensive experience across multiple IT domains including threat prevention, cloud security, global data privacy laws, data loss prevention, and email security, he is a frequent speaker at industry events and author on IT security and compliance issues, including the Global Guide to Data Protection Laws.
A graduate of Stanford University, he has held leadership positions in the US and Europe, at CipherCloud, Axway, Websense, Tumbleweed Communications, and Secure Computing (now McAfee/Intel).