Zero-day exploits have made headline news over the past two years, attracting newfound attention from regulators and increasing pressure on software manufacturers and security leaders. The most recent example is the set of Log4j vulnerabilities. But zero-day attacks have persisted for years, most notably NotPetya and SolarWinds among countless others, and given their success rates, they will continue.
In the case of Log4j, within the first 12 hours over 40,000 attacks were reported worldwide, rising to 830,000 after three days. In North America, 46% of networks were impacted and the global average reached almost 49%. Companies are still trying to determine the full extent of their data loss, but we know the impact of the attack is widespread and may take years to address. Clearly, traditional approaches to cybersecurity are flawed. But new models are emerging that protect critical application workloads from the inside.
I had the opportunity to sit down with Buck Bell, EVP, Technology Integration, Focal Point (now a CDW company) to discuss recent high-profile exploits and explore how automated runtime detection could have reduced or eliminated impacts from the vulnerabilities. Here are just a few of the topics we covered in our recent webinar, “Milliseconds Matter: Defending Against the Next Zero-Day Exploit.”
Common zero-day exploit responses
At a basic level, with zero-day vulnerabilities, the attacker finds a vulnerability, prepares an exploit, and executes an attack, putting defenders back on their heels. Traditional controls have been bypassed, so defenders are reacting to a vulnerability they didn't know existed and an exploit they don't know how to respond to. Once there's enough information available to help defend against it, they try to understand how to adjust their control framework. It can take days to detect the initial entry point and even longer to determine whether there has been any lateral movement across network assets.
Patching helps as an initial response, but patching, whether proactive or after the fact, can still leave exploits resident. As dwell time rises, so do opportunities for attackers to install malware or exfiltrate data. As we saw with Log4j, variants proliferate quickly (45 within the first 72 hours), and sometimes patches need patches, making it virtually impossible to keep up. Patching is part of the solution, but it doesn't let you get ahead of the game entirely.
A silver lining of the surge in cyber threats is that CISOs have greater support from the board. We're seeing the following priorities rise to the top of their lists and show up in increased security spending:
- Increasing adoption of Zero Trust, where the intent is to limit trust and put mechanisms in place to always know who is trying to access what. Constant reauthorization and reauthentication are effective, but a Zero Trust architecture takes a while to implement.
- Training to respond to unexpected security events helps users and staff understand how to recognize an attack and the processes to respond. But each attack is slightly different, so training isn’t foolproof and needs continuous updates.
- Security outsourcing is gaining momentum as many organizations have realized they lack the internal expertise to protect their data. Outsourcing may provide an easier, faster, and perhaps lower-cost approach if they can consolidate tools, but the need for the right tools still remains.
- Cyber insurance is also a factor as boards and C-suites consider new technologies and processes. Cyber insurance providers have already imposed greater requirements, such as multi-factor authentication (MFA) for VPNs. We can expect further business pressure in the form of additional vulnerability management requirements.
Defense-in-depth is ingrained in all of us because we know there is no silver bullet solution. However, conventional approaches to zero-day attacks are labor-intensive and applied after the fact. The more sensitive the event is, the more resources we pour into the effort. So, operational costs continue to escalate for incident response.
Most security vendors aim to reduce the time between detection and mitigation to hours or minutes. But it’s increasingly clear that milliseconds matter. Any elapsed time creates significant risk of compromise and data exfiltration for your organization. Ultimately, what companies must achieve is data protection that can stop attacks in their tracks before damage is done.
How to break the cycle of scan-and-patch to protect against the next Log4j
Now we can apply a deterministic approach that breaks this reactive cycle. Deterministic protection combines application awareness with runtime protection to understand the boundaries an application is intended to operate within. Any attempt to break out of those boundaries indicates malicious intent and threat actor activity, so the solution automatically triggers protective actions. Because a deterministic control operates at runtime, it can see a tremendous amount of detail, including attempts to conduct reconnaissance, drop malware, or exfiltrate data, so it can defend against zero days and other dangerous events with precision. Deterministic protection stops an attack in progress, preventing damage and access to sensitive data at the moment of execution, and that's a game-changer.
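The core idea can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: the application's expected operations (file paths, network targets) are mapped ahead of time, and anything outside that map is blocked at execution time rather than flagged after the fact. The operation names and targets below are invented for the example.

```python
# Hypothetical sketch of deterministic runtime protection:
# allowed behavior is predetermined; any deviation is treated
# as an exploit in progress and blocked immediately.

ALLOWED_OPERATIONS = {
    ("read", "/var/app/config.yml"),      # example expected file read
    ("write", "/var/app/logs/app.log"),   # example expected log write
    ("connect", "db.internal:5432"),      # example expected DB connection
}

def block_and_alert(operation: str, target: str) -> None:
    """Protective action: block the operation and raise an alert."""
    print(f"BLOCKED: unexpected {operation} on {target}")

def guard(operation: str, target: str) -> bool:
    """Permit only operations inside the application's known boundaries."""
    if (operation, target) in ALLOWED_OPERATIONS:
        return True
    # Outside the boundary map: indicative of malicious intent.
    block_and_alert(operation, target)
    return False
```

In this model there is nothing to "detect" in the traditional sense: a reverse shell to an attacker-controlled host simply is not in the map, so `guard("connect", "attacker.example:4444")` fails regardless of whether the underlying vulnerability was previously known.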
Shifting from after-the-fact effort to true application-aware workload protection at runtime also changes the economics of how we apply valuable resources. Instead of chasing threats, expert analysts can spend more time on higher-value, proactive activities. It is also an efficient way to enforce a Zero Trust model for workloads by only allowing authorized code to run. And it gives teams air cover and time to patch without burning out staff and with minimal business disruption.
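One common way to enforce "only authorized code runs" is an allowlist of cryptographic hashes recorded at deployment time. The sketch below is a simplified, hypothetical illustration of that pattern, not a description of any specific product:

```python
import hashlib

# Hypothetical allowlist: SHA-256 digests of binaries authorized to execute,
# populated when the workload is provisioned.
AUTHORIZED_HASHES: set[str] = set()

def _sha256(path: str) -> str:
    """Hash a file's contents in chunks to avoid loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def authorize(path: str) -> None:
    """Record a known-good binary's hash at deployment time."""
    AUTHORIZED_HASHES.add(_sha256(path))

def may_execute(path: str) -> bool:
    """At runtime, permit execution only if the file matches a recorded hash."""
    return _sha256(path) in AUTHORIZED_HASHES
```

Because the check is against file contents rather than filenames, a tampered or attacker-dropped binary fails the check even if it replaces an authorized file in place.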
Deterministic protection is a backstop for all other controls and can even take the place of some solutions in protecting against unknown vulnerabilities. It fundamentally improves your security posture and cost model over time, and it is the paradigm every organization should consider.