Securing the Edge: Zero Trust & Least Functionality

By John Walsh | SVP Business Development & Strategy

Today’s sophisticated nation-state attacks are exploiting a growing attack surface, with expanded focus on the embedded stack and semiconductors. Combined with the limits of traditional systems’ ability to identify, detect, mitigate, and restore, it’s time to focus on a crucial, emerging, and complementary capability. Prevention is the new normal.

In a recent panel at the U.S. Department of Energy (DOE) Conference in Portland, I recall a CISO emphasizing that “ideally one needs to identify and detect in one minute, mitigate in 10, and restore in one.” Adversaries are increasingly pivoting toward attacks that blend in with legitimate traffic through highly advanced Tactics, Techniques, and Procedures (TTPs) in legacy OT environments that lack the resources or historical data to support robust monitoring and detection. These attacks take full advantage of privileges, certificates, kernel root, and other footholds obtained in undetected ways. Furthermore, through these sophisticated attacks, bad actors are able to cover their tracks. As a result, many attacks operate undetected within a system for 45-90 days before they are identified.

Another challenge is that most organizations do not have Security Operations Centers (SOCs) or well-trained incident response teams. For perspective, I have experience developing a Nation State Cyber Range teamed with former TSC, working with one of the 12 Department of Defense (DoD)-approved penetration testing (PenTest) teams in the country, and providing SOC training at the nation-state level. My takeaway is that this is a huge expense and commitment that most utilities and other organizations have not made.

My point: a goal of one minute to identify and detect, ten minutes to mitigate, and one minute to restore is out of reach for most organizations and for current state-of-the-art security architectures. While zero trust is a step in the right direction, it still relies on many of the capabilities and precepts of a traditional monitoring-and-detection construct. And we tend not to implement autonomous response because of concerns that false positives will lead to loss of availability.

If we follow the NIST Cybersecurity Framework, my conclusion is that we must make the architecture inherently secure by adding PREVENT to Identify, Protect, Detect, Respond, and Recover. This is especially true at the edge, where devices are resource-constrained and where systems and sensors lack adequate history or data collection on next-generation zero-day attacks. Tools such as artificial intelligence (AI) and machine learning (ML) are also less effective in these environments because they, too, rely on detection and response – especially given the proliferation of connectivity, the ability to spoof data, emerging TTPs, and so forth.

Being able to prevent the attack from executing eases the burden on the IT team by shrinking the attack surface. Implementing least functionality minimizes the risk of false positives by using systematic methods to overlay system functionality, stripping applications and operating systems down to only the binaries needed for their specific function.

BedRock Systems’ formally proven trusted virtualization with Active Security™ has demonstrated Least Functionality as a method to prevent proliferating attack vectors and TTPs in ways that other approaches cannot. The BedRock platform extends semiconductor bare-metal properties in a capabilities-based model that dedicates and isolates only the resources required for each virtual machine (VM). Furthermore, a Layer 2 virtual switch enables composability to provide fine-grained segmentation and segregation, and a policy wall at the edge substantially reduces the attack surface, inherently preventing attacks from executing. Integrating least functionality with a zero-trust model of “Deny All – Allow by Exception” provides many degrees of freedom for implementing policies to counter threats and strip out functionality commonly used by adversaries, without the cost of rewriting applications and operating systems.
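BedRock’s policy wall is proprietary, but the “Deny All – Allow by Exception” pattern itself can be sketched generically. The sketch below is an assumption-laden illustration, not BedRock’s implementation: the `Flow` fields, segment names, and rules are all hypothetical, chosen to show default-deny with explicitly enumerated exceptions.

```python
# Minimal sketch of a "Deny All - Allow by Exception" policy wall.
# Rule structure, segment names, and ports are illustrative only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src: str      # source segment
    dst: str      # destination segment
    port: int     # destination port

# Exceptions are enumerated explicitly; everything else is denied.
ALLOW_RULES = {
    Flow(src="hmi", dst="plc", port=502),   # e.g., Modbus from HMI to PLC
    Flow(src="eng", dst="plc", port=22),    # e.g., SSH from engineering station
}

def permit(flow: Flow) -> bool:
    """Default deny: a flow passes only if it matches an explicit exception."""
    return flow in ALLOW_RULES

print(permit(Flow("hmi", "plc", 502)))   # True  (allowed by exception)
print(permit(Flow("hmi", "plc", 80)))    # False (denied by default)
```

The key design point is that the deny branch requires no rule at all: anything not matched by an exception simply never passes, so adversary tooling that relies on unanticipated paths has nothing to exploit.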

Further, digital twin and network modeling/simulation, used in conjunction with a robust cyber range to insert attack vectors, provide assurance and reduce system risk by demonstrating system resilience and security policies against attack vectors at scale. This capability also enables verification and validation of patches and updates, minimizing availability risk prior to deployment.

BedRock Systems recently demonstrated the ability to secure an existing legacy system architecture using trusted virtualization to provide segmentation/segregation and isolation of certain network assets, in combination with stripping the Ubuntu Linux OS from approximately 4,300 executable binaries down to 65 and implementing a “Deny All – Allow by Exception” policy. This approach demonstrated kernel and root integrity (denying specific zero-day attack vectors) as well as blocking the TTPs of several well-known ransomware, rootkit, web-server, and other attacks.
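A first step toward that kind of stripping is simply auditing which executables are present versus which the workload actually needs. The sketch below is a toy illustration of that audit under stated assumptions: it builds a throwaway directory (so the counts are deterministic) and the `ALLOWLIST` is hypothetical; a real reduction of an Ubuntu image from ~4,300 binaries to 65 is far more involved.

```python
# Rough sketch of a least-functionality audit: enumerate executable
# files under a path and diff them against an allowlist of binaries
# the workload actually needs. Names below are illustrative.

import os
import stat
import tempfile

def executables(root: str) -> set[str]:
    """Return names of files under root with any execute bit set."""
    found = set()
    for dirpath, _, files in os.walk(root):
        for name in files:
            mode = os.stat(os.path.join(dirpath, name)).st_mode
            if mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
                found.add(name)
    return found

# Demo on a throwaway directory so the result is deterministic.
root = tempfile.mkdtemp()
for name in ("sh", "cat", "gcc"):
    path = os.path.join(root, name)
    open(path, "w").close()
    os.chmod(path, 0o755)          # mark each demo file executable

ALLOWLIST = {"sh", "cat"}          # binaries the workload actually needs
extra = executables(root) - ALLOWLIST
print(sorted(extra))               # ['gcc'] -- candidate for removal
```

Everything reported in `extra` is a candidate for removal, which is the spirit of least functionality: a compiler like `gcc` left on a production OT host is pure attack surface.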

Given the state of most industries and the proliferating threat, accomplishing the objective stated at the DOE conference will require the capabilities described above. This is especially true because the majority of Identify/Detect architectures are passive, cannot detect these sophisticated attack vectors, and require an incident response team with proven TTPs to mitigate and restore without introducing loss of availability. It’s better to PREVENT with high assurance!

 
