The (Stick) Shift Left Route To Drift Control Secured Software

Left comes first. If we happen to live in the left-to-right script universe as opposed to the right-to-left world of Arabic, Hebrew and Persian - or the top-to-bottom world of Far Eastern languages - then we normally associate leftward items with coming first. Coined as an industry label back at the turn of the millennium, the term shift-left testing is now used in computer science to denote projects that start their debugging, configuration and defect-checking procedures earlier in the development timeline - over on the left side of the page.

But shift left has been around for almost a quarter of a century, since Larry Smith coined the concept writing in Dr Dobb’s Journal, so shouldn’t we have shifted by now?

The truth is that in many areas of software application development, we understand the need to shift left, but we’re perhaps too reliant on existing procedures, too rooted in brittle methodological practices that are tough to change and - in the new world of cloud - too quick to embrace automation shortcuts that may produce inefficient code at best, or insecure code at worst.

We should know this by now i.e. secure-by-design principles are not new and the DevOps movement has allowed security into the middle of its sandwich ever since the arrival of DevSecOps. The US Cybersecurity & Infrastructure Security Agency (CISA) defines ‘secure-by-design’ as, “[Building] technology products… in a way that reasonably protects against malicious cyber actors successfully gaining access to devices, data and connected infrastructure.” This kind of thing should be tech table stakes, surely?

“As with any other approach to secure software system design, shifting left in the software development lifecycle is only one piece of the puzzle,” advises Anna Belak, director, office of cybersecurity strategy at Sysdig. “There are some nuanced and interesting examples of secure-by-design wherein the architecture of the application itself can determine which security controls will or will not be necessary to ensure healthy operation. For developers, these decisions are ideally made at the start through threat modeling that’s promoted as part of secure design principles. However, this is often a luxury for most organizations because of the time and expertise that’s needed to generate accurate threat models.”

Stick shift left

Belak’s comment on shift left in the world of secure software automation has almost too many parallels to automobiles to count i.e. it’s not so much a case of shifting and turning earlier when the car’s transmission tells you it’s ready; it’s more a case of manual stick shift controls being needed, where software engineers take a more positive steer on direction in many cases.

Part of the challenge here is that technical teams on both sides have historically been bad at communicating context from Dev/DevOps to SecOps, because they may not know what information to communicate or to whom. Talking about security effectively can remove some of the biggest friction points in how things work over time and make life easier for software application development engineers overall.

Driving with drift control

The automotive analogies continue in this story because software developers can apply security controls that work amazingly well - or disastrously poorly, depending on the application context - through what is known as drift control.

“Drift control relies on the assumption that cloud-native workloads are immutable - that is, they do not change during runtime. Thus, if a workload ‘drifts’ from its initial state (and thus its declarative configuration), it is misbehaving in a potentially malicious manner. However, even containers can change over time when other elements - like the applications hosted within them or the operating model around them - are not yet fully immutable. Implementing drift control results in wildly different outcomes depending on how cloud-native an organisation’s application architectures are,” explained Belak.
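
To make the idea concrete, here is a minimal sketch of the drift-control premise Belak describes - a purely illustrative Python example, not Sysdig’s product or API. The workload structure, field names and example binaries are assumptions; the point is simply that an immutable workload’s runtime state should match the baseline it was deployed from, so anything new that appears counts as drift.

```python
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    baseline_executables: set   # captured from the declared image at deploy time
    observed_executables: set   # reported by runtime instrumentation


def detect_drift(workload: Workload) -> set:
    """Return any executables seen at runtime that were not in the baseline."""
    return workload.observed_executables - workload.baseline_executables


# Illustrative workload: the stray binary in /tmp is the kind of change
# drift control treats as potentially malicious on an immutable workload.
checkout = Workload(
    name="checkout-service",
    baseline_executables={"/usr/local/bin/python3", "/app/server.py"},
    observed_executables={"/usr/local/bin/python3", "/app/server.py", "/tmp/miner"},
)

drifted = detect_drift(checkout)
if drifted:
    print(f"{checkout.name} drifted: {sorted(drifted)}")  # candidate for alert or block
```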

Sysdig is known for its cloud-focused monitoring, security, cost-optimization and alerting suite that promises to provide deep, process-level visibility into dynamic, distributed production environments. As such, Belak and the team offer two drift control scenarios as illustrative examples.

The legacy application scenario

In scenario #1, we have a legacy application that was not designed for a cloud-native existence and does not align with what we can call distributed, immutable, ephemeral (DIE) principles. Because this application expects to be long-running, with frequent management interaction from administrators such as upgrades and patches, we don’t necessarily want drift control on. If the security operations team turns on drift control, it drowns in a sea of irrelevant and useless alerts because, in this case, drift behavior is normal behavior. Realistically, this would never be a recommendation because traditional infrastructure is highly mutable.

“In this environment, the security team is applying an inappropriate control for the target environment. The application is not cloud-native, but the security control assumes that it is. This mismatch results in a very bad experience for the security operations center (SOC), which is now faced with a deluge of useless alerts. In this case, drift control should simply not be used or be used in some very narrow scope. Instead, using systems management or infrastructure automation tools can deliver the necessary results that the team wants to achieve,” explained Belak.

Distributed, immutable & ephemeral

In scenario #2, we do have an application built for the distributed, immutable, ephemeral (DIE) universe. It is most likely using orchestrated containers and everything-as-code deployment and management strategies. The security operations team turns on drift control in blocking mode. All workloads that exhibit runtime drift are destroyed and redeployed from clean manifests. The SOC wastes 9% less time looking at irrelevant alerts. At the same time, you do have to know why the drift is taking place, so that you can stop it from happening again.

“In this scenario, the application development team has actually removed a whole category of security concerns and threat detection requirements from the SOC’s plate with their choice of software architecture. This is a manifestation of the cloud-native dream and what ‘secure-by-design’ always intended. The security control is in place and functioning correctly by reducing the burden on both the SOC (which isn’t seeing unnecessary alerts) and the development team (which is not being asked to revise work they believe is already completed). Of course, in the real world, few organizations are 100% cloud-native,” clarified Belak.
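
A rough sketch of what “blocking mode” in this second scenario could look like follows, reusing the detect-drift idea from the earlier sketch. The Orchestrator class and its methods are hypothetical stand-ins rather than a real Kubernetes or Sysdig interface; the only point is that a drifted workload gets recorded, destroyed and recreated from its clean, declared manifest.

```python
class Orchestrator:
    """Hypothetical stand-in for whatever redeploys workloads from code."""

    def capture_forensics(self, workload, drifted):
        print(f"recording why {workload.name} drifted: {sorted(drifted)}")

    def delete(self, workload):
        print(f"destroying drifted instance of {workload.name}")

    def redeploy_from_manifest(self, workload):
        print(f"redeploying {workload.name} from its clean manifest")


def enforce_blocking_mode(workload, orchestrator, detect_drift):
    """If a workload has drifted, capture evidence, destroy it and redeploy it."""
    drifted = detect_drift(workload)
    if not drifted:
        return "healthy"
    # Understand the cause before discarding the evidence, so the same
    # drift doesn't simply reappear on the next deployment.
    orchestrator.capture_forensics(workload, drifted)
    orchestrator.delete(workload)
    orchestrator.redeploy_from_manifest(workload)
    return "remediated"


# Example (using the checkout workload and detect_drift from the earlier sketch):
# enforce_blocking_mode(checkout, Orchestrator(), detect_drift)
```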

Driving in both lanes

There is a third scenario to consider where an application is undergoing refactoring or expansion and has both cloud-native and legacy components. The same logic applies to monitoring a collection of applications, some of which are cloud-native and some of which are legacy. The security operations team turns on drift control and has no idea which signals are worth investigating and which are not. Again, because traditional infrastructure is mutable, drift control on the legacy portion leads to too many alerts and wasted time for both security and developers.

This is the most common scenario that security and developer teams face, where both types of architecture are present in an environment. This is exactly why the only way forward is for the relevant teams to communicate with each other about the parameters of what they are deploying. Security controls can then be tailored - with stick shift manual control and management - to different scopes to optimize for true positives and reduce wasted effort. For example, a hybrid application can have drift prevention enabled on the components that are known to be immutable, drift detection (alert, but don’t block - an approach apparently taken by around 25% of Sysdig customers) enabled on components that are in the process of being redesigned, and no drift control at all on legacy elements that are expected to drift.
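
One hedged way to picture that per-component scoping is sketched below - the component names and the simple three-way policy are invented for illustration and do not come from Sysdig’s configuration format.

```python
# Hypothetical per-component drift policy for a hybrid estate:
#   "prevent" - block and redeploy on drift (known-immutable, cloud-native parts)
#   "detect"  - alert on drift but don't block (components mid-refactor)
#   "off"     - no drift control (legacy elements expected to drift)
DRIFT_POLICY = {
    "payments-api": "prevent",
    "reporting-jobs": "detect",
    "legacy-erp": "off",
}


def drift_mode(component: str) -> str:
    # Default unknown scopes to detect-only until the development team clarifies them.
    return DRIFT_POLICY.get(component, "detect")


for name in ("payments-api", "legacy-erp", "new-ml-service"):
    print(name, "->", drift_mode(name))
```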

However, the team imposing the security monitoring cannot possibly know how to define those scopes without clear input from the development and DevOps teams.

“Whatever deployment approach exists in practice, developers often make decisions that affect how secure their applications are over time without even realizing it. By thinking about this at the beginning and by being equipped with tooling that provides them with appropriate security expertise to make informed risk decisions, developers can take out some of the most common problems and save themselves - and their security colleagues - a lot of time and effort,” said Belak, bringing the whole discussion back to the parking bay, hopefully.

Software delivery has to include ongoing and clear communication about applications’ expected behavior. This enables security teams to set up controls around those applications, understand what the software should be doing at all times and automate the process of spotting any deviations in that behavior. Communication enables the appropriate tuning of those controls, so both developers and security professionals can achieve their goals over time.

There’s stick shift gearing here for human intervention and there’s drift control too; we just need to make sure we don’t run out of gas.
