Safety first, always. In the world of enterprise software application development and systems management, prudent organizations put user safety and data security at the forefront of every design, development and deployment decision they make. In a software universe beleaguered by the specter of ransomware and nation-state bad actors (and in a world where even your grandparents know not to click on money scam emails from fabricated foreign princes), we all appreciate the importance of keeping our apps and electronic services safe.

But security sits inside a paradox. Whether we're talking about airport security, event security guards who check IDs, upmarket designer fashion stores with their imposing front-door greeters or straight-up computer security, all safety-checking mechanisms slow things down. That friction, paradoxically, is a sign that they're working really well.

The Preparedness Paradox

Juliette Kayyem, CNN national security analyst and former assistant secretary at the Department of Homeland Security, calls this the preparedness paradox.

“If we know bad things are going to happen, when we talk to company leaders, we have to go through the hard sell of convincing them they need to be ready. But because security is ‘friction in the system’ (i.e. it slows people, things and processes down), people don’t always want to do it. But equally, the better we do it (as security practitioners) the safer we manage to make things, but that also presents a challenge. That is to say, leadership may well start to think that the security function itself isn’t needed because things are so safe. In reality, there’s no such thing as safe, so what matters is failing safer,” said Kayyem, speaking at an event hosted by Qualys, a provider of disruptive cloud-based IT, security and compliance solutions.

In his role as CEO and president of Qualys, Sumedh Thakar used Kayyem's entertaining take on security as a leverage point to illustrate why his company continues to engage in technology innovation to de-risk modern enterprise IT stacks. Discussing how businesses should manage risk today, Thakar says that organizations need to start talking about risk in a business context.

Risk In A Business Context

“What risk in the context of business operations means is talking about system robustness in terms of actual business units and the material impact possible upon those business functions if assets in any particular division are impacted. But if the organization doesn’t know enough about its assets in the first place, then it fails at the first step i.e. even before it starts to analyze risk,” said Thakar. “There’s a lot to think about here. For example, some technologies may be coming to end-of-life status, so these assets should be ranked lower in the total picture of risk management if we want to be able to correlate those into a clear picture of the total state of the business. This is required if we are going to layer our alerts with threat intelligence so that we can contextualize and prioritize.”

Even when high-severity vulnerabilities do exist, Thakar reminds us that as few as 20% of those weak points could or would actually be hit by exploits, simply because working exploits don't currently exist for the rest. If an organization spends a large amount of time remediating vulnerabilities that may never be exploited, that's inefficient risk management practice from the start.
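To make the idea concrete, here is a minimal Python sketch of exploit-aware prioritization. The vulnerability records, scores and the known-exploit set are invented placeholders for whatever scanner output and threat-intelligence feed an organization actually has; none of this is Qualys code.

```python
# A minimal sketch of exploit-aware prioritization. All records and the
# "known exploited" feed below are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    severity: float  # e.g. a CVSS-style base score, 0-10
    asset: str       # the business asset the finding is attached to

# Pretend threat-intel feed: CVEs with exploits observed in the wild.
KNOWN_EXPLOITED = {"CVE-2024-0001", "CVE-2024-0042"}

def prioritize(findings: list[Vulnerability]) -> list[Vulnerability]:
    """Surface exploitable findings first, then sort by raw severity."""
    return sorted(
        findings,
        key=lambda v: (v.cve_id in KNOWN_EXPLOITED, v.severity),
        reverse=True,
    )

findings = [
    Vulnerability("CVE-2024-0001", 6.5, "payments-api"),
    Vulnerability("CVE-2024-9999", 9.8, "internal-wiki"),   # no known exploit
    Vulnerability("CVE-2024-0042", 7.2, "customer-portal"),
]

# Note the 9.8 with no known exploit ranks below the exploitable 6.5,
# which is exactly the 20% point Thakar is making.
for v in prioritize(findings):
    print(v.cve_id, v.severity, v.asset)
```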

No Whack-A-Mole

“We also need to consider the operational risk (and economic cost) that comes with fixing a security risk i.e. if an organization needs to deploy 1000 patches to completely clean its shop up, that’s almost inevitably overload. If we can reduce this number to hundreds, or a figure in the tens, then this becomes a more achievable goal,” said Thakar. “The challenge we also have (this last year or so especially) is the rise of AI i.e. if the security teams come in to see the board, they may well find that they are deprioritized in favor of some glossy new ideas that stem from suggestions being made by the AI team. This really just means that the security team needs to really know where the ‘noise’ is and be able to understand what the real risk factors are out there. Not every vulnerability is an exposure, not every cloud misconfiguration is an exposure, not every architectural disconnect is an exposure… so being able to assess risk in a true business context is really important if we don’t want to operate risk management like it’s some whack-a-mole game.”
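Thakar's patch arithmetic can be illustrated in a few lines: when one patch retires several findings, ranking patches by the total risk they retire is what shrinks a thousand-item list to a manageable one. The patch IDs and risk scores below are invented for illustration only.

```python
# A sketch of patch consolidation: rank patches by the total risk
# they retire, so teams ship tens of patches instead of a thousand.
from collections import defaultdict

findings = [
    {"cve": "CVE-2024-0001", "patch": "KB-100", "risk": 90},
    {"cve": "CVE-2024-0002", "patch": "KB-100", "risk": 70},  # same patch fixes both
    {"cve": "CVE-2024-0003", "patch": "KB-200", "risk": 40},
]

risk_retired = defaultdict(int)
for f in findings:
    risk_retired[f["patch"]] += f["risk"]

# Ship the patches that retire the most risk first.
for patch, total in sorted(risk_retired.items(), key=lambda kv: kv[1], reverse=True):
    print(patch, total)  # KB-100 retires 160 risk points, KB-200 just 40
```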

As we would expect from an organization of its size (and one celebrating its 25th anniversary this year), Qualys has underpinned these thoughts with product developments that it hopes will validate the technology proposition and address the business predicaments it describes. The company this month launched its Risk Operations Center (known as ROC) with its branded Enterprise TruRisk Management technology.

The solution is designed to enable chief information security officers and indeed business managers to cope with cybersecurity risks in real time. Positioned as a cyber risk management framework (i.e. this is not just some mega anti-virus tool), Qualys ROC is built to consolidate both Qualys and non-Qualys security risk data, including from technology alliances like Forescout, Okta and Oracle, across cloud, on-premises and hybrid environments.

To overcome the challenges of modern business, Thakar and team say that organizations need an integrated approach that combines heterogeneous risk factors from various asset management tools and disparate cybersecurity solutions into a single platform with remediation and mitigation capabilities to reduce risk quickly. That is why Qualys is launching ROC with Enterprise TruRisk Management, designed to unify asset inventory and risk factors, apply threat intelligence, business context and risk prioritization, and orchestrate remediation, compliance and reporting through a single interface.

A Validation For ROC

Backing up CEO Thakar’s comments this month was Richard Seiersen in his role as chief risk technology officer at Qualys.

“The need for the Risk Operations Center has come about because enterprise organizations have too many risk and security tools in place, typically spread across departmental silos and in between their various divisions. For many organizations of any reasonable size it might be around fifty to seventy; for Global 2000-class companies it is in the hundreds… and the problem is that just a small number of people inside the organization need to try and interpret what all those tools mean,” said Seiersen.

“If any given risk alert related to a particular vulnerability can lead to the creation of a Jira ticket [ticketing software used for service management tasks so that software developers know what to work on] for the platform engineering team to process, then we need to ensure that only high-value work is carried out,” advised Seiersen, before explaining that the risk data in hand has to be normalized so that it is genuinely valuable.
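As a rough sketch of the gate Seiersen describes, consider filtering alerts against a risk threshold before any ticket is raised. The threshold, the alert shape and the create_ticket() stub are illustrative assumptions rather than a real Jira or Qualys integration.

```python
# Only alerts that clear a risk threshold become tickets; the rest stay
# in the risk register instead of flooding the engineering backlog.
RISK_THRESHOLD = 80  # on a normalized 0-100 scale (an assumed convention)

def create_ticket(alert: dict) -> None:
    # Stand-in for a real call to a ticketing system such as Jira;
    # in practice this would go through the tracker's REST API.
    print(f"ticket raised: {alert['title']} (score {alert['score']})")

alerts = [
    {"title": "RCE on payments gateway", "score": 95},
    {"title": "Weak TLS cipher on test box", "score": 31},
    {"title": "Exposed admin console", "score": 88},
]

for alert in alerts:
    if alert["score"] >= RISK_THRESHOLD:
        create_ticket(alert)  # high-value work only
```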

What Is Data Normalization?

When he talks about data normalization in this sense, it is a process that has to happen because many companies might: a) have three copies of the same information datasets, so de-duplication needs to happen; b) have data repositories with unstructured data that needs to be classified and brought in line with risk analysis efforts; or c) have risk data that stems from different scoring systems (i.e. where 4 might be good on a 1-5 scale, it’s worryingly low on a 1-100 scale), so a standardized scale needs to be established as part of the normalization process… and this happens in Qualys ROC.
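A toy example can show two of those steps, de-duplication and rescaling onto a common scale. The field names and the 0-100 target scale are assumptions made for illustration, not the scheme Qualys actually uses.

```python
# De-duplicate repeated findings, then map scores from their native
# scales onto one 0-100 scale so they can be compared directly.
def rescale(score: float, lo: float, hi: float) -> float:
    """Map a score from its native [lo, hi] scale onto 0-100."""
    return (score - lo) / (hi - lo) * 100

raw_findings = [
    {"id": "F-1", "tool": "scanner-a", "score": 4, "scale": (1, 5)},
    {"id": "F-1", "tool": "scanner-a", "score": 4, "scale": (1, 5)},   # duplicate copy
    {"id": "F-2", "tool": "scanner-b", "score": 4, "scale": (1, 100)}, # same number, different meaning
]

seen: set[str] = set()
normalized = []
for f in raw_findings:
    if f["id"] in seen:  # step (a): drop duplicate copies
        continue
    seen.add(f["id"])
    lo, hi = f["scale"]
    normalized.append({**f, "score": rescale(f["score"], lo, hi)})

for f in normalized:
    print(f["id"], round(f["score"], 1))  # F-1 -> 75.0, F-2 -> 3.0
```

The same raw number 4 lands at opposite ends of the unified scale, which is exactly why a common scale has to be established before any cross-tool prioritization makes sense.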

Crucially here, all this risk data analysis has to be applied at a unified level. What this means is that Qualys vulnerability data must be able to dovetail with (for example) business information from an enterprise resource planning system (SAP, IFS and so on) or enterprise asset management data (Qualys actually has a direct integration with ServiceNow at this point, but other platforms obviously exist as well), as well as connections with data warehouse platforms (Snowflake would be a good example) and beyond.
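In code, that dovetailing is essentially a join on the asset identifier. The sketch below invents both the findings and the business-context records; a real integration would pull the latter from an ERP or asset management system such as the SAP or ServiceNow platforms mentioned above.

```python
# Join per-asset vulnerability findings with business context so the
# technical score can be weighted by what the asset means to the business.
vuln_findings = [
    {"asset": "erp-prod-01", "cve": "CVE-2024-0001", "score": 88},
    {"asset": "wiki-01",     "cve": "CVE-2024-9999", "score": 91},
]

# Business context as it might arrive from an ERP/CMDB integration.
business_context = {
    "erp-prod-01": {"unit": "Finance", "revenue_critical": True},
    "wiki-01":     {"unit": "Internal IT", "revenue_critical": False},
}

for f in vuln_findings:
    ctx = business_context.get(f["asset"], {})
    # Weight the technical score by whether the asset carries revenue
    # (the 1.5/0.5 multipliers are arbitrary illustrative choices).
    business_risk = f["score"] * (1.5 if ctx.get("revenue_critical") else 0.5)
    print(f["asset"], ctx.get("unit", "unknown"), round(business_risk, 1))
```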

What Is A Vulnerability, Really?

A known (but not necessarily critical) vulnerability could be a retail app that makes a connection to a maps service that is not wholly secured. That's a serviceable example, but we can be more specific. Think of a software engineering design decision that allows two separate and distinct software/data services to make a connection without having to pass through a firewall or guardrail of any kind. These two services could exchange information over that connection while no known exploit for the channel exists; that makes it a less-than-optimal software design decision and a known weakness, but it does not make it a critical vulnerability. Knowing the difference between the two, and applying business context to risk analysis and de-risking, is why Qualys ROC enables enterprises to keep increasing business 'flow' (transactional flow, operational flow and developer productivity flow) without increasing risk.

Alex Kreilein, vice president of product security at Qualys, has helped further define this somewhat tautological-sounding enterprise technology occurrence, one we might even call a modern phenomenon of connected systems.

“As a community of practitioners, we understand common vulnerabilities and exposures (CVEs). To an often lesser extent, we understand common weakness enumeration (CWE) occurrences (frailties that cover both hardware and software) as well. But design-related vulnerabilities are a different type altogether and are often more resource-intensive to address than others. Scenarios like this are supportive of that point,” explained Kreilein.

“A software ‘product’ is composed of multiple applications, workloads and assets, which often share a trust boundary. Those applications sometimes rely on shared session tokens [an authentication control]. Let’s say that one day, an application begins to connect to a new application programming interface with weak identity and access management, possibly allowing attackers to manipulate the API and access restricted functions or sensitive data. This trust boundary violation allows attackers to pivot within the boundary to access other applications, compromise services and/or exfiltrate data. A mature fix requires a fundamental change in the identity controls of the token to ensure the delivery of short-lived, cryptographically secure tokens with proper expiration and rotation mechanisms,” he clarified.
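The remediation Kreilein outlines can be sketched with nothing more than the standard library: issue short-lived, cryptographically random tokens and re-issue rather than extend them. This is an illustrative pattern under those stated assumptions, not Qualys code or a production token service.

```python
# Short-lived, cryptographically random session tokens with expiry and
# rotation, per the fix described above.
import secrets
import time

TOKEN_TTL_SECONDS = 300  # five-minute lifetime forces frequent rotation

def issue_token() -> dict:
    return {
        "value": secrets.token_urlsafe(32),  # 256 bits of randomness
        "expires_at": time.time() + TOKEN_TTL_SECONDS,
    }

def validate(token: dict) -> bool:
    return time.time() < token["expires_at"]

def rotate_if_needed(token: dict) -> dict:
    # Re-issue rather than extend: a stolen token stays useful only
    # until its original expiry, bounding an attacker's pivot window
    # inside the trust boundary.
    return token if validate(token) else issue_token()

token = issue_token()
assert validate(token)
token = rotate_if_needed(token)
```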

Kreilein says design vulnerabilities are hard to see, take effort to remediate and require capacity at the back end. The business context delivered through the latest Qualys ROC offering promises to give teams the ability to make risk-informed tradeoff decisions.

Are We Safer Now?

Have IT system security platforms integrated business data before in order to give operations teams a way of locking down technology stack health in a relatively robust way? Yes, of course; this has long been part of the picture. But what Qualys has done might be described as a more defined and deliberate move to make this happen in a holistic, unified way. It's worth remembering that system security (like everything else) is a simple question of economics i.e. if bad actors have to spend more time compromising a system than the prize is worth, they will look elsewhere.

This is not cybersecurity per se; this is systems risk management at the platform engineering level, and it plays an interesting and compelling role in the cloud-native software application development landscape today. One day, we might be able to say that risky business only stays with Tom Cruise.
