We have been discussing how to establish scientific criteria for positively identifying and validating both digital diseases and their treatments. We can certainly point to examples where such methods work in the biological world, and I would surmise the same applies in the digital world.

The first important step is choosing to look for issues. Any organization or individual that identifies a problem must then make a decision: either ignore it and hope it just goes away, or take some action. Interestingly enough, ignoring some issues and letting nature take its course sometimes works well in the biological world.

That may have set a terrible precedent in our minds. While it may look like inaction, doing nothing in the biological world is not the same as ignoring the problem: our built-in natural healing processes can remedy many afflictions. Something is being done even if we don't take explicit action.

We don’t have that luxury in the digital world. We see small moves toward designing self-healing systems, but they can’t create new fixes as biological systems do. While I believe that someday digital systems will evolve to self-heal, that time isn't today.

The reality is that, by and large, someone must look at the information and then decide to take action, which is a big part of the problem today. As previously discussed, we lack ways to identify issues, prioritize them, and take the correct course of action, and in the end someone must decide to allocate resources to fix the problem.
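Even a crude triage step is better than none: score each finding by severity and by how exposed the affected system is, then work from the top of the list. Here is a minimal sketch, using entirely hypothetical findings and an arbitrary severity-times-exposure score rather than any particular product's scoring model:

```python
# A crude triage step: rank findings by severity and exposure.
# The findings and weights below are hypothetical, for illustration only.

findings = [
    {"host": "db-internal",  "issue": "outdated TLS library", "cvss": 7.5, "internet_facing": False},
    {"host": "web-frontend", "issue": "Heartbleed",           "cvss": 7.5, "internet_facing": True},
    {"host": "build-server", "issue": "weak SSH ciphers",     "cvss": 4.3, "internet_facing": False},
]

def priority(finding):
    # Exposed systems get a higher weight; the factor of 2.0 is an arbitrary choice.
    exposure = 2.0 if finding["internet_facing"] else 1.0
    return finding["cvss"] * exposure

# Highest-priority findings first.
for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):5.1f}  {f['host']:<13} {f['issue']}")
```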

Unfortunately, a common decision is to either postpone remediation or simply ignore the issue and hope it will go away, or at least not manifest itself in a way that forces the digital equivalent of an emergency room visit and possibly intensive care. There are many reasons for this. Sometimes it is as simple as not daring to make a change that could impact the service associated with the problem; sometimes it is a lack of understanding of what bad things could happen as a consequence.

The infamous Heartbleed bug had everyone scrambling for an emergency fix, and in many cases the people responsible for the affected systems patched them. In other cases, a lack of knowledge and of globally agreed-upon fixes led to snake oil remedies, or to plainly ignoring the problem in the hope that it wouldn't lead to intensive care. You can be sure this is still the case today.
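The vulnerable window was, at least, well documented: OpenSSL 1.0.1 through 1.0.1f (CVE-2014-0160). As a minimal sketch, assuming a Python environment, one could check whether the OpenSSL build linked into the interpreter falls in that range. Note that many distributions backported the fix without changing the version number, so a version check like this is only a first hint, never a verdict:

```python
import ssl

# Heartbleed (CVE-2014-0160) affected OpenSSL 1.0.1 through 1.0.1f.
# ssl.OPENSSL_VERSION_INFO is (major, minor, fix, patch, status);
# the letter suffix maps to the patch field, so 1.0.1f -> patch 6.
VULNERABLE_LOW = (1, 0, 1, 0)
VULNERABLE_HIGH = (1, 0, 1, 6)

def in_heartbleed_range(version_info=ssl.OPENSSL_VERSION_INFO):
    """True if the linked OpenSSL version falls in the Heartbleed range.

    Caveat: distributions often backported the fix without bumping the
    version, so this is a first-pass hint, not a diagnosis.
    """
    return VULNERABLE_LOW <= version_info[:4] <= VULNERABLE_HIGH

print(ssl.OPENSSL_VERSION)
print("possibly vulnerable" if in_heartbleed_range()
      else "outside the Heartbleed version range")
```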

We call this the long tail of vulnerabilities. Initially, millions of systems were affected by the bug. Heartbleed is still tracked in our Arctic EWS service, which tells us that the number of unique IP addresses with the vulnerability has dropped from an average of 200K in July 2020 to an average of 150K in April 2022. That is a measly reduction of roughly 25 percent in almost two years, and these systems will continue to stay online for years to come.

Typically, organizations don't decide to take action until they're forced to, when inaction leads to financial losses greater than the cost of the fix. Unfortunately, many security problems don't become visible without a bit of digging and data gathering, which takes effort. When that effort isn't spent, organizations end up with a false estimate of where they sit on the potential financial loss spectrum.
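The trade-off behind that decision can be made explicit with the classic risk-analysis arithmetic of annualized loss expectancy: the expected yearly loss is the cost of a single incident multiplied by how often such an incident is expected to occur, and remediation pays off when it costs less than the loss it prevents. A minimal sketch, with figures invented purely for illustration:

```python
# Classic risk-analysis arithmetic: annualized loss expectancy (ALE).
# All figures below are invented for illustration only.

single_loss_expectancy = 250_000   # estimated cost of one incident (e.g. breach cleanup)
annual_rate_of_occurrence = 0.2    # expected incidents per year (one every five years)
remediation_cost = 30_000          # one-off cost of fixing the vulnerability

ale = single_loss_expectancy * annual_rate_of_occurrence  # expected loss per year

print(f"Expected annual loss: {ale:,.0f}")
print(f"Remediation cost:     {remediation_cost:,.0f}")
print("Fixing it is the rational choice" if remediation_cost < ale
      else "Ignoring it looks cheaper - until the estimate turns out to be wrong")
```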

We have seen financial losses due to security issues increase steadily over the last several decades, but not to a level that forces organizations to take preventive measures with enough enthusiasm to get digital diseases under control effectively. 

So I guess the burning question is: what will it take to get us to the next step?

The next blog article will dive into who gets to define the acceptable actions for cybersecurity problems.
