Why Can’t We Figure Out This Cybersecurity Thing? Let’s Start by Avoiding the Noise and Hype.

With RSA 2019 happening this week, some of the world’s best minds are coming together to try to figure out this darned cybersecurity thing. In fact, their theme this year is so transparent that it leaves little to interpretation. It is as follows:

“This year’s theme is, to put it simply, Better. Which means working hard to find better solutions. Making better connections with peers from around the world. And keeping the digital world safe so everyone can get on with making the real world a better place.”

With so much of our world dependent on secure and stable IT networks, it’s a fair question to ask: why aren’t we better already? It’s not as if this IT explosion crept up on us. Smart people, dating back decades, identified vulnerabilities and sounded the alarm. So what gives?

Part of it – and I would dare to say most, if not all, of it – is a mentality issue. A couple of decades ago, nobody really wanted to talk about vulnerabilities for fear of a lawsuit. In fact, if you pointed out vulnerabilities, you could be asking for some trouble of your own. But, at least with the big companies, we have seen a bit of a shift. More are willing to listen to you, and bug bounty programs are gaining a toehold. Still, I don’t think that alone will solve the problem. What’s needed is a wholesale mentality shift, one that refocuses on the basics and moves technology from the “crutch” category to the “tool” category.

Technology Has Become a Crutch and We’re About to Lean on It Some More

Where you see a big conference, you will see many vendors. Some like to promise you the sky, along with a bag of beans for the magic beanstalk that gets you to your rightful throne above the clouds. All good, as this is expected at these types of events. Personally, I’m more drawn to the vendors that will tell me what their product doesn’t do.

That aside, you can surely expect AI to be hot talk, especially if it is pitched as a solution to the struggle to find human workers. Expect the marketplace to razzle-dazzle us with the wondrous possibilities of Machine Learning and Big Data.

If you’ve been keeping up with my pieces over the last couple of years, you’ll see I’ve taken a few cautious taps at AI, while still showing support. Time to clarify for those who are unsure about my stance: It’s the development and application of AI that bother me, not the actual technology. In fact, I think the technology is absolutely necessary.

On the development side, I’ve seen some work and commentary on this issue, but it definitely feels like it’s in the minority: Is the algorithm correct? Let’s split this question into four discrete parts. First, have we defined what “correct” is? I’m not sure we have. Second, who is developing the algorithm? It matters. Third, what is the algorithm’s purpose? That also matters. And finally, what confidence do we have that the algorithm is performing as expected? Are we just taking somebody’s word for it, or do we have the capability to independently verify that this piece of tech is doing what it’s supposed to do?
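On that last question, independent verification can be as simple as keeping your own labeled test cases and measuring the tool against them. Here is a minimal sketch in Python, assuming a vendor tool we can only call as a black box; the `vendor_detector` function and the cases below are hypothetical stand-ins, purely for illustration:

```python
# A minimal sketch of independent verification: treat the vendor's
# detector as a black box and measure it against cases we control.
# 'vendor_detector' and the labeled cases are hypothetical stand-ins.

def vendor_detector(event: dict) -> bool:
    """Stand-in for a vendor's opaque 'malicious or not' verdict."""
    return event.get("failed_logins", 0) > 3  # in reality, we never see this logic

# Cases we curated ourselves, with ground truth we independently know.
known_cases = [
    ({"failed_logins": 10}, True),   # a confirmed brute-force incident
    ({"failed_logins": 0},  False),  # known-benign activity
    ({"failed_logins": 2},  True),   # a slow, low-volume attack
    ({"failed_logins": 1},  False),  # more known-benign activity
]

hits = sum(vendor_detector(event) == truth for event, truth in known_cases)
accuracy = hits / len(known_cases)
print(f"accuracy on our own labeled cases: {accuracy:.0%}")

if accuracy < 0.9:
    print("The tool is not performing as expected; our confidence is misplaced.")
```

If the measured performance falls short of the vendor’s claims, you have learned it on your own terms rather than taking somebody’s word for it.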

Like, really: are we dealing with a precision surgical tool here or a chainsaw? I prefer the former, but I have no way of knowing for sure whether it’s being deployed as the latter.

Does the Application Meet the Need?

Next, what are we using AI for? Do we really know? As I note in a March 2019 article revisiting an October 2016 piece titled “Has Information Gone Rogue?” (also available on LinkedIn), there is a clear marriage between cybersecurity and information warfare, especially if you know how to game the system. I’ll pull again the line from Andy Patel’s original article:

“Once an adversary understands how those underlying algorithms work, they’ll game them to their advantage.”

In this case, Andy Patel is talking about social network analysis, but the premise is true for any application. If the adversary can figure out how the underlying algorithms work, they’ll manipulate them. And as we’re beginning to see, the tools we use to defend are being used by criminals to attack.
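To make “gaming the algorithm” concrete, here is a toy sketch with a made-up scoring rule; the keyword list and threshold are hypothetical, but the dynamic is exactly Patel’s point: once the rule is known, the adversary rewrites the same payload to slide under it.

```python
# Toy illustration (hypothetical rule): a naive filter counts "bad"
# keywords and flags any message scoring at or above a threshold.
BAD_WORDS = {"password", "urgent", "verify"}
THRESHOLD = 2

def score(message: str) -> int:
    """Count how many words of the message appear on the keyword list."""
    return sum(word in BAD_WORDS for word in message.lower().split())

def flagged(message: str) -> bool:
    return score(message) >= THRESHOLD

naive = "urgent: verify your password now"
gamed = "time-sensitive: confirm your pass-word now"  # same ask, reworded

print(flagged(naive))  # True  -- the obvious phrasing is caught
print(flagged(gamed))  # False -- knowing the rule, the adversary slides under it
```

Real detection systems are far more sophisticated than this, of course, but the asymmetry is the same: the defender’s rule is fixed, while the attacker gets unlimited tries against it.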

Ah, what a day it will be when AI is used to alter AI!

That is why we need to be cautious about treating algorithms as scripture. Not only could their foundations have some inherent flaw built into them, but they can also be gamed. Decisions will then be based on flawed assumptions and analysis. This is how you find yourself down the road of unintended consequences, with second-, third-, and fourth-order effects that you cannot easily predict in a complex world.

Or to be a tad more colloquial about it: you may have just burned a whole lot of good money and increased your risk profile because your confidence in something was misplaced.

And right there is a major problem: confidence in something that may be flawed. Studies have shown a sobering fact: if we believe we can influence the outcome of a scenario, we can act in ways that are riskier, by a factor of as much as 1,000. Keep in mind: this applies to all types of tech, not just algorithms.

So, allow me to translate: if I am confident that my tools can alter a scenario to my advantage, i.e., make me more secure, I am susceptible to riskier behavior.

That’s completely fine, except of course if the confidence is misplaced. Put another way: is the tech doing what you really expected it to do? That’s where the confidence aspect comes in. Heaven help us if a database like the one used by law enforcement to track “negative” behavior for “risk” scoring ever becomes compromised or manipulated.

All of which brings me back to the development and application of AI (and other cybersecurity tools). We need to be sure about what they’re doing if they are to be used properly as tools and not crutches. A crutch will break eventually if we become overconfident in its capabilities.

You’re Meddling With Powers You Cannot Possibly Comprehend

If you’re an Indiana Jones fan, you’ll recognize that classic line. I’m not scared of tech. In many ways, I’m a tech junkie. But I am scared of how we use the tech. The tech, in and of itself, is actually quite benign. It’s the application of it that can make our entire world go bonkers.

That’s why I am still a fan of the basics, such as:

- encryption and hashing (see the sketch below)
- patching and backing up
- employee and personal training
- a security-minded organizational culture
- understanding the business drivers that make the organization tick
- honest communication between decision makers
- good ole gap analyses
- ready-to-go incident response, including back-end remediation ready in a flash
- contractors/consultants you can rely on
- smart network design, which includes cleaning up all the tools that have left you “suffering from investments in disjointed, non-integrated security products that increase cost and complexity”

You don’t need any shelfware in your life. And you really don’t need any expensive shelfware in your life!
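To pick one of those basics, hashing, here is a minimal sketch of a file-integrity check in Python; the file name and the idea of a stored digest are illustrative assumptions, not a prescription:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: Path, expected_digest: str) -> bool:
    """True only if the file still matches the digest recorded earlier."""
    return sha256_of(path) == expected_digest

# Hypothetical usage: record the digest when the backup is made,
# then check it before you ever need to restore.
# recorded = sha256_of(Path("backup.tar"))  # store this somewhere safe
# if not verify(Path("backup.tar"), recorded):
#     print("Backup has been altered; do not trust it.")
```

The point is not the fifteen lines of code; it is that basics like this are verifiable, cheap, and boring, which is exactly why they work.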

If you do all these basic things correctly – and yes, here, in defining the basics, we can define what “correct” is – you will be lowering your risk profile.

And ultimately, that is the goal, isn’t it?

You need to be clear and unambiguous about what you are trying to do across the organization, including how you intend to do it. Punting to the IT folks who will keep on asking for larger budgets for new toys is not the right way. You run out of other people’s money, always.

And with that, I will close with two quotes from very smart and successful people.

First quote, from Dee Hock, that guy who founded Visa. “Simple, clear purpose and principles give rise to complex intelligent behavior. Complex rules and regulations give rise to simple stupid behavior.” Have you defined your cybersecurity purpose and principles? Are they simple and clear? Is everybody on the same page? If not, look forward to some simple stupid behavior.

Next quote is from Nassim Nicholas Taleb. He’s that guy who will do a Twitter tap dance on your head if you try to have an uninformed discussion on risk with him … or if he senses that you are an “Intellectual Yet Idiot.” “They think that intelligence is about noticing things are relevant (detecting patterns); in a complex world, intelligence consists in ignoring things that are irrelevant (avoiding false patterns).”

It seems to me Messrs. Hock and Taleb are saying: In a complex world, do the simple things and avoid the noise and hype. Good advice.

By George Platsis

SDI Cyber Risk Practice
