AI & Cybersecurity: The Good, The Bad, and The Ugly (maybe)

Daisy, daisy, give me your answer do...

Image: The Good, the Bad and the Ugly movie poster

If you’ve never played around with ChatGPT, go ahead and fiddle around a bit, then come back. I’ll wait.

Cool, right?! People have used it to write and analyze code and to dabble with security issues. But as powerful as something like ChatGPT is, its impact on security is probably going to be one of those “you might feel a little pressure” moments, right before your doctor plunges a 10-gauge needle into your arm. It’ll be alright in the end, but the journey there is going to hurt like a ************.

The Bad

The problem with AI and security is that AI does a lot of the heavy lifting, but only at the lower levels of performance.

Wait, isn’t that a good thing?

Well, no. Because AIs are good, just not good enough. An example? I asked ChatGPT to assess who was going to win the war in Ukraine and why. One of the arguments it offered in Ukraine’s favor? Ukraine was a member of NATO, and NATO has this thing called Article 5.

Yeah…

So the detriment AI brings in the short term is that you don’t need entry-level people; you need more senior people who really, truly know their s*** to oversee the AI and make sure it’s not making stupid who-is-and-isn’t-in-NATO-level mistakes. The AI might be right a lot of the time, but you need to catch it and correct it when it is wrong; otherwise things are going to go sideways to spectacular effect.

We don’t have enough people in this field, period. We’re chronically short of people with the depth and breadth of experience needed to oversee teams of novices (or novice AIs). AI does nothing to alleviate the various fatigues and burnouts we’re currently suffering.

Yet.

The Good

The longer-term prospects for AI in security, on the other hand, are a lot more promising, mostly because of issues of scale.

Imagine, if you will, a well-trained, consistently accurate, and reliable AI, and the impact it would have on security training. Now you are producing practitioners to a high standard at a scale that was heretofore unthinkable. That generation, once deployed, eventually starts training the AI that trained them, further refining and enhancing its performance and the caliber of the next generation.

You know how we’re always complaining about the advantages offense has over defense? Well, how interesting do things get when those gaps are closed, or at least closing asymptotically?

The Ugly

For the past 30 years cybersecurity has been a growth industry. It has always been understaffed and under-equipped. The market for security products and services is something north of $150B as these words are being typed, and those numbers have always run from lower left to upper right.

But what happens to that market when products and services get so good, so fast, that the need for more drops away? What happens when security and response technology is darn near perfect? What happens when security company stocks are like shares in Mr. Ponzi’s retirement scheme, and a job in security is like being a navigator on a commercial airliner circa the late 1980s?

Nobody is owed a return or a job, but suddenly finding yourself on the wrong end of an epic loss of value and a massive labor surplus has implications beyond the personal. “Just learn to code” and similar facile remedies don’t compute anymore when one GIAC and her AI are doing the work a brigade-sized element of people used to do. Intellectually we should be happy we’re not needed anymore, but that will be cold comfort in real terms.

I’m old enough to remember when the big threat to the planet was global cooling, and when robots and foreigners were going to take all our jobs. Most of our societal nightmares don’t come true, but at least in this space I’m not sure we’ve ever been this close to an actual revolution. And not the color-type revolution, either; the messy, head-separating type.