Presented by Zscaler
Early in 2023, we saw the explosive adoption of OpenAI’s ChatGPT, accompanied by lay fears of an Artificial General Intelligence (AGI) revolution and forecasts of market disruption. Without a doubt, AI will have a massive and transformative impact on much of what we do, but the time has come for a more sober and thoughtful look at how AI will change the world and, specifically, cybersecurity. Before we do that, let’s take a moment to talk about chess.
In 2018, one of us had the opportunity to hear and briefly speak to Garry Kasparov, the former world chess champion (from 1985 to 2000). He talked about what it was like to play and lose to Deep Blue, IBM’s chess-playing supercomputer, for the first time. He said it was crushing, but he rallied and beat it. He would go on to win more than lose.
That changed over time: he would then lose more than win, and eventually, Deep Blue would win consistently. However, he made a critical point: “For a period of about ten years, the world of chess was dominated by computer-assisted humans.” Eventually, AI alone dominated, and it’s worth noting that today the stratagems used by AI in many games baffle even the greatest masters.
The critical point is that AI-assisted humans have an edge. AI is really a toolkit, made up largely of machine learning and, more recently, LLMs; machine learning in particular has been applied for over a decade to tractable problems like novel malware detection and fraud detection. But there’s more to it than that. We are in an age where breakthroughs in LLMs dwarf what has come before. Even if we see a market bubble burst, the AI genie is out of the bottle, and cybersecurity will never be the same.
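To make that history concrete, here is a minimal sketch, in Python with scikit-learn, of the kind of supervised classifier long used for malware and fraud detection. The features and labels are synthetic stand-ins of our own invention, not a real detection pipeline:

```python
# Minimal sketch of classic ML-based detection (illustrative only).
# Features and labels are synthetic stand-ins for real telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical per-file features: entropy, size (KB), imported-API count.
X = rng.random((1000, 3)) * [8.0, 2048.0, 300.0]
# Synthetic labels: 1 = malicious, 0 = benign. For illustration we pretend
# that high entropy (a common sign of packing) correlates with maliciousness.
y = (X[:, 0] > 6.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The pattern is simple: engineer features from telemetry, train on labeled examples and score new samples. That is exactly the kind of tractable, well-bounded problem machine learning has handled for years.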
Before we continue, let’s make one last stipulation (borrowed from Daniel Miessler): AI so far shows understanding, but it does not show reasoning, initiative or sentience. This is critical for allaying the fears and hyperbole of machine takeover, and for recognizing that we are not yet in an age where silicon minds duke it out without carbon brains in the loop.
Let’s dig into three aspects at the interface of cybersecurity and AI: the security of AI, AI in defense and AI in offense.
Security of AI
For the most part, companies face a dilemma much like the one posed by the advent of instant messaging, search engines and cloud computing: they must adopt and adapt or face competitors with a disruptive technological advantage. That means they can’t simply block AI outright if they want to remain relevant. As with those earlier technologies, the first move is to create private instances, of LLMs in particular, while the public AI providers scramble, much as the public cloud providers of old did, to adapt and meet market needs.
Borrowing the language of the cloud revolution for the era of AI, those looking to private, hybrid or public AI need to think carefully about a number of issues, not least of which are privacy, intellectual property and governance.
However, there are also issues of social justice: data sets can suffer from biases on ingestion, models can inherit those biases (or hold a mirror up to us, showing us truths about ourselves that we should address) and outputs can lead to unforeseen consequences. These risks are critical to consider alongside privacy, intellectual property and governance.
AI in defense
There are also, however, applications of AI in the practice of cybersecurity itself. This is where the AI-assisted human paradigm becomes an important consideration in how we envision future security services. The applications are many, of course, but everywhere there is a rote task in cybersecurity, from querying and scripting to integration and repetitive analytics, there is an opportunity for the discrete application of AI. When a carbon-brained human has to perform a detailed task at scale, human error creeps in, and that carbon unit becomes less effective.
Human minds excel at tasks related to creativity, inspiration and the things a silicon brain isn’t good at: reasoning, sentience and initiative. The greatest potential for silicon, that is, for AI applied to cyber defense, lies in process efficiencies, data set extrapolations, rote task elimination and so on, so long as the dangers of leaky abstraction are avoided: the user must understand what the machine is doing on their behalf.
For example, guided incident response is developing right now: a co-pilot (not an auto-pilot) approach that can help project an attacker’s next steps, help security analysts learn faster and make the human-machine interface more efficient. Yet we need to make sure those who have this incident response flight assistance understand what is put in front of them, can disagree with the suggestions, make corrections and apply their uniquely human creativity and inspiration.
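As a sketch of what a co-pilot (not auto-pilot) loop can look like in code, consider the outline below. It is illustrative only: suggest_next_steps is a hypothetical stand-in for an LLM-backed recommendation service, and the point is the approval step, where the analyst can accept, revise or reject every suggestion before anything executes:

```python
# Sketch of a human-in-the-loop incident response co-pilot (illustrative).
# suggest_next_steps is a hypothetical stand-in for an LLM-backed service.
from dataclasses import dataclass

@dataclass
class Suggestion:
    action: str     # e.g., "isolate host", "reset credentials"
    rationale: str  # the model's explanation, shown to the analyst

def suggest_next_steps(alert: dict) -> list[Suggestion]:
    # Placeholder logic; a real system would call a model or service here.
    return [Suggestion("isolate host " + alert["host"],
                       "Beaconing pattern resembles known C2 traffic.")]

def run_copilot(alert: dict) -> None:
    for s in suggest_next_steps(alert):
        print(f"Suggested: {s.action}\nWhy: {s.rationale}")
        choice = input("[a]ccept / [e]dit / [r]eject? ").strip().lower()
        if choice == "a":
            print(f"Executing (with audit log): {s.action}")
        elif choice == "e":
            revised = input("Enter revised action: ")
            print(f"Executing analyst-revised action: {revised}")
        else:
            print("Rejected; nothing executed.")  # the human stays in control

run_copilot({"host": "wks-0042"})
```

Nothing here acts autonomously: the model proposes, the human disposes, and every decision leaves an auditable trail.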
If this is starting to feel a little like our previous article on automation, it should! Many of the issues highlighted there, such as creating predictability for attackers to exploit by automating, can now be accounted for and addressed with applications of AI technology. In other words, the use of AI can make the automation mindset more feasible and effective. For that matter, the use of AI can make a zero trust platform for parsing the IT outback’s “never never” (mentioned in our other article on visibility in network transformation) much more effective and useful. To be clear, these gains are not free or simply granted by deploying LLMs and the rest of the AI toolkit, but they become tractable, manageable projects.
AI in offense
Security itself needs to be transformed because adversaries themselves are using AI tools to supercharge their own transformation. In much the same way that businesses can’t ignore AI lest they be disrupted by competitors, Moloch drives us in cybersecurity because the adversary is also using it. This means that people in security architecture groups have to join corporate AI review boards, and potentially lead the way, in considering the adoption of AI.
In conclusion, we are entering an era not of AI dominance over humans but of potential AI-assisted human triumph. We can’t keep the AI toolkits out, because competitors and adversaries are going to use them; the real issue is how to put the right guidelines in place and how to flourish. In the short term, adversaries in particular are going to get better at phishing and malware generation. We know that. In the long term, however, the applications in defense, wielded by the defenders of those who build amazing things in the digital world, and the ability to triumph in cyberconflict far outstrip the capabilities of the barbarians and vandals at the gate.
To see how Zscaler is helping its customers reduce business risk, improve user productivity and reduce cost and complexity, visit https://www.zscaler.com/platform/zero-trust-exchange.
Sam Curry is VP, CISO at Zscaler. Sanjit Ganguli is VP Transformation Strategy & Field CTO at Zscaler. Nathan Howe is VP Emerging Technology at Zscaler.