The dizzying capacity of OpenAI's tools to vacuum up vast amounts of data and spit out custom-tailored content has ushered in all sorts of worrying predictions about the technology's ability to overwhelm everything, including cybersecurity defenses.
Indeed, ChatGPT's latest iteration, GPT-4, is smart enough to pass the bar exam, generate thousands of words of text, and write malicious code. And thanks to a stripped-down interface that anyone can use, concerns that OpenAI's tools could turn any would-be petty thief into a technically savvy malicious coder in moments were, and still are, well-founded. ChatGPT-enabled cyberattacks started popping up just after its user-friendly interface premiered in November 2022.
OpenAI co-founder Greg Brockman told a crowd gathered at SXSW this month that he is concerned about the technology’s potential to do two specific things really well: spread disinformation and launch cyberattacks.
“Now that they’re getting better at writing computer code, [OpenAI] could be used for offensive cyberattacks,” Brockman said.
No word on what OpenAI intends to do to mitigate the chatbot’s cybersecurity threat, however. For the time being, it appears to be up to the cybersecurity community to mount a defense.
Safeguards are in place to keep users from using ChatGPT for unintended purposes, or for content deemed too violent or illegal, but users are quickly finding jailbreak workarounds for those content limitations.
Those threats warrant concern, but a growing chorus of experts, including the UK's National Cyber Security Centre (NCSC) in a recent post, is tempering concerns over the true danger that ChatGPT and large language models (LLMs) pose to enterprises.
ChatGPT’s Current Cyber Threat
Chatbot output can save time on less complex tasks, but when it comes to expert work like writing malicious code, ChatGPT's ability to do so from scratch isn't really ready for prime time yet, the NCSC's blog post explained.
“For more complex tasks, it’s currently easier for an expert to create the malware from scratch, rather than having to spend time correcting what the LLM has produced,” the ChatGPT cyber-threat post said. “However, an expert capable of creating highly capable malware is likely to be able to coax an LLM into writing capable malware.”
The problem with ChatGPT as a cyberattack tool on its own is that it lacks the ability to test whether the code it’s creating actually works or not, says Nathan Hamiel, senior director of research with Kudelski Security.
“I agree with the NCSC’s assessment,” Hamiel says. “ChatGPT responds to every request with a high degree of confidence whether it’s right or wrong, whether it’s outputting functional or nonfunctional code.”
More realistically, he says, cyberattackers could use ChatGPT the same way they use other dual-use tools, such as those built for pen testing.
ChatGPT Threat “Massively Overhyped”
The harm to IT teams is that the overblown cybersecurity risks being ascribed to ChatGPT and OpenAI are diverting already scarce resources away from more immediate threats, as Jeffrey Wells, partner at Sigma7, points out.
“The threats from ChatGPT are massively overhyped,” Wells says. “The technology is still in its infancy, and there is little to no reason why a threat actor would want to use ChatGPT to create malicious code when there is an abundance of existing malware or crime-as-a-service (CaaS) that can be used to exploit the list of known and growing vulnerabilities.”
Rather than worrying about ChatGPT, enterprise IT teams should focus their attention on cybersecurity fundamentals, risk management, and resource allocation strategies, Wells adds.
The value of ChatGPT, like that of an array of other tools available to threat actors, comes down to its ability to exploit human error, says Bugcrowd founder and CTO Casey Ellis. The remedy is human problem-solving, he notes.
“The entire reason our industry exists is because of human creativity, human failures, and human needs,” Ellis says. “Whenever automation ‘solves’ a swath of the cyber-defense problem, the attackers simply innovate past these defenses with newer techniques to serve their goals.”
But Patrick Harr, CEO of SlashNext, warns organizations not to underestimate the longer-term threat ChatGPT could pose. Security teams, meanwhile, should look to leverage similar LLMs in their defenses, he says.
"Suggesting that ChatGPT is low risk is like putting your head in the sand and carrying on like it doesn't exist," Harr says. "ChatGPT is only the start of the generative AI revolution, and the industry needs to take it seriously and focus on developing AI technology to combat AI-borne threats."