- ChatGPT is currently being used as a time-saving device, but the technology may hold potential for automating more complex tasks.
- The technology can be used to prepare well-crafted corporate messaging, provide policy-writing support, develop realistic-looking phishing emails and assist with identifying software vulnerabilities.
- Chief information security officers share how they see the tool becoming useful and how hurdles, including concerns over data privacy and legal obligations, are slowing adoption.
Chat Generative Pre-trained Transformer is a chatbot developed by research laboratory OpenAI and incorporated into Microsoft Corp.’s Bing search engine. In a previous WSJ Pro Research paper, we addressed ChatGPT and Cyber Risk. In this paper, we discuss the potential benefits of artificial intelligence-driven chatbots, with a focus on ChatGPT, though ChatGPT is only one of many current and future implementations of this technology. Several CISOs also shared their thoughts and concerns.
Current Uses: Increased Efficiency & Productivity
Much of the current usage of ChatGPT revolves around automating time-consuming tasks, thereby increasing productivity and helping understaffed and overburdened security teams focus on the more essential aspects of their roles. Specific examples of increased efficiency and productivity include the following:
- Communication Aid: ChatGPT enables a user to quickly prepare emails, briefings and other forms of written corporate communication in a well-crafted and easy-to-understand format. This is especially helpful for non-native English speakers.
- Policy Writing Support: ChatGPT can create boilerplate policies, which can then be refined and customized for individual organizations.
- Task Automation: Experts believe AI-driven tools like ChatGPT can take on myriad tasks that security teams have traditionally handled manually, including:
- Log file analysis
- Identifying trends in large data sets
- Comparing different versions of software quickly
- Detecting and reporting software vulnerabilities
- Training Tool: Some experts, including David Fairman, chief information officer and CISO at cybersecurity vendor Netskope, highlighted ChatGPT’s ability to quickly create realistic-looking phishing emails customized for a particular business to assist workplace cybersecurity training and awareness. Further, ChatGPT can be used to create samples of code for security operations analysts to inspect.
- Software Development: According to Benjamin Fabre, chief executive of security firm DataDome, ChatGPT is able to accelerate code development, detect vulnerable code and improve code security, thereby helping “to alleviate pains associated with the cybersecurity skills shortage.” The tool can also compare different versions of software, allowing for better understanding of the software and controls in place.
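As a concrete illustration of the log-analysis use case above, the following is a minimal sketch of how an analyst might bundle raw log entries into a prompt for an AI chatbot to triage. This is not a method described in the paper: the `openai` package usage, the model name and the helper function are all assumptions for illustration, and the API call itself is shown commented out so the sketch runs without credentials.

```python
def build_log_triage_prompt(log_lines):
    """Bundle raw auth-log lines into a single prompt asking the
    model to flag suspicious patterns for a human analyst to review."""
    joined = "\n".join(log_lines)
    return (
        "You are a security analyst. Review these auth log entries and "
        "list any suspicious patterns (repeated failures, odd hours, "
        "unfamiliar source addresses):\n" + joined
    )

# Example log excerpt (RFC 5737 documentation address, hypothetical entries):
lines = [
    "Mar 01 02:14:01 sshd[311]: Failed password for root from 203.0.113.7",
    "Mar 01 02:14:03 sshd[311]: Failed password for root from 203.0.113.7",
    "Mar 01 02:14:05 sshd[311]: Failed password for root from 203.0.113.7",
]
prompt = build_log_triage_prompt(lines)

# The actual request would look roughly like this (assumes the openai
# package and an OPENAI_API_KEY in the environment; model name illustrative):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(resp.choices[0].message.content)
```

As the CISOs quoted in this paper stress, any output from such a query would still need human review before action is taken.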
Cybersecurity leaders also see ChatGPT taking on more complex tasks in time, such as ingesting large amounts of data to identify threats such as phishing attacks and potentially even automatically responding to and blocking attacks.
Responsible Usage Required
Adam Keown, CISO at Eastman Chemical Corp., confirmed there are positive uses for ChatGPT and shared his three pillars for safe usage: first, companies must understand and address legal and contractual obligations; next, the business must protect intellectual property; and finally, a human must vet the output of the AI tool to ensure it is accurate and truthful.
Mr. Keown was particularly concerned with the first of his pillars: understanding how data will be protected and having contracts in place with the AI provider. “If the appropriate legal restrictions and contracts aren’t in place, that could open [their company] up to a hot liability situation,” he said.
Protecting intellectual property will become a serious challenge as employees seek to take shortcuts without consideration for the data privacy risks. Mr. Keown said: “Employees must be taught what confidential data looks like” to guard against users entering sensitive corporate information into systems that have no obligation to protect it.
To his last pillar, Mr. Keown pointed to the ease of standard policy creation. For example, a security officer can ask an AI chatbot for a policy covering appropriate website use. He said that a human still has to carefully review the output “because the tool can provide wrong answers.” Though he noted policies could be found online through searching, he said ChatGPT “can be more to the point.”
Skilled Professionals Still Needed
Staples Inc. CISO Jeffrey Moore sees potential for AI chatbots to help with some of the most routine challenges cybersecurity teams face: understanding and fixing vulnerable software.
Mr. Moore said that lower-level security operations center analysts will be able to engage the technology to answer basic questions about vulnerabilities: “You can quickly query it and say ‘is this in the wild? Can you give me some examples of what’s happening here? Give me the best practices on how to mitigate.’” He added: “You’d normally have someone like myself who would know it by heart, but now [the analyst] doesn’t have to go up the chain to get that information.”
Fixing software vulnerabilities often means working with information technology staff who are not familiar with security issues. In this case, Mr. Moore said he can ask ChatGPT to put the details of the fix “in the lowest common denominator language.”
Some companies, including JPMorgan Chase & Co. and Verizon Communications Inc., have blocked access to ChatGPT, but Mr. Moore did not see a ban as the way to mitigate risk.
Mr. Moore said such bans are likely to be porous because employees will continue to use the app on personal devices. “In the end, everyone with a phone or a laptop will be able to get to [ChatGPT] without going through the network.” For this reason, he said, “Whatever policies you write are kind of useless because there are so many ways around it.” Mr. Moore said this is why user awareness of risks is so critical.
ChatGPT has simultaneously raised hopes of it helping to address the talent shortage in cyber, while also raising concerns about replacing humans with technology. However, Mr. Moore doesn’t think ChatGPT will have the latter effect. “ChatGPT has no context of your organization. ChatGPT doesn’t know who to call to get more info. This is a reason why it won’t displace workers at this point. Ten years from now may be different.”
Don’t Be Hasty
The current and future benefits of AI chatbots for cybersecurity outlined in this paper are tangible and real. However, organizations should develop a plan and consider policies governing how the technology will be used before incorporating it into their toolkits. Using the technology wisely is more important than using it early.
WSJ Pro Research is a premium membership that supports executive decision making on critical business issues by supplementing the news with timely, in-depth research and data.
All WSJ Pro Cybersecurity research reports, webinars, events and data are available at wsj.com/pro/cybersecurity/research