OpenAI CEO Sam Altman acknowledges risks of ChatGPT-style tech: ‘A little bit scared of this’


(SAN FRANCISCO, Calif.) — In an exclusive interview with ABC News, OpenAI CEO Sam Altman spoke about the future of artificial intelligence technology, including GPT-4, the latest model behind his company’s flagship product ChatGPT. He believes AI will reshape society as we know it, and says his company has a healthy fear of it.

“We’ve got to be careful here,” he said. “I think people should be happy that we are a little bit scared of this.”

Altman emphasized the importance of external oversight for this new category of technology. Regulators and society should be as involved as possible, he said, insisting that feedback will help deter potential negative side effects.

ChatGPT is built on an AI language model, a vastly complex network loosely modeled on the neural structure of the human brain. GPT stands for ‘generative pre-trained transformer.’

GPT-4 has scored in the 90th percentile on the Uniform Bar Exam and earned a near-perfect score on the SAT math section.

As these systems become more capable at tasks that have historically required human input, like writing code or drafting text, the company is concerned that bad actors could use them to effectively erase the line between human- and computer-generated content online.

“The thing that I try to caution people the most is what we call the ‘hallucinations problem,’” Altman said. “The model will confidently state things as if they were facts that are entirely made up.”

This issue is largely due to the model’s reliance on reasoning rather than rote memorization. Because the program simply does its best to predict the next word in a sequence, it has no inherent way to know whether what it says is accurate to the real world.
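As a rough illustration only (this is not OpenAI’s code, and GPT-4 uses a far larger transformer network trained on internet-scale text), a minimal word-prediction sketch in Python shows why a system built purely to predict what comes next has no built-in notion of truth:

```python
# Toy next-word predictor: it picks whichever word most often followed the
# current word in its training text. It has no concept of whether the
# resulting sentence is factually correct -- only of what is statistically likely.
from collections import Counter, defaultdict

training_text = "the sky is blue the sky is clear the grass is green"

# Count which word follows each word in the training data.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follow_counts[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word, or None if the word is unseen."""
    if word not in follow_counts:
        return None
    return follow_counts[word].most_common(1)[0][0]

# Generate a short continuation one word at a time.
sentence = ["the"]
for _ in range(3):
    nxt = predict_next(sentence[-1])
    if nxt is None:
        break
    sentence.append(nxt)

print(" ".join(sentence))  # e.g. "the sky is blue" -- plausible, never verified
```

The toy model will confidently continue any prompt with whatever looks most familiar, which is the same basic failure mode, writ small, that Altman describes as hallucination.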

The AI still relies on human input, though, so fears of an autonomous intelligence making its own decisions, à la HAL 9000, seem manageable for now. OpenAI has also put limitations on its software, blocking the model from conversations about, for example, how to construct an improvised explosive device.

“A thing that I do worry about is… we’re not going to be the only creator of this technology,” Altman said. “There will be other people who don’t put some of the safety limits that we put on it.”

Altman says there are a few solutions and safeguards against these potential hazards. One of them: let society toy with ChatGPT while the stakes are low, and learn from how people use it.
