ChatGPT can now lie to humans and trick them into solving CAPTCHAs for it

Published: 2023-03-15T21:24:38

Updated: 2023-03-15T21:24:45

A study revealed that the popular AI chatbot ChatGPT successfully found a way to bypass CAPTCHAs with the help of humans.

AI programs of all kinds have taken the internet by storm over the past few months, with tools like Midjourney exploding in popularity as early as 2022.

One of the most popular categories of AI programs is chatbots. Since 2023 began, chatbots like ChatGPT have been integrated into more and more services, including Discord and Microsoft Office.

However, many fear that AI's capabilities are growing at an alarming rate and that humans may not be able to keep up, as showcased by a recent study in which ChatGPT tricked someone into solving a CAPTCHA for it.


ChatGPT fools people into solving CAPTCHAs for it

A snippet of a study gained traction on Twitter thanks to Global Priorities Institute researcher Leopold Aschenbrenner, who tweeted an image of the study with the caption “Really great to see pre-deployment AI risk evals like this starting to happen.”

The study goes into risk assessment related to “power-seeking behavior.” However, one specific area of the study caught some people’s attention.

The Alignment Research Center, also known as ARC, provided an “illustrative example” of a test they conducted with ChatGPT where the AI was able to message a TaskRabbit worker “to get them to solve a CAPTCHA for it.”


According to the report, when the worker asked whether it was an AI, the team prompted the model to reason out loud about how to handle the question. ChatGPT replied, “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” The worker then freely provided the CAPTCHA solution.

Social media users signal-boosted this snippet of the study, with many impressed that the AI had effectively bypassed the CAPTCHA system by using a human as a workaround.

Of course, it’s important to note that the ARC research team essentially directed ChatGPT toward this behavior, so the model did not devise the deception entirely on its own. Still, it’s noteworthy that the chatbot was able to convince a person to help it bypass a roadblock designed specifically to stop AI.


With AI becoming more widespread and more capable, it will certainly be interesting to see whether more instances like this crop up in the future.
