OpenAI has announced a major upgrade to the technology that underpins ChatGPT, the seemingly magical online tool that professionals have been using to draft emails, write blog posts and more. If you think of ChatGPT as a car, the new language model known as GPT-4 adds a more powerful engine. The old ChatGPT could only read text. The new ChatGPT can look at a photo of the contents of your fridge and suggest a dinner recipe. The old ChatGPT scored in the 10th percentile on the bar exam. The new one was in the 90th. In the hours since its release, people have used it to create a website from a hand-drawn sketch or look through a dating website for an ideal partner.
But this is the fun part of unleashing a powerful language model to the public. The honeymoon period. What are the long-term consequences? OpenAI (once again) hasn’t disclosed the datasets it used to train GPT-4, which means researchers can’t scrutinize the model to determine how it might inadvertently manipulate or misinform people. More broadly, though, GPT-4 ushers in a new era of hyper-efficiency, one in which professionals will have to work smarter and faster — or perish.
There is no better example of this than Morgan Stanley, which has been using GPT-4 since last year. According to an announcement by the bank on Tuesday, Morgan Stanley trained GPT-4 on thousands of papers published by its analysts on capital markets, asset classes, industry analysis and more, to create a chatbot for its own wealth advisers. About 200 staff at the bank have been using it daily, the company said.
“Think of it as having our Chief Investment Strategist, Chief Global Economist, Global Equities Strategist, and every other analyst around the globe on call for every advisor, every day,” Morgan Stanley analytics chief Jeff McMillan said in an official statement.
But here was the line that really stood out from OpenAI’s own write-up of the case study:
“McMillan says the effort will also further enrich the relationship between Morgan Stanley advisors and their clients by enabling them to assist more people more quickly.”
How much more quickly? A spokesperson for Morgan Stanley tells me its advisers can now do in seconds what they used to do in half an hour, such as looking at an analyst’s note to advise a client on the performance of certain companies and their shares.
Powerful AI systems like GPT-4 aren’t going to replace large swaths of professional workers, as many have instinctively feared. But they will put those workers under greater pressure to work faster and produce more. They will raise the bar on what is considered acceptable output and usher in an era of ultra-efficiency unlike anything we’ve seen before.
That is partly what happened to professional translators and interpreters. As artificial intelligence tools like Google Translate and DeepL grew in popularity among business customers, many translators feared they would be replaced. Instead, they were expected to increase their output.
Before the advent of translation tools, a professional would be expected to translate between 1,000 and 2,000 words a day, according to Nuria Llanderas, who has been a professional interpreter for more than 20 years. “Now they are expected to manage 7,000,” she says. Her industry peers predict more AI systems will start supporting them on simultaneous translation, but in practice that could also mean more work for the humans, who will have to check that the machine’s output isn’t wrong.
It will also raise the bar on the humans’ performance. “With the extra help you have no excuses to leave anything out,” Llanderas adds.
Much of this is typical of the march of technology. Smartphones allowed us to be connected to work at all times. Slack allowed us to communicate with more people inside a company, more seamlessly. But such tools also kept us further chained to work, squeezing out minutes in the day that workers might have used in the past for contemplation, strategic thinking or just taking a breather.
GPT-4 clearly has the potential to wring more value out of human workers, but it may well come at the cost of our mental energy. However brilliant these models become, watch out for how they might take you a tiny step closer to burnout.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”
©2023 Bloomberg L.P.