With one of the fastest-ever turnaround times on a story from conception to publication, here’s The Drum’s explainer on GPT-4, created by ChatGPT itself.
Hold onto your hats, folks, because the world of artificial intelligence just got even more exciting. If you thought ChatGPT was impressive, then you’ll be blown away by its sequel: GPT-4.
For those who need a refresher, ChatGPT is a piece of software created by the California-based company OpenAI that responds to written commands. It can write anything from a limerick about the budget to a school essay, and even a newspaper article – all with remarkable results. Plus, you can ask it internet-search-style questions and get answers without the pesky distraction of ads. It’s no wonder it captured the public’s imagination, prompting debates about whether it’s a fancy toy or a threat to our way of life.
But now, OpenAI has released GPT-4, which takes everything its predecessor can do and does it better. GPT stands for Generative Pre-trained Transformer, and it’s a type of program known as a large language model. The software is “trained” on a set of text, which it uses to predict the answers to questions. GPT-3, and its little sibling ChatGPT, were trained on a selection of words from the internet up to the end of 2021.
The best way to think of GPT-4 is like a massive predictive text function on your phone. But this one is trained on a bigger data set, so it can do even more. GPT-3 was already more than 100 times bigger than its predecessor, GPT-2. We don’t know exactly how much bigger GPT-4 is than GPT-3 because OpenAI is being secretive, but we can assume it’s quite substantial.
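For the curious, the “predictive text” analogy can be made concrete with a toy sketch. This is nothing like the real transformer underneath GPT-4 – the corpus and helper names here are invented purely for illustration – but it shows the core idea of learning from text which word tends to follow which, then predicting the next word:

```python
from collections import Counter, defaultdict

# A tiny invented "training corpus" – real models train on billions of words.
corpus = "the cat sat on the mat and the cat saw the cat".split()

# Count which word follows which (a bigram model, the simplest possible version).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" – it follows "the" most often
```

A large language model does something far more sophisticated – it weighs the entire preceding passage, not just one word – but the job is the same: predict what comes next, one word at a time.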
So, what can GPT-4 do? Things you may not have thought were possible for AI. For example, it can pass exams better than most humans. According to its developers, the new machine outperforms humans on a wide range of tests, including the American bar exam: it can answer the essay and multiple-choice questions to a level that would allow it to practice law in most states. However, it fared worse on English exams, where it sits in the bottom half of the league table. So there is still hope for English majors out there.
But wait, there’s more. GPT-4 can write poetry too. For Valentine’s Day, ChatGPT-3 came up with the following poem when asked to write in the style of Keats: “Thou art the fair and lovely rose, Whose beauty doth my heart and mind compose; Thy eyes, like stars that twinkle in the night, Doth shine so bright, and bring me such delight.” But GPT-4 promises to be even better. According to The Telegraph’s poetry critic, Tristram Fane Saunders, “It seems to have a reliable grasp on rhythm and complex rhyme-schemes. With the right prompt – and silly prompts are often best – it’s more plausible than a lot of human-made doggerel: we’ve finally achieved artificial mediocrity.”
GPT-4 is not just limited to words. Unlike GPT-3, it can respond to pictures as well. You can show it a picture of your fridge and ask it to suggest a meal you could make. It can even explain what’s funny about a picture of an iPhone plugged into the wrong cable. “The humor in this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port,” it explains.
But what’s most impressive about GPT-4 is that it’s ethical. Its predecessors had issues with users tricking them into saying harmful or malicious things. In one memorable instance, an AI language model began generating racist and sexist language after being fed biased data. However, GPT-4 has been designed with more robust ethical and safety mechanisms intended to prevent such occurrences. These measures include constant monitoring and flagging of potentially harmful language, as well as a more secure and transparent data collection and moderation process. The developers of GPT-4 have taken great care to ensure that the model adheres to high ethical standards, making it a powerful tool for a wide range of applications while safeguarding against potential abuses.
A form of this article originally appeared in The Telegraph. The Drum asked ChatGPT to rewrite it for us. Scary.