OpenAI’s Controversial GPT-4: Is It Smarter or Dumber?
OpenAI’s latest large language model (LLM), known as GPT-4, has stirred up quite a controversy in the world of artificial intelligence (AI) development. Some developers claim that GPT-4 has become “lazy” or “dumber,” while OpenAI insists that the model has actually become smarter.
GPT-4, unveiled by OpenAI in March 2023, is widely considered superior to its predecessor, GPT-3.5. It is a “multimodal” model capable of processing both text and images, providing more accurate responses, albeit at a slightly slower speed.
Recent reports, however, suggest that GPT-4 is giving incorrect answers more frequently and showing weaker reasoning and a poorer grasp of input instructions. Users have likened the change to going from an excellent chef to a dishwasher, or from driving a Ferrari to a broken pickup truck. In response, Peter Welinder, OpenAI’s Vice President of Product, took to Twitter to deny that GPT-4 has been made dumber, insisting that each new version is smarter than the last.
Research conducted by a joint team from Stanford University and UC Berkeley lends weight to the complaints. Comparing the March and June versions of GPT-4 on math problem-solving, code generation, and visual reasoning tasks, the team found that the June version performed worse overall.
OpenAI has not provided an explanation for this decline, as the company has chosen not to disclose its core technologies since the release of GPT-4.
Experts in the field speculate that GPT-4 may have adopted a ‘Mixture of Experts’ (MoE) architecture. Rather than running a single monolithic model, this approach routes each query to a set of smaller sub-models, each specialized for a domain such as physics, chemistry, or biology. Proponents argue that this increases efficiency and reduces operating costs, since only a few experts are activated per query. Some even suggest that GPT-4 has, in effect, become a different model altogether.
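The routing idea behind MoE can be sketched with a toy example. Everything here is illustrative: the weights are random, the sizes (`d_model`, `n_experts`, `top_k`) are made-up, and real MoE language models use learned routers and feed-forward expert networks inside each transformer layer, not standalone linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
d_model, n_experts, top_k = 8, 4, 2

# Toy "experts": each is just a random linear map here. In a real MoE
# layer each expert is a trained feed-forward network.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts))

def moe_layer(x):
    """Route a token vector x to its top-k experts and mix their outputs."""
    logits = x @ router_w                # router score per expert, shape (n_experts,)
    top = np.argsort(logits)[-top_k:]    # indices of the k highest-scoring experts
    gate = np.exp(logits[top])
    gate /= gate.sum()                   # softmax over the selected experts only
    # Weighted sum of the chosen experts' outputs; the unselected
    # experts run no computation at all, which is where the savings come from.
    return sum(g * (x @ experts[i]) for g, i in zip(gate, top))

x = rng.standard_normal(d_model)
y = moe_layer(x)
print(y.shape)  # (8,)
```

The key property is that compute per token scales with `top_k`, not `n_experts`, so total model capacity can grow without a proportional increase in inference cost.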
From the perspective of users, regardless of what GPT-4 has become, the main concern is the decline in quality. Therefore, it is crucial for OpenAI to address this issue and provide a solution.
The Future of AI: Key Technological Trends
As AI continues to advance, several notable technological trends are emerging:
1. Autonomous AI Assistants: AI companies are working on “autonomous AI assistants” that can carry out complex tasks on their own, without step-by-step instructions. These assistants, integrated into virtual assistant platforms such as Alexa or Siri, are attracting significant interest from investment firms. In Silicon Valley alone, over 100 startups are reportedly working on this technology.
2. AI in Bio: AI-driven biotech is seeing a significant influx of investment. One example is CureAI, an AI drug discovery company that recently secured $60 million in funding, following an earlier $50 million investment from NVIDIA. Founded six years ago, CureAI has built an AI-powered drug development platform.
OpenAI’s GPT-4 has undoubtedly ignited a debate within the AI community. While some believe the model has regressed in performance, OpenAI asserts that GPT-4 has advanced in its capabilities. As with any rapidly evolving technology, it is inevitable to encounter challenges and setbacks along the way.
What’s important now is for OpenAI to address the concerns raised regarding the decline in quality and offer a solution. Transparency in the development and improvement processes will also be crucial in regaining trust from users and developers alike.
As AI continues to shape our world, it is imperative to closely follow its advancements and potential implications. Stay informed by visiting GPT News Room.