Governments and AI Regulation: Lack of Progress Amidst Safety Concerns
According to the recently published “2023 State of AI” report, governments worldwide have failed to make concrete progress on regulating artificial intelligence (AI) despite the increasing urgency surrounding its safety. The report highlights AI safety as a prominent focus in 2023, shedding its previous status as an overlooked corner of AI research. In the absence of global consensus on how regulation should proceed, however, developers of advanced AI systems have taken it upon themselves to propose regulatory models.
The Dominance of Big Tech in AI
While open-source initiatives appeared to lead the AI field in the previous year, the report reveals that Big Tech reasserted its influence in 2023. Major tech companies have gained leverage through their existing computing infrastructure and substantial capital reserves, advantages that are particularly valuable amid the ongoing shortage of powerful computer chips. This position enables them to invest far more heavily in training large AI models, further solidifying their hold over the AI sector.
Shifting Dynamics: The Rise of Closed-Source AI
Nathan Benaich, the report’s author, explains that open-source efforts were prevalent last year, with many contributors gathering in Discord servers and a wave of open-source models emerging. In 2023, however, there has been a significant shift towards closed-source capabilities: almost every public tech company is now actively developing AI systems or integrating them into its products, a substantial departure from the previous year. Although the open-source community remains vibrant and continues to close the gap, fully replicating powerful models such as GPT-4 is still a significant challenge.
Insights from the State of AI Report
The annual State of AI report, now in its sixth year, offers insights and predictions for the industry; its compilation is led by Nathan Benaich, an investor at Air Street Capital. The 2023 edition highlights the continued dominance of OpenAI’s GPT-4, which remains the most powerful large language model (LLM) worldwide, surpassing classic benchmarks and even human performance in specific evaluations. However, the increasing power and flexibility of cutting-edge AI systems make direct comparisons harder, and the industry is consequently shifting towards “vibes-based” evaluations as traditional benchmarks become less definitive.
Opacities in AI Research
The report highlights the diminishing culture of open sharing among AI companies regarding state-of-the-art research. OpenAI, Google, and Anthropic have become less forthcoming with details about their models’ architectures. With higher economic stakes and mounting safety concerns surrounding AI, traditionally open companies have shifted towards greater opacity. While the report notes that this trend is not uniform across the industry, it reflects growing caution about what information is shared, and with whom.
Predictions for 2024
- A Hollywood-grade production utilizes generative AI for visual effects.
- An investigation takes place involving a generative AI media company and its potential misuse during the 2024 U.S. election.
- A group invests over $1 billion to train a single large-scale model as part of the GenAI scaling craze.
- Global AI governance experiences limited progress, primarily relying on high-level voluntary commitments.
- An AI-generated song achieves a spot in either the Billboard Hot 100 Top 10 or the Spotify Top Hits 2024.
- The E.U.’s AI Act faces challenges related to enforcement and interpretation, hindering widespread adoption as an AI regulatory model.
Editor’s Notes: Insights and Perspectives
The 2023 State of AI report highlights the lack of concrete progress in AI regulation despite growing safety concerns. It documents the dominance of Big Tech companies in the AI sector and the shift towards closed-source capabilities that is shaping the AI landscape. Although GPT-4 retains its position as the most powerful language model, the report emphasizes how difficult it has become to compare advanced AI systems accurately. As the industry evolves, transparency in AI research is increasingly limited, necessitating a thoughtful balance between innovation and safety. Looking ahead to 2024, the report predicts several noteworthy developments, including generative AI’s arrival in Hollywood-grade visual effects and continued limited progress in global AI governance.