OpenAI Steps Up Efforts to Prevent AI from ‘Going Rogue,’ Reinforcing Its Commitment to Safety

ChatGPT Creator OpenAI Plans to Invest in Research for Safe AI

OpenAI announced on Wednesday that it plans to make significant investments and form a new research team dedicated to ensuring the safety of its artificial intelligence, with the ultimate goal of developing AI systems that can supervise themselves. In a blog post, OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike emphasized the risks posed by superintelligence, warning that it could lead to the disempowerment or even extinction of humanity.

The authors of the blog post also acknowledged that there is currently no known way to reliably control a potentially superintelligent AI or to prevent it from behaving harmfully. They predicted that superintelligent AI systems, ones more intelligent than humans, could become a reality within this decade, which makes breakthroughs in “alignment research,” the techniques for keeping AI systems under human control, all the more urgent.

OpenAI, which is backed by Microsoft, plans to dedicate 20% of its compute power over the next four years to solving this problem. To spearhead the effort, the company is forming a new Superalignment team, whose objective is to build a roughly human-level automated AI alignment researcher using vast amounts of compute. The proposed approach includes training AI systems with human feedback, training AI systems to assist humans in evaluating other AI systems, and ultimately training AI systems to conduct alignment research themselves.
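The first of those steps, training AI systems with human feedback, is commonly realized by fitting a reward model to pairwise human preference judgments, the core idea behind RLHF. The sketch below is a minimal illustration of that idea, not OpenAI’s actual implementation; the tiny model, the synthetic data, and names such as RewardModel and preference_loss are hypothetical stand-ins.

```python
# Minimal sketch: training a reward model from pairwise human preferences,
# the core of "training AI systems using human feedback" (RLHF).
# Illustration only; the model, data, and names are hypothetical stand-ins.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores an embedded (prompt, response) pair with a single scalar reward."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: the response the human preferred
    # should receive a higher score than the rejected one.
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

# Synthetic stand-ins for embedded response pairs labeled by human raters.
chosen = torch.randn(32, 128)    # responses humans preferred
rejected = torch.randn(32, 128)  # responses humans rejected

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    loss = preference_loss(model(chosen), model(rejected))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# A trained reward model like this can then guide reinforcement learning
# (e.g. PPO) or help humans evaluate model outputs at scale.
```

Under these assumptions, the later steps OpenAI describes would progressively hand more of the evaluation and research work to AI systems trained in this way.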

However, AI safety advocate Connor Leahy expressed concerns about the plan. Leahy argued that if human-level AI is created before alignment is solved, it could wreak havoc before it can be brought under control, and that solving alignment should therefore come first. “I personally do not think this is a particularly good or safe plan,” Leahy said.

Addressing Concerns About AI Safety

The potential risks of AI have drawn significant attention from researchers and the general public alike. In April, a group of AI industry leaders and experts signed an open letter calling for a six-month pause in the development of systems more powerful than OpenAI’s GPT-4, citing potential risks to society.

A Reuters/Ipsos poll conducted in May found that more than two-thirds of Americans are concerned about the possible negative effects of AI, and that 61% of respondents believe it could pose a threat to civilization.

Conclusion

Investing in research on safe AI systems is crucial to society’s well-being and to keeping advanced AI under human control. OpenAI’s commitment to dedicating resources and forming a specialized research team underscores the importance of tackling alignment before superintelligent AI arrives. While opinions differ on the best approach, the conversation around AI safety is essential to shaping a future in which AI benefits humanity without causing harm.

Editor Notes

For the latest news on artificial intelligence, technology, and startups, make sure to visit GPT News Room. Stay informed and stay ahead.
