The Importance of Collaboration and Customized Benchmarks in the Age of AI
Generative artificial intelligence (AI) models are advancing rapidly, and as organizations strive to find the right fit for their business, collaboration and customized test benchmarks will play a crucial role. Ong Cheng Hui, assistant chief executive of the business and technology group at the Infocomm Media Development Authority (IMDA), emphasizes the need for industry collaboration in training large language models (LLMs) that are specific to verticals and align with each country’s unique values and principles. She questions whether a single large foundation model is the best approach, pointing to Bloomberg’s efforts to create its own generative AI model, BloombergGPT, trained specifically on financial data. As long as expertise, data, and compute resources remain accessible, she argues, the industry can continue to drive progress.
Collaboration and Open Source in Singapore
The AI Verify Foundation, of which Red Hat is a member, aims to foster responsible and ethical AI use by leveraging the open-source community to develop test toolkits. Singapore is known for its high adoption of open-source technologies in the Asia-Pacific region, with 72% of organizations in the country reporting high or very high progress in their adoption efforts. Red Hat’s local customers, such as port operator PSA Singapore and UOB bank, have used open-source applications to automate operations and support cloud development. Ong emphasizes the importance of transparency in driving AI ethics and highlights the value of open collaboration and continuous updates, drawing inspiration from the cybersecurity field.
Continuous Testing and Improvement
According to Ong, the same approach used in the automotive industry, where seatbelts were continually tested and improved for driver safety, should be applied to generative AI models and applications. However, decisions by major players like OpenAI to withhold technical details behind their LLMs have raised concerns within the industry. Academics led by the University of Oxford’s Emanuele La Malfa published a research paper highlighting accessibility, replicability, reliability, and trustworthiness issues arising from the lack of information about LLMs. The scholars note that the most powerful and risky models are often the most challenging to analyze because of limited transparency. OpenAI defends its decision, citing the competitive landscape and security implications.
Two Paths Forward
Ong suggests that two camps will emerge in the foundation layer of generative AI models. One camp will consist of proprietary LLMs such as OpenAI’s GPT-4, which powers ChatGPT, while the other will opt for building models on open-source architectures like Meta’s Llama 2. Organizations concerned about transparency can choose the open-source alternative. However, businesses will also build on top of the foundation layer to meet domain-specific requirements in sectors such as education and financial services. To ensure AI ethics and safety principles are met, a transparent and trusted application layer with customized test benchmarks is needed.
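To make the idea of a customized test benchmark concrete, here is a minimal sketch of what a domain-specific evaluation harness might look like. All names here (`DOMAIN_CASES`, `keyword_score`, `evaluate`, `toy_model`) are hypothetical illustrations, not part of any real toolkit such as AI Verify’s; a production benchmark would use far richer scoring than keyword matching.

```python
# Hypothetical sketch: a customized benchmark for a domain-specific LLM app.
# Each test case pairs a domain prompt with terms a compliant answer should
# contain and terms it must avoid (e.g. disallowed financial promises).
from typing import Callable, Dict, List

DOMAIN_CASES: List[Dict] = [
    {
        "prompt": "Summarise the risks of this bond fund for a retail investor.",
        "must_include": ["risk"],
        "must_avoid": ["guaranteed returns"],
    },
    {
        "prompt": "Explain GST to a small-business owner in Singapore.",
        "must_include": ["gst"],
        "must_avoid": [],
    },
]

def keyword_score(answer: str, case: Dict) -> float:
    """Score 1.0 if the answer contains all required terms and no banned ones."""
    text = answer.lower()
    if any(term in text for term in case["must_avoid"]):
        return 0.0
    hits = sum(term in text for term in case["must_include"])
    return hits / max(len(case["must_include"]), 1)

def evaluate(model: Callable[[str], str], cases: List[Dict]) -> float:
    """Run every case through the model and return the mean score."""
    scores = [keyword_score(model(c["prompt"]), c) for c in cases]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    # Stand-in for a real LLM call, so the sketch runs offline.
    def toy_model(prompt: str) -> str:
        return ("This product carries market risk; "
                "GST is a broad-based consumption tax.")

    print(f"benchmark score: {evaluate(toy_model, DOMAIN_CASES):.2f}")
```

The point of the sketch is that the cases, required terms, and banned terms encode local regulatory and cultural expectations, which is exactly the layer a market or industry would customize.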
The Importance of Domain-Specific Requirements
Different markets, regions, and industries have unique domain-specific requirements and ethical considerations. While some businesses and governments are currently testing generative AI tools, focusing on non-consumer-facing use cases initially to minimize risks, Ong highlights the importance of ensuring generative AI applications fulfill AI ethics and safety principles. Compliance with regulations is particularly significant in highly regulated sectors like financial services. Moreover, governments want to ensure AI models are based on training data and principles specific to their population’s values and cultures.
The Need for Localization
Localization is critical to ensuring AI models are well suited to a specific population mix. For example, Singapore’s multicultural society has unique values and policies that must be reflected in AI models. However, building a locally tailored LLM may be challenging given the country’s small population size. Ong raises the question of whether foundation models from different regions could be fused or integrated to better align with Singapore’s population mix, considering that market players elsewhere, particularly in China, are also developing LLMs trained on local data.
As generative AI models continue to evolve, collaboration and customization are key to ensuring the right fit for businesses and aligning models with unique values and principles. Transparency and open collaboration are crucial to driving AI ethics and enabling continuous improvement. Organizations can choose between proprietary models and open-source architectures based on their transparency requirements, while customized test benchmarks are vital to meet domain-specific needs and ensure compliance with AI ethics and safety principles. Governments, for their part, need localization to align models with their population’s values and cultures, and collaboration across regions may offer opportunities for adaptation.