
The Future of AI Profits: How to Distribute the Windfall

There’s a lot of money in AI. That’s not just something that startup founders rushing to cash in on the latest fad believe; some very reputable economists are predicting a massive boom in productivity as AI use takes off, buoyed by empirical research showing tools like ChatGPT boost worker output. But while previous tech founders such as Larry Page or Mark Zuckerberg schemed furiously to secure as much control over the companies they created as possible — and with it, the financial upside — AI founders are taking a different tack, and experimenting with novel corporate governance structures meant to force themselves to take nonmonetary considerations into account.

Demis Hassabis, the founder of DeepMind, sold his company to Google in 2014 only after the latter agreed to an independent ethics board that would govern how Google uses DeepMind’s research. (How much bite the board has had in practice is debatable.) ChatGPT maker OpenAI is structured as a nonprofit that owns a for-profit arm with “capped” profits: first-round investors stop earning once their shares have multiplied in value a hundredfold, with profits beyond that point flowing to OpenAI’s nonprofit. A 100x cap may sound absurdly generous, but consider that venture capitalist Peter Thiel invested $500,000 in Facebook and earned over $1 billion when the company went public, a return of more than 2,000x. If OpenAI is even a tenth that successful, the excess profits flowing to the nonprofit would be enormous. Meanwhile, Anthropic, which makes the chatbot Claude, is handing control of a majority of its board seats to a trust composed not of shareholders but of independent trustees meant to enforce a focus on safety ahead of profits.

AI Self-Regulation

Those three companies, plus Microsoft, got together on Wednesday to start a new organization meant to self-regulate the AI industry. I don’t know which of these models, if any, will work — meaning produce advanced AI that is safe and reliable. But I have hope that AI founders’ hunger for new governance models could maybe, possibly, if we’re very lucky, result in the potentially enormous economic gains from the technology being broadly distributed.

Where does the AI windfall go?

There are three broad ways the profits reaped by AI companies could make their way to the general public. The first, and most important over the long term, is taxes: There are many ways to tax capital income, like AI company profits, and then redistribute the proceeds through social programs. The second, considerably less important, is employee charity. Anthropic in particular is big on encouraging this, offering a 3-to-1 match on donations of shares in the company, up to 50 percent of an employee’s shares. That means that if an employee who earns 10,000 shares a year donates half of them, the company will donate another 15,000 shares on top of that. The third is if the companies themselves decide to donate a large share of their profits.
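The match arithmetic above can be sketched in a few lines of Python. This is an illustrative model, not Anthropic’s actual policy mechanics: the function name is invented, and I’m assuming the 50 percent cap applies to the shares eligible for matching.

```python
def matched_shares(shares_earned: int, shares_donated: int) -> int:
    """Company match on donated shares: 3-to-1, applied to donations
    up to 50 percent of the shares the employee earned (assumed cap)."""
    eligible = min(shares_donated, shares_earned // 2)
    return eligible * 3

# The article's example: earn 10,000 shares, donate 5,000 of them,
# and the company donates another 15,000 on top.
print(matched_shares(10_000, 5_000))  # -> 15000
```

Note that under this reading, donating more than half of one’s shares earns no additional match, since only the first 50 percent is eligible.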

“The Windfall Clause”

This was the key proposal of a landmark 2020 paper called “The Windfall Clause,” released by the Centre for the Governance of AI in Oxford. The six authors notably include several figures who are now senior governance officials at leading labs: Cullen O’Keefe and Jade Leung are at OpenAI, and Allan Dafoe is at Google DeepMind (the other three are Peter Cihon, Ben Garfinkel, and Carrick Flynn). The idea is simple: The clause is a voluntary but binding commitment that AI firms could make to donate a set percentage of their profits in excess of a certain threshold to a charitable entity. They suggest the thresholds be based on profits as a share of gross world product (the entire world’s economic output). If AI is a truly transformative technology, then profits of this scale are not inconceivable. The tech industry has already been able to generate massive profits with a fraction of the workforce of past industrial giants like General Motors; AI promises to repeat that success while also completely substituting for some forms of labor, turning what would have been wages in those jobs into revenue for AI companies. If that revenue is not shared somehow, the result could be a surge in inequality.

Setting the Thresholds

In an illustrative example, not meant as a firm proposal, the authors of “The Windfall Clause” suggest donating 1 percent of profits between 0.1 percent and 1 percent of the world’s economy; 20 percent of profits between 1 and 10 percent; and 50 percent of profits above that. Out of all the companies in the world today — up to and including firms with trillion-dollar valuations like Apple — none have profits high enough to reach 0.1 percent of gross world product. Of course, the specifics require much more thought, but the point is not to replace taxes for normal-scale companies, but to set up obligations for companies that are uniquely and spectacularly successful. The proposal also doesn’t specify where the money would actually go. Choosing the wrong distribution mechanism could backfire badly, the authors note, and the questions of how to distribute are innumerable: “For example, in a global scheme, do all states get equal shares of windfall? Should windfall be allocated per capita? Should poorer states get more or quicker aid?”
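The tiers above work like marginal tax brackets, so a company’s obligation is computed slice by slice. Here is a minimal sketch: the tier boundaries and rates are the paper’s illustrative numbers, but applying them marginally, the function name, and the rough $100 trillion figure for gross world product are my assumptions.

```python
GROSS_WORLD_PRODUCT = 100e12  # rough assumption: ~$100 trillion

def windfall_obligation(profit: float, gwp: float = GROSS_WORLD_PRODUCT) -> float:
    """Donation owed on annual profits under the illustrative tiers,
    applied bracket-by-bracket like a marginal tax schedule."""
    brackets = [
        (0.001 * gwp, 0.00),   # below 0.1% of GWP: nothing owed
        (0.01 * gwp, 0.01),    # 0.1%-1% of GWP: donate 1% of this slice
        (0.10 * gwp, 0.20),    # 1%-10% of GWP: donate 20% of this slice
        (float("inf"), 0.50),  # above 10% of GWP: donate 50% of this slice
    ]
    owed, lower = 0.0, 0.0
    for upper, rate in brackets:
        if profit > lower:
            owed += (min(profit, upper) - lower) * rate
        lower = upper
    return owed
```

For a hypothetical company earning $2 trillion in profit (2 percent of the assumed GWP), the schedule would owe 1 percent of the $0.9 trillion slice between the 0.1 percent and 1 percent thresholds, plus 20 percent of the $1 trillion slice above the 1 percent threshold — about $209 billion, while every real company today falls below the first threshold and owes nothing.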

A Global UBI

I won’t pretend to have given the setup of windfall clauses nearly as much thought as these authors, and when the paper was published in early 2020, OpenAI’s GPT-3 hadn’t even been released. But I think their idea has a lot of promise, and the time to act on it is soon. If we wait until AI really is a transformative technology, with companies earning profits on the order of 1 percent or more of the world economy, the cat will already be far out of the bag. Such a company would presumably fight like hell against any proposal to distribute its windfall equitably across the world, and would have the resources and influence to win. But right now, when such profits are purely speculative, companies would be giving up little by signing on. And if AI isn’t that big a deal, then at worst those of us advocating these measures will look foolish. That seems like a small price to pay.

My suggestion for distribution would be not to attempt to find hyper-specific high-impact opportunities, like donating malaria bednets or giving money to anti-factory farming measures. We don’t know enough about the world in which transformative AI develops for these to reliably make sense; maybe we’ll have cured malaria already (I certainly hope so). Nor would I suggest outsourcing the task to a handful of foundation managers appointed by the AI firm. That’s too much power in the hands of an unaccountable group, too tied to the source of the profits.

Instead, let’s keep it simple.

The windfall should be distributed…

