This article is an on-site version of the Moral Money newsletter. Premium subscribers can sign up here to have the newsletter delivered three times a week. Standard subscribers can upgrade to Premium here, or explore all FT newsletters.
Visit our Moral Money hub for all the latest ESG news, opinion and analysis from around the FT
Welcome back.
The struggle to balance profit and purpose is tough for many business leaders. It must be especially hard when you believe your industry could cause human extinction.
How do artificial intelligence companies handle this delicate dynamic? Read on, and let us know your thoughts at moralmoneyreply@ft.com.
Corporate Governance
Weighing profit and humanity at AI start-ups
Few entrepreneurs in history can have been as sure that their work could shake the world as today's crop of AI pioneers. To reassure the public, and perhaps themselves, some of the sector's biggest players have developed extraordinary governance structures intended to subordinate commercial interests to those of humanity.
It is far from clear, however, that these systems will prove robust when those two priorities collide. And the tensions have already proved difficult to manage, as we can see from recent developments at OpenAI, the world's best-known and most highly valued AI start-up. It is a complicated saga, but it offers an important window into a corporate governance debate of huge significance.
OpenAI was founded in 2015, by a group that included entrepreneur Sam Altman, as a non-profit research body funded by donations from Elon Musk and others. But within a few years, Altman concluded that the mission would require more expensive computing power than could be funded through philanthropy alone.
So in 2019 OpenAI set up a for-profit entity with an unusual structure. Commercial investors, of which Microsoft quickly became easily the biggest, had their profits capped, with all returns above that level flowing to the non-profit. Crucially, the non-profit's board continued to control the for-profit's work, with the humanity-focused mission taking priority over investors' returns.
Investors were warned that "it would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation". Even so, Microsoft and others proved willing to provide the funds that enabled OpenAI to stun the world with the launch of ChatGPT.
Lately, however, investors have voiced concern about this set-up. Japan's SoftBank, in particular, has pushed for structural reform.
In December, OpenAI moved to address these concerns with a plan that, while innocuously worded, would have gutted its restrictive governance structure. The non-profit would lose control over the for-profit business. Instead, it would rank as just another voting shareholder alongside other investors, using its eventual income from the business to pursue "charitable initiatives in areas such as healthcare, education and science".
The plan prompted scathing open letters from assorted AI luminaries, urging government officials to act against what they called a breach of OpenAI's voluntarily assumed legal constraints. Crucially, they said, the December plan would eliminate the "enforceable duty owed to the public" to ensure that AI benefits humanity, which had been baked into the organisation's legal structure from the start.
This week, OpenAI published a revised plan that addresses many of the critics' concerns. The key climbdown concerns the powers of the non-profit board, which will retain overall control of the for-profit entity. But OpenAI still plans to press ahead with removing the profit cap for commercial investors.
It remains to be seen whether this compromise will prove sufficient to satisfy investors such as Microsoft and SoftBank. In any case, OpenAI can reasonably argue that it retains far tighter constraints on its work than arch-rival DeepMind. When that London-based company sold itself to Google in 2014, its founders secured a promise that their work would be overseen by a legally independent ethics board, as Parmy Olson recounts in her book Supremacy. But the plan was soon dropped. "I think we probably had a bit of an idealistic view," DeepMind co-founder Demis Hassabis told Olson.
Early-stage idealism is still to be found at Anthropic, a start-up founded in 2021 by former OpenAI employees. Anthropic has created a Long-Term Benefit Trust, with five independent trustees charged with promoting the interests of humanity at large. Within four years, the trust will have the authority to appoint a majority of Anthropic's board.
Anthropic is constituted as a public benefit corporation, meaning its directors are legally required to consider the interests of wider society as well as those of shareholders. Musk's xAI is also a PBC, and OpenAI's for-profit business will become one under the proposed restructuring.
In practice, however, the PBC structure has rarely proved a binding constraint. Only significant shareholders, not ordinary members of the public, can take action against such companies for breaching their fiduciary duties to wider society.
And while the preservation of non-profit control at OpenAI may look like a big victory for AI safety advocates, it is worth remembering what happened in November 2023. After the board fired Altman over concerns related to OpenAI's guiding principles, it faced a revolt by staff and investors that ended with Altman's reinstatement and the departure of most of the directors who had ousted him.
In short, the powers of a non-profit board with a duty to humanity have been tested, and shown to be weak.
Two of those ousted directors warned last year, in an op-ed for The Economist, that voluntary constraints at AI start-ups "will certainly not withstand the pressure of profit incentives".
"For AI to benefit everyone, governments must now start building effective regulatory frameworks," wrote Helen Toner and Tasha McCauley.
The EU has made a strong start on that front with its groundbreaking AI Act. In the US, however, tech figures such as Marc Andreessen have campaigned hard against AI regulation, and the Trump administration has shown little appetite for strict controls.
The case for regulation is reinforced by growing evidence of AI's potential to exacerbate racial and gender inequality in the labour market and beyond. The longer-term risks posed by increasingly powerful AI could prove graver still. Many key figures in the sector, including Altman and Hassabis, signed a 2023 statement declaring that "mitigating the risk of extinction from AI should be a global priority".
If AI leaders are deluded about the power of their inventions, there may be nothing to worry about. But as investment in the sector continues to mushroom, that would be a rash assumption.
Smart reads
Danger zone: New data shows that global warming has exceeded the 1.5C threshold in 21 of the past 22 months.
Pushing back: US officials are calling on global financial regulators to scale back a flagship climate risk project at the Basel Committee on Banking Supervision.
Recommended newsletters for you
Full Disclosure – Keeping you up to date with the biggest international legal news, from the courts to law enforcement and the business of law. Sign up here
Energy Source – Essential energy news, analysis and insider intelligence. Sign up here