Exclusive: OpenAI Plans Initiative to "Redefine the Social Contract" | Vanity Fair
OpenAI staffers may be enjoying some hard-earned time off during a company-wide spring break this week, but as proven by the new $122 billion funding announcement that rolled out just moments before this column went to press, the AI world never sleeps. The new capital arrives on the heels of last week’s report that OpenAI was preparing to release a new advanced AI model, code-named Spud. In a company-wide meeting last Tuesday, CEO Sam Altman informed the assembled staff, “Things are moving faster than many of us expected.”
These kinds of leaks inevitably inspire breathless credulity on one side and cynical takes about the endless, investment-seeking hype cycle on the other. As usual, the reality of the situation is probably somewhere in between. As one OpenAI researcher put it to me over the weekend, “All newer models are better than older ones—and models have been close to or exceeded human intelligence for quite some time.”
Next week we’ll start to get a few more hints of what OpenAI is thinking. In conjunction with the coming release of the Spud model, I’ve learned, the lab is planning to release a series of new papers and proposals outlining fresh policy ideas for the superintelligence era. An initial document, which will be released next week, is part of a broader research push, led by CEO Altman, chief futurist Joshua Achiam, and vice president of global affairs Chris Lehane, that will focus on industrial policy and solutions for economic disruptions in the age of AI.
A source familiar with the project explained that the idea is to “think bigger from a policy perspective on societal issues as tech advances toward superintelligence.” This person teased some potentially controversial “conversation starters” meant to engage a broader cross section of society in the AI debate.
Those I spoke with wouldn’t get too deep into the nitty-gritty, but did discuss the need to “rethink the social contract” and build “superintelligence that works for everyone.” That certainly sounds a lot like they’re suggesting some form of wealth redistribution (an idea billionaires famously love). But the truth is, an Altman-funded study on universal basic income concluded in 2024 with somewhat disappointing results. Researchers found that the benefits of no-strings-attached monthly payments tended to “fade out by the second and third years of the program.” Let’s hope this new research initiative has come up with a better, more concrete solution to the coming AI-driven job loss.
These developments are also intriguing in the context of the company’s streamlining pivots over the last few weeks. Last Tuesday, OpenAI announced the shuttering of its video-generation model, Sora, and the dissolution of its billion-dollar licensing deal with Disney—much to the entertainment company’s surprise. OpenAI also axed its controversial plans to release an erotic companion.
Meanwhile, the company has been reorganizing its safety-and-security efforts, and it announced that its OpenAI Foundation plans to spend $1 billion over the next year on medical research, AI resilience, and community programs. Even its product group was renamed to AGI Deployment.
These moves all seem to point to a company on the verge of…something. An IPO, which we know is scheduled for later this year? Falling irrevocably behind its competitors at Anthropic and Google? An actual technological breakthrough?
There’s also the rapidly approaching 2026 midterms—arguably the first election cycle in which AI and its ramifications will be truly top of mind for American voters. Perhaps the company has woken up to the fact that AI’s dismal popularity ratings are bound to catch up with it in the form of harsh regulation.
In general, phrases like “AI safety” and “AI risk” have become dirty words since Donald Trump took office a second time and “acceleration” became the Silicon Valley rallying cry. But the tides could be shifting back again, with some of the euphoria around Trump and his deregulatory paradigm starting to fade. In February, OpenAI poached safety researcher Dylan Scandinaro from Anthropic to lead its preparedness team, which now appears to be staffing up with roles focused on frontier biological and chemical risks, cybersecurity risks, and the ominously named “loss of control.”
Interestingly, OpenAI’s leadership has not exactly been walking in lockstep when it comes to politics. Achiam was last seen tweeting about how the “effort by the pro-AI lobby to torpedo Alex Bores will later on be widely understood as a pointless own-goal.” Many perceived that to be a slight against OpenAI president Greg Brockman, who has poured millions of dollars into a super PAC dedicated to attacking pro-regulation candidates like Bores.
OpenAI is not the only lab where the tonal shift can be felt. Anthropic experienced a huge public opinion boost following its crusade against what it argued was the Department of Defense’s overreach. (Woke is back!) Last week a leak revealed that Anthropic has been working on its own big new model release, code-named Mythos. In a statement to Fortune, a company spokesperson said that the model represents a “step change” in capabilities—with the leaked draft blog post warning of particular risks in the realm of cybersecurity due to the model’s sophistication.
Sabina Nong, an AI safety investigator at the Future of Life Institute, said she’s suspicious of this “renewed kind of narrative building” at the frontier labs around “catastrophic risks and disruptive forces.” Meanwhile, she sees a troubling countertrend: “While people are talking about it so much more often, the fact is that we see even less of a binding commitment from the companies.”
It is hard to escape the feeling that even in leaked chats and memos, these labs are performing for an audience. As the AI industry’s favorite anonymous account wrote last week, “it’s a difficult position to be in when all your private comms are de facto public comms…. ‘manhattan projects’ were not really in the option space, you just have to consistently out accelerate your adversaries in public.”