OpenAI's AGI Czar Quits, Saying the Company Isn't Ready For What It's Building
"The world is also not ready."
OpenAI's researcher in charge of making sure the company (and the world) is prepared for the advent of artificial general intelligence (AGI) has resigned — and is warning that nobody is ready for what's coming next.
In a post on his personal Substack, the firm's newly resigned AGI readiness czar Miles Brundage said quitting his "dream job" after six years was difficult. He said he's leaving because he feels a great responsibility regarding the purportedly human-level artificial intelligence he believes OpenAI is ushering into existence.
"I decided," Brundage wrote, "that I want to impact and influence AI's development from outside the industry rather than inside."
When it comes to being prepared to handle the still-theoretical tech, the researcher was unequivocal.
"In short, neither OpenAI nor any other frontier lab is ready," he wrote, "and the world is also not ready."
After that bold declaration, Brundage went on to say that he's shared his outlook with OpenAI's leadership. He added, for what it's worth, that he thinks "AGI is an overloaded phrase that implies more of a binary way of thinking than actually makes sense."
Rather than a clean before-and-after divide, the researcher suggested that there are, to quote many a hallucinogen enthusiast, levels to this shit.
From Brundage's post:
The TL;DR is:
I want to spend more time working on issues that cut across the whole AI industry, to have more freedom to publish, and to be more independent;
I will be starting a nonprofit and/or joining an existing one and will focus on AI policy research and advocacy, since I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so;
Some areas of research interest for me include assessment/forecasting of AI progress, regulation of frontier AI safety and security, economic impacts of AI, acceleration of beneficial AI applications, compute governance, and overall “AI grand strategy”;
I think OpenAI remains an exciting place for many kinds of work to happen, and I’m excited to see the team continue to ramp up investment in safety culture and processes;
I’m interested in talking to folks who might want to advise or collaborate on my next steps.
I’ve been excited about OpenAI as an organization since it was first announced in December 2015. After the announcement, I stayed up all night writing down thoughts on the significance of it. Even before that, around 12 years ago, I decided to devote my life to something in the rough vicinity of OpenAI’s mission (ensuring AGI benefits all of humanity).
To be clear, I don’t think this [that no frontier lab is ready for AGI] is a controversial statement among OpenAI’s leadership, and notably, that’s a different question from whether the company and the world are on track to be ready at the relevant time (though I think the gaps remaining are substantial enough that I’ll be working on AI policy for the rest of my career).
Whether the company and the world are on track for AGI readiness is a complex function of how safety and security culture play out over time (for which recent additions to the board are steps in the right direction), how regulation affects organizational incentives, how various facts about AI capabilities and the difficulty of safety play out, and various other factors.
The Economic Research team, which until recently was a sub-team of AGI Readiness led by Pamela Mishkin, will be moving under Ronnie Chatterji, OpenAI’s new Chief Economist. The remainder of the AGI Readiness team will be distributed among other teams, and I’m working closely with Josh Achiam on transfer of some projects to the Mission Alignment team he is building.