Part 1/3:
The Scaling Hypothesis and the Race to the Top
As a former vice president of research at OpenAI, Dario Amodei has a unique perspective on the evolution of the AI industry. He recounts how his time at OpenAI, particularly his interactions with Ilya Sutskever, helped solidify his belief in the "scaling hypothesis" - the idea that as AI models are scaled up, they can learn to solve increasingly complex problems, often in surprising ways.
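For context, the scaling hypothesis is often associated with empirical scaling laws, under which a model's test loss falls smoothly as a power law in parameters, data, and compute. Here is a minimal illustrative sketch in Python; the constants roughly follow those reported by Kaplan et al. (2020) for parameter scaling and are not from this interview:

```python
# Illustrative power-law scaling curve: test loss L(N) ~ (N_c / N)**alpha.
# Constants roughly follow Kaplan et al. (2020) for non-embedding parameter
# count N; treat them as illustrative, not authoritative.
def scaling_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Approximate language-model test loss as a power law in parameter count."""
    return (n_c / n_params) ** alpha

# Loss declines smoothly, with no obvious ceiling, as parameters grow 1000x.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> loss ~ {scaling_loss(n):.2f}")
```

The surprising empirical finding behind the hypothesis is exactly this smoothness: capability keeps improving predictably with scale rather than plateauing.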
Amodei describes how this realization, combined with a focus on AI safety and interpretability, drove much of the research direction at OpenAI during his tenure. He and his collaborators, many of whom later became co-founders of Anthropic, worked on projects like GPT-2, GPT-3, and "Reinforcement Learning from Human Feedback" in an effort to balance the power of scaling with the need for safety and transparency.
Leaving OpenAI and Starting Anthropic
[...]
Part 2/3:
Amodei explains that his decision to leave OpenAI was not due to disagreements over commercialization or the Microsoft deal, as has been rumored. Rather, it was about having a clear vision for how to responsibly develop and deploy powerful AI systems in a way that builds trust with the public and the research community.
"If you have a vision for how to do it, you should go off and you should do that vision. It is incredibly unproductive to try and argue with someone else's vision."
Amodei sees Anthropic as a "clean experiment" in putting this vision into practice. He acknowledges that Anthropic, like any organization, is imperfect, but believes that striving for better practices, even imperfectly, is better than giving up. The goal is to create a "race to the top" where companies compete to adopt the best safety and transparency practices, rather than a "race to the bottom" that could lead to catastrophic outcomes.
The Importance of a Race to the Top
[...]
Part 3/3:
Amodei argues that the key is not for any one company to "win," but for the entire industry to collectively improve its practices. He believes that when companies adopt good practices, others will follow, either through imitation or competitive pressure. The goal should be to align the incentives of the industry as a whole, rather than focusing on individual company rivalries.
"The point isn't to be virtuous, the point is to get the system into a better equilibrium than it was before."
Amodei sees this race to the top as the best way to ensure that the development of powerful AI systems benefits humanity as a whole. While individual companies and leaders will make mistakes, the important thing is to keep striving for improvement and to create an ecosystem where the best practices are widely adopted.
5 years of OpenAI 😳 this project didn't start now, did it? They have scaled so high and are now getting profitable. The models have learned so much, and soon they'll create their own data, learn from it, and keep moving.
I think safety and scaling were both primary focuses, but now safety has been pushed out the window. Elon tried suing the company over that.
I think most of the early coworkers who left did so for almost the same reason: the company has changed its initial vision, goals, and the principles it was founded on. I can blame Altman; things change based on the season and investor interest.
I see why he left, but OpenAI seems to be the next Microsoft or Meta, so it really took guts to make a decision like that. I personally wouldn't leave unless I was going to xAI.