Top AI safety researcher departs OpenAI, citing disagreements over the company's priorities
Jan Leike, who led alignment and superalignment research at OpenAI, announced his resignation last week. In a series of posts on social media, Leike shed light on some of the key disagreements he had with the company's leadership over its strategic direction.
A pioneer in the field of aligning advanced AI systems, Leike had high hopes when he joined OpenAI three years ago, believing it would be “the best place” to conduct this research. However, he says he reached a “breaking point” as his disagreements with the company's direction grew over time.
The departing researcher raised serious concerns that OpenAI had lost focus on safety processes and culture in favor of new product launches. He argued that with more powerful AI on the horizon, the company needs to devote more resources to security, monitoring, safeguards, and other preventive measures.
Leike highlighted the “enormous responsibility” that comes with developing smarter-than-human machines, stressing that ensuring such technologies benefit humanity should be the top priority. In his view, however, OpenAI is “long overdue” in comprehensively addressing the risks of artificial general intelligence.
By sharing his reasons for leaving the high-profile company, the AI safety leader has opened an important debate about where the priorities of organizations in this field should lie. Given his extensive experience, Leike's views could push the industry toward a “safety-first” mindset in developing and deploying advanced AI systems.