Key points: OpenAI plans to recruit a 'Director of Safety and Risk Preparedness' at an annual salary of $555,000. OpenAI CEO Sam Altman called the position a 'key role' at the company's current stage, responsible for 'helping the world address the potential negative impacts of AI.'
Against the backdrop of rapid advancements in artificial intelligence (AI) capabilities and growing concerns over associated risks, ChatGPT developer OpenAI is increasing its investment in safety and risk governance.
OpenAI recently posted a job opening for a 'Director of Safety and Risk Preparedness,' offering an annual salary of $555,000 plus equity incentives on top of the base salary. The position will be directly responsible for assessing and addressing systemic risks that AI may pose in areas such as mental health, cybersecurity, and biosecurity.
OpenAI CEO Sam Altman stated on social media that this position is a 'key role' at the company's current stage, tasked with 'helping the world address the potential negative impacts of AI.' He also acknowledged that it would be an extremely high-pressure job, with the new hire needing to confront the most complex and uncertain risk issues almost immediately after starting.
According to the job posting, the core responsibilities of the position include continuously evaluating novel risks posed by frontier AI capabilities, developing corresponding mitigation and prevention mechanisms, and reducing the potential for misuse both at the product level and in broader societal contexts. Here, 'frontier capabilities' refers primarily to next-generation model capabilities that are not yet fully understood but could have outsized effects.
Notably, this is not the first time the position has existed: several previous executives who held the role had relatively short tenures, reflecting the technical uncertainty, public pressure, and governance complexity it entails.
This high-profile recruitment comes amid frequent warnings about risks within the AI industry. Recently, several senior executives from technology companies have publicly expressed concerns about the risk of losing control over AI. Microsoft’s AI head pointed out that the evolution of AI capabilities has far outpaced the adaptability of governance systems; leaders of other top research institutions have also warned that, without effective constraints, advanced AI systems could 'deviate from expected trajectories and cause significant harm to humanity'.
At the regulatory level, there remains no unified, enforceable AI governance framework globally. Several academics have noted that current constraints on AI fall significantly short of its potential impact, leaving companies largely in a state of 'self-regulation'.
OpenAI has acknowledged that its latest models show significant improvements in cyberattack-related capabilities compared with several months ago, and it anticipates this trend will continue in future models. The statement has raised market concerns that 'capabilities are expanding faster than safety controls'.
Beyond cybersecurity and biosecurity, AI's potential impact on mental health has also drawn regulatory and legal attention. OpenAI currently faces multiple lawsuits related to user mental health, with some cases alleging that its products failed to effectively identify or intervene in users' mental-health risks in specific circumstances.
In response, OpenAI stated that it is continuously refining its model training methods to improve the system's ability to recognize signals of psychological or emotional distress during conversations and to guide users toward professional support in the real world.
OpenAI's current valuation is reported to be approximately $500 billion. Alongside the high salary, the posting includes an undisclosed equity component. As the company's scale and influence continue to expand, so do its pressures around safety, compliance, and social responsibility.
Industry insiders pointed out that as AI transitions from the technological innovation phase to the large-scale application phase, safety governance is evolving from a 'peripheral issue' into a core variable affecting corporate valuations and long-term development. The high-salary recruitment of safety directors reflects that leading AI companies have begun to view risk governance as a strategic investment rather than merely a compliance cost.
Editor: Stephen