OpenAI Announces Safety and Security Committee

OpenAI recently announced that its Safety and Security Committee, created in May to address safety concerns, will now operate as an independent oversight board. The change marks a significant shift in how the company manages the safety of its AI models.
The committee will be led by Zico Kolter, a professor at Carnegie Mellon University, and includes Adam D'Angelo, co-founder of Quora and an OpenAI board member; Paul Nakasone, a former NSA chief; and Nicole Seligman, a former Sony executive.
Their job will be to oversee the safety and security processes that guide the development and deployment of OpenAI's models. The company also said the committee recently completed a 90-day review and made recommendations for improvement, which OpenAI has published in a public blog post.
As part of its ongoing growth, OpenAI is reportedly in a funding round that could value the company at over $150 billion. Thrive Capital, a key investor, plans to contribute $1 billion, and others, including Tiger Global, Microsoft, Nvidia, and Apple, are also said to be interested. These investments would help OpenAI expand its operations and continue developing products such as ChatGPT and SearchGPT, according to NBC New York.
The committee made five primary recommendations: establishing independent governance for safety and security, enhancing security measures, collaborating with external organizations, increasing transparency about its work, and unifying the company's safety frameworks. These changes aim to address concerns, both inside and outside the company, about how OpenAI handles the risks of artificial intelligence.
One of the committee's first safety reviews covered OpenAI's latest AI model, o1, which is designed to improve reasoning and solve difficult problems. The committee assessed the safety and security measures in place ahead of o1's release, confirming that the model met the necessary standards before going public. OpenAI also said the group has the authority to delay model launches if safety concerns arise.
Sam Altman Excluded from OpenAI's New Committee
Notably, the company's CEO, Sam Altman, was excluded from the restructured committee. Over the summer, Altman faced criticism from U.S. senators who questioned OpenAI's approach to safety. An internal letter from OpenAI employees also raised alarms about a lack of whistleblower protections and insufficient oversight, and some employees felt the company was growing too fast and compromising safety as a result.
Earlier this year, former board members also expressed concerns. One stated that Altman had previously provided the board with inaccurate information about the company's safety processes.
A separate OpenAI team focused on long-term AI risks was dissolved in May after key members resigned. One of the team's leaders publicly criticized the company for focusing too much on product development and not enough on safety.
Despite these challenges, OpenAI is moving forward with ambitious growth plans. The company continues to attract investors as its commercial prospects grow, and it has sharply increased its lobbying efforts, spending $800,000 in the first half of 2024 compared with $260,000 for all of 2023.
Altman even joined the U.S. Department of Homeland Security's Artificial Intelligence Safety and Security Board, which advises on how AI can be used in critical infrastructure.
While OpenAI has pledged to address safety concerns through the committee, critics remain skeptical about whether the company can balance its commercial interests with public safety.
As OpenAI continues to expand and raise capital, its ability to maintain its original mission of developing AI that benefits humanity is being closely watched. Some former board members argue that self-governance may not be enough to protect against the pressures of profit incentives.
In the meantime, OpenAI is reportedly in talks to raise $6.5 billion in a new funding round, which could lead to further changes in the company's structure, according to TechCrunch. There is speculation that the company may abandon its hybrid nonprofit model to attract investors. That model was originally designed to cap investor returns and keep OpenAI's work aligned with its mission of creating AI that benefits all of humanity.