Just a few days ago, people were worried about the future of OpenAI after the company's board of directors fired its CEO Sam Altman out of the blue. Now a new but unconfirmed report claims that before the board fired Altman, its members received a letter from a number of OpenAI team members warning that a recent AI breakthrough at the company may raise serious safety concerns.
Reuters reports, via unnamed sources, that this AI discovery was mentioned in the letter by a group of OpenAI employees as one of several issues they brought to the board. Following a contentious few days after his firing, Altman was brought back earlier this week, and most of the company's board members departed in favor of a new and larger board.
Reuters admits that it has not actually read the reported letter and that it was unable to get a response from the staff members who wrote it. OpenAI also declined to comment. However, Reuters does claim, again via unnamed sources, that a memo written by the company's chief technology officer Mira Murati mentions a project called Q* (pronounced Q-Star), which could be the AI breakthrough referenced in that letter.
Reuters offers more unconfirmed info on this Q* project, claiming that it was able to solve mathematical problems on its own. It added:
In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.
As usual with reports of this kind that rely on unnamed sources, take this one with a grain of salt.