What is Q*, the secret OpenAI project that could "threaten humanity"?

There is still no official explanation as to why OpenAI's board of directors fired its CEO, Sam Altman, only to reinstate him just days later.

After more than five hectic days in which OpenAI's future was genuinely in question, many questions remain about what really happened in the very brief dismissal of its CEO, Sam Altman, who returned triumphantly to his post on Tuesday after the vast majority of employees stood in solidarity with him and backed a threat of mass resignation.

Now Altman, having survived the attempt to oust him, will work with a renewed board of directors made up of Bret Taylor (chair), Larry Summers and Adam D'Angelo.

Events came to a head on Friday, when the previous board of directors fired Altman in a harsh but vague statement accusing him of not being sufficiently candid with board members.

Even with that outcome, the initial question remains: why would the previous board of directors fire, in an unexpected move, the CEO of the world's most important artificial intelligence company over communication failures?

Some media reports raise the possibility that the dismissal was not simply due to that reason but instead arose from safety concerns related to Q*, a secret OpenAI project that, according to company researchers, could threaten humanity.

What exactly is Q*?

On Tuesday, after the charismatic Altman returned to the company as CEO, having briefly agreed to join Microsoft in the interim, two anonymous sources told Reuters that several company researchers had written a letter to the board of directors before Altman was fired.

These researchers had warned of a powerful artificial intelligence discovery that, they explained, could threaten humanity: Q*, pronounced Q-Star.

According to the sources, this discovery was a key factor in the previous board's decision to dismiss Altman, a move that sent shockwaves through Silicon Valley and the technology world.

"The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences," reads the Reuters report.

According to the report, some researchers believe Q* could be a breakthrough in the search for what is known as "artificial general intelligence" (AGI), or, as OpenAI defines it, autonomous systems that outperform humans in most economically valuable tasks.

Reuters explained that, given vast computing resources, the new model was able to solve basic mathematical problems at the level of grade-school students, generating considerable optimism among the researchers.

However, the tool's power also raises concern, since researchers consider mathematics a frontier in the development of these artificial intelligence technologies.

As Reuters detailed, these technologies are currently well suited to writing or translating languages by statistically predicting the next word, which is why their answers to the same question can vary and responses to specific queries can be rather subjective. Mathematics, by contrast, has only one correct answer, so an AI that can reliably perform mathematical calculations would suggest a reasoning capacity closer to human intelligence, one that could give an AGI the ability to generalize, learn, understand and adapt impressively.
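To illustrate the distinction Reuters draws, consider a minimal Python sketch; the vocabulary and probabilities below are invented purely for illustration. Sampling the next word from a probability distribution can produce a different answer on each run, while an arithmetic question admits exactly one correct answer.

    import random

    # Toy next-token distribution: a language model assigns probabilities
    # to candidate next words, so answers to open-ended prompts can vary.
    # (The words and probabilities here are invented for illustration.)
    next_token_probs = {"city": 0.45, "capital": 0.30, "country": 0.15, "river": 0.10}

    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())
    print(random.choices(tokens, weights=weights)[0])  # may differ from run to run

    # Arithmetic, by contrast, has exactly one correct answer: a system
    # that reliably produces it cannot lean on statistical plausibility alone.
    print(2 + 3)  # always 5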

For this reason, the researchers raised with OpenAI's previous board of directors their concerns about the potential danger this type of intelligence poses to humanity, according to the anonymous sources who spoke to Reuters; the sources did not specify the exact safety issues raised in the letter.

Until now, little information about the Q* project has been available. However, according to the report, OpenAI acknowledged the existence of this technology to its employees and warned them that some stories about it would be published in the media.

More concerns about Q*

Last Monday, Ilya Sutskever, one of the prominent members of OpenAI's previous board of directors (since removed from that position), published a post on the social network X regarding the controversy: "I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."

That post caught the attention of X and Tesla owner Elon Musk, who responded: "Why did you take such a drastic action? If OpenAI is doing something potentially dangerous to humanity, the world needs to know."

On Wednesday, Musk published a post on X citing the Reuters article about Q* accompanied by the message: "Extremely concerning!"

Altman vs. the previous board of directors and Silicon Valley's support for the CEO

The New York Times shed light on the complex relationship between Sam Altman and the previous board of directors in the lead-up to his sudden dismissal last Friday.

According to the report, while Altman focused more on the company's expansion and commercialization, board members wanted to balance that drive with stronger safety measures in the technology the company develops.

In fact, one of the last key disputes between Altman and a board member concerned safety, and it proved decisive in the momentary dismissal of OpenAI's CEO.

According to the NYT, Helen Toner, a board member and director of strategy at Georgetown University's Center for Security and Emerging Technology, co-wrote a paper for the Georgetown center that, in Altman's view, criticized OpenAI's measures to keep its AI technologies safe. In the same paper, Toner praised Anthropic, currently OpenAI's biggest rival, for its safety efforts.

Altman chose to reprimand Toner in an email, alleging that her paper was dangerous for the company, especially since the Federal Trade Commission is investigating OpenAI over the data used to develop its technologies.

According to the Times, Toner did not accept Altman's rebuke and defended her paper as an academic document addressing the various challenges faced by companies and countries developing artificial intelligence.

The disagreement between Toner and Altman created serious friction, according to the NYT, and OpenAI's senior leaders, including Sutskever himself, debated whether the board member should be removed.

However, against all odds, Sutskever, who according to the Times worries that AI could destroy humanity, sided with Toner, and the board of directors ultimately opted to fire Altman instead.

The New York paper also revealed that this was not the first time Altman had faced an attempted ouster.

In 2021, Sutskever and Altman had already clashed after another senior artificial intelligence scientist left OpenAI for Anthropic.

"That scientist and other researchers went to the board to try to push Mr. Altman out. After they failed, they gave up and departed," three anonymous sources confirmed to the NYT.

Although the board of directors did take the step of removing Altman on this occasion, a Financial Times report revealed how Silicon Valley's power brokers and several venture capital investors worked actively in recent days to bring OpenAI's charismatic CEO back.

According to the report, the former board of directors lacked Altman's ability to sway public opinion and shape the narrative in his favor. Moreover, the internal and external pressures were too great to refuse to bring Altman back and risk the future of OpenAI, whose board of directors, according to the company's bylaws, must ultimately safeguard the interests of humanity in general rather than those of investors.