Several former OpenAI employees are concerned about the "reckless" race for dominance the company is pursuing

Experts say that artificial general intelligence could be as intelligent or even more intelligent than human beings.

Several former OpenAI employees sent a letter this week expressing their concern about the "reckless" race for dominance the company is pursuing in its quest to develop the next generation of artificial intelligence.

Artificial general intelligence (AGI) could be as intelligent as, or even more intelligent than, human beings. AGI could do things that humans do on a daily basis, such as writing or drawing (something generative AI already does), but it would also be able to understand the complexity and context of its actions.

This is the technology Carroll Wainwright was working on. He belonged to OpenAI's practical alignment and superalignment team, a group that makes sure the company's most advanced models are safe and aligned with human values, when he decided to resign from his job.

He was not afraid of the technology itself, which he still considers far from becoming a reality. What truly frightened him was the mentality the company was adopting toward it, as Wainwright told Vox:

Over the past six months or so, I've become more and more concerned that the incentives that push OpenAI to do things are not well set up. There are very, very strong incentives to maximize profit and the leadership has succumbed to some of these incentives at a cost to doing more mission-aligned work.

Transparency and protection: the experts' main requests

He is not the only one. Daniel Kokotajlo, a former researcher in OpenAI's governance division, also left his job after seeing the company's change in mentality. The expert, who was also one of the group's organizers, told The New York Times that the company was acting recklessly: "OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there," he said.

Both are among the experts who signed the letter voicing these concerns. Former OpenAI research engineer William Saunders and a former Google DeepMind employee were also among the signatories, who asked OpenAI and the rest of the large artificial intelligence companies to be more transparent and to apply stronger protections to this technology:

AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.

OpenAI fights back

Lindsey Held, a spokesperson for OpenAI, denied that the company had changed its values. In a statement to The New York Times, she said the company is "proud" of its work and believes in its "scientific approach to addressing risk":

We’re proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society and other communities around the world.