Elon Musk sues OpenAI for prioritizing profits over the good of humanity

The businessman accused the company of no longer fulfilling its original nonprofit mission by partnering with Microsoft for $13 billion.

Elon Musk is suing OpenAI and its CEO, Sam Altman. The businessman claims that the company behind ChatGPT is no longer fulfilling its original nonprofit mission by partnering with Microsoft for $13 billion and is keeping the code for its new artificial intelligence products secret.

Musk co-founded OpenAI in 2015 but left its board in 2018, later launching his own artificial intelligence company, xAI. The lawsuit, filed this Thursday in California state court, claims that the company and its partnership with Microsoft violated OpenAI's founding agreement, which would constitute a breach of contract. Musk is also asking those involved to return the profits they received from the arrangement.


"OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft. Under its new board, it is not just developing but is actually refining an AGI to maximize profits for Microsoft, rather than for the benefit of humanity," the lawsuit says.

The lawsuit describes several cases in which the risks of AI are evident. It stresses that Musk and OpenAI's co-founders agreed that the technology they developed would be used solely for the benefit of humanity. According to Musk, that is no longer the case.

"It could have catastrophic consequences for humanity"

For its part, The Wall Street Journal recalled that "Musk has said for years that poorly built AI could have catastrophic consequences for humanity. Since OpenAI’s ChatGPT system became a viral sensation in 2022, Musk has criticized it for being too politically correct and warned it could lead AI to become too powerful for humans to control."

Meanwhile, Sam Altman, OpenAI's CEO, has also warned of the risks of developing artificial intelligence. However, he pointed to subtle "societal misalignments," rather than rogue machines, as the likeliest source of dangerous consequences.

"I'm not that interested in the killer robots walking on the street direction of things going wrong. I'm much more interested in the very subtle societal misalignments, where we just have these systems out in society and through no particular ill intention, things just go horribly wrong," Altman said in a statement collected by Axios.