ChatGPT creator believes AI will lead humanity to a better world... or the worst

As he continues to develop artificial intelligence systems to "break capitalism," Altman acknowledges that he has a bunker prepared for apocalyptic scenarios... including an attack on humanity by his own creations.

Sam Altman, CEO of OpenAI, the company that created ChatGPT, is confident that artificial intelligence will eventually "break capitalism" and find a better and fairer socioeconomic system. However, he warns of the risks that its continued development may pose for humanity and acknowledges that he has prepared a fully equipped bunker to survive any kind of apocalyptic misfortune. Part of this preparation covers, of course, the scenario in which one of his AI systems decides that the human race is the problem and should be eradicated.

Speaking to Forbes, Altman stressed that he is not anti-capitalist. "I think capitalism is awesome. I love capitalism. Of all of the bad systems the world has, this is the best, or least bad, we've found so far. I hope we find a way better one. And I think that if AI really truly fully happens, I can imagine all these different ways that it will break capitalism."

A technology to look to the future, not the past

The head of OpenAI argues that his creation has not come to replace any other type of technology; rather, the two are different and each has its own terrain. While he acknowledges that AI programs could replace traditional search-engine organic results in the future, he believes the key is to look for new uses and possibilities rather than focusing on what already exists. "I don't think ChatGPT does [replace Search]. But I think someday, an AI system could. More than that though, I think people are just totally missing the opportunity if you're focused on yesterday's news. I'm much more interested in thinking about what comes way beyond search," he said.

In pursuit of its goal to improve the world, Altman said OpenAI should work to make each of its milestones safer before exposing them to the public, and, moreover, release them as open source so that other people can build from the point where the company leaves off. "We want to offer increasingly powerful APIs as we are able to make them safer. We will continue to open source things like we open-sourced CLIP (a visual neural network released in 2021). Open source is really what led to the image generation boom. More recently, we open sourced Whisper and Triton (automatic speech recognition and a programming language). So I believe it's a multi-pronged strategy of getting stuff out into the world, while balancing the risks and benefits of each particular thing."

Balance between progress and safety

From a personal point of view, OpenAI's CEO explains how "summarization has been absolutely huge for me, much more than I thought it would be. The fact that I can just have full articles or long email threads summarized has been way more useful than I would have thought. Also, the ability to ask esoteric programming questions or help debug code in a way that feels like I've got a super brilliant programmer that I can talk to," he said.

On the flip side, what scares him the most so far has been the use of his creation to harm third parties. Among its inappropriate uses, he is particularly concerned about revenge porn. "I definitely have been watching with great concern the revenge porn generation that’s been happening with the open source image generators. I think that's causing a huge and predictable harm," he responded.

Fear of AI misuse

On the point of creators' responsibility for the misuse of their products, and of the legislation that should be applied to prevent and punish such misuse, Altman favors a combination of both, although he is aware of the difficulty.

"I think it's both. There's this question of like, where do you want to regulate it? In some sense, it'd be great if we could just point to those companies and say, 'Hey, you can't do these things.' But I think people are going to open source models regardless, and it's mostly going to be great, but there will be some bad things that happen. Companies that are building on top of them, companies that have the last relationship with the end user, are going to have to have some responsibility, too. And so, I think it's going to be joint responsibility and accountability there."

A bunker to survive the apocalypse... and the AI

Just in case, Altman acknowledged in another interview that he is well prepared for an apocalyptic scenario, triggered by the emergence of "a lethal synthetic virus, a nuclear war... or that an AI starts attacking humans." "I try not to think about it too much, but I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to," he explained.