
Elon Musk reveals his conversation with Obama on artificial intelligence

The tycoon and Tesla chief discussed the dangers of AI and recalled a conversation with the former president about its possible regulation.

Elon Musk, CEO of Tesla, Inc. and SpaceX.

(Cordon Press)

Elon Musk recently warned about the dangers of artificial intelligence and the risks it could pose to mankind. He predicted a breakthrough within the next decade and revealed a conversation he had on the subject with Barack Obama, in which he advised the former president to regulate the technology.

The South African tycoon said on Sunday that he had had only one formal meeting with the former president. It took place in 2015 at the Spruce restaurant, after the two participated in a cybersecurity summit at Stanford University.

Musk addressed the issue on Twitter when he responded to a post by software developer Mckay Wrigley, who had written that AI would continue to experience "exponential growth" in the coming years and would shake skeptics "like an asteroid."

"I saw it happening from well before GPT-1, which is why I tried to warn the public for years. The only one on one meeting I ever had with Obama as President I used not to promote Tesla or SpaceX, but to encourage AI regulation," the entrepreneur responded on the social network, of which he is currently the owner. The 44th president of the United States also visited SpaceX's operations at Cape Canaveral, Florida, in 2010.

With regard to the challenges brought on by artificial intelligence, Musk signed an open letter calling for a pause on the development of advanced AI systems, joined by hundreds of leaders in the technology industry. The letter was also endorsed by Ripple co-founder Chris Larsen, Pinterest co-founder Evan Sharp, former presidential candidate Andrew Yang and academics from Stanford University and Harvard University.

"Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects," the document professed.

"This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium. AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt," the letter added.
