Elon Musk has joined an influential group urging a six-month pause in the training of advanced artificial intelligence models following ChatGPT’s rise – arguing the systems could pose “profound risks to society and humanity”.
The CEO of Twitter and Tesla joined more than 1000 experts in signing an open letter organised by the non-profit Future of Life Institute, which is primarily funded by the Musk Foundation, the billionaire’s charitable grant-making body.
The group also gets funds from the Silicon Valley Community Foundation and the effective altruism group Founders Pledge, the European Union’s transparency register shows.
The letter calls for an industry-wide pause until proper safety protocols have been developed and vetted by independent experts — and details the potential risks that advanced artificial intelligence (AI) poses without proper oversight, the New York Post reported.
Risks include the spread of “propaganda and untruth,” job losses, the development of “non-human minds that might eventually outnumber, outsmart, obsolete and replace us,” and the risk of “loss of control of our civilisation”.
The experts pointed out that OpenAI itself recently acknowledged it may soon be necessary to “get independent review before starting to train future systems”.
“Therefore, we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4,” the letter says.
“This pause should be public and verifiable, and include all key actors.”
Mr Musk was a co-founder and early investor in OpenAI, the firm responsible for the development of ChatGPT. He has since left OpenAI’s board of directors and no longer has any involvement in its operations.
Shivon Zilis, an AI expert who gave birth to twins fathered by Mr Musk via in vitro fertilisation, also recently stepped down from OpenAI’s board. She had served as an adviser to OpenAI since 2016.
Ms Zilis, 37, is an executive at Neuralink, Mr Musk’s brain chip company.
Despite his self-proclaimed fears about AI, Mr Musk is reportedly exploring the possibility of developing a rival to ChatGPT.
Microsoft-backed OpenAI’s GPT-4, the latest version of the model behind its AI chatbot, has both shocked the public with its ability to generate lifelike responses to a huge variety of prompts and stoked fears that AI will place many jobs at risk and ease the spread of misinformation.
Other notable signatories of the letter include Apple co-founder Steve Wozniak, Pinterest co-founder Evan Sharp and at least three employees affiliated with DeepMind, an AI research lab owned by Google parent Alphabet.
OpenAI’s CEO Sam Altman has not signed the letter.
Active AI labs and experts “should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” to ensure the systems are “safe beyond a reasonable doubt,” the letter adds.
“Such decisions must not be delegated to unelected tech leaders,” the letter says.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter adds.
The Post has reached out to OpenAI for comment on the letter.
Last month, technology website The Information reported that Mr Musk had approached AI researchers to develop a product that could potentially be integrated into Twitter. The report said Mr Musk believes ChatGPT has gone “woke” and displays a liberal bias.
Mr Musk has repeatedly warned about the danger posed by the unrestrained development of AI technology.
The billionaire likened AI to the discovery of nuclear physics, which led to “nuclear power generation but also nuclear bombs”.
“I think we need to regulate AI safety, frankly,” Mr Musk said.
“Think of any technology which is potentially a risk to people, like if it’s aircraft or cars or medicine, we have regulatory bodies that oversee the public safety of cars and planes and medicine.”
“I think we should have a similar set of regulatory oversight for artificial intelligence, because I think it is actually a bigger risk to society,” he added.
Google is developing its own AI-powered chatbot, Bard, which has so far drawn mixed reviews.
This story appeared in the New York Post and is reproduced with permission.