In response to "profound risks to society and humanity," more than 2,600 IT industry leaders and researchers have signed an open letter calling for a temporary "pause" on further artificial intelligence development.
Elon Musk, the CEO of Tesla, Steve Wozniak, the co-founder of Apple, and several CEOs, CTOs, and academics in the field of artificial intelligence were among the signatories of the letter, which was written by the US think tank Future of Life Institute on March 22.
Citing fears that "human-competitive intelligence can pose profound risks to society and humanity," among other concerns, the institute urged all AI firms to "immediately pause" training AI systems more powerful than GPT-4 for at least six months:
📢 We're calling on AI labs to temporarily pause training powerful models!
— Future of Life Institute (@FLIxrisk) March 29, 2023
Join FLI's call alongside Yoshua Bengio, @stevewoz, @harari_yuval, @elonmusk, @GaryMarcus & over a 1000 others who've signed: https://t.co/3rJBjDXapc
A short 🧵 on why we're calling for this - (1/8)
“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening.”
The most recent version of OpenAI's AI-powered chatbot, known as GPT-4, was released on March 14. It has reportedly achieved 90th-percentile scores on some of the most difficult high school and legal exams in the United States and is said to be ten times more advanced than the initial release of ChatGPT.
BREAKING: A petition is circulating to PAUSE all major AI developments.
— Lorenzo Green 〰️ (@mrgreen) March 29, 2023
e.g. No more ChatGPT upgrades & many others.
Signed by Elon Musk, Steve Wozniak, Stability AI CEO & 1000s of other tech leaders.
Here's the breakdown: 👇 pic.twitter.com/jR4Z3sNdDw
One of the chief concerns was whether machines could "automate away" all jobs and flood media channels with "propaganda and untruths."
FOLI took these concerns a step further by suggesting that these AI companies' entrepreneurial race could be spiraling into an existential threat:
“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”
Having a bit of AI existential angst today
— Elon Musk (@elonmusk) February 26, 2023
The institute also endorsed a recent statement by OpenAI founder Sam Altman that independent review may be necessary before training future AI systems.
In a blog post on February 24, Altman emphasized the need to prepare for robots with artificial general intelligence (AGI) and artificial superintelligence (ASI).
Not all AI experts have rushed to sign the petition, though. On March 29, SingularityNET CEO Ben Goertzel explained on Twitter to Gary Marcus, author of Rebooting.AI, that large language models (LLMs) won't evolve into AGIs, of which there have been few developments to date.
On the whole, human society will be better off with GPT-5 than GPT-4 -- better to have slightly smarter models around. AIs taking human jobs will ultimately be a good thing. The hallucinations and banality will decrease and folks will learn to work around them.
— Ben Goertzel (@bengoertzel) March 29, 2023
Instead, he said research and development should be slowed down for things such as bioweapons and nuclear weapons.
In addition to large language models like ChatGPT, AI-powered deepfake technology has been used to produce convincing photo, audio, and video hoaxes. Questions have also been raised about whether the technology could breach copyright law when used to create AI-generated art.
Recently, Mike Novogratz, CEO of Galaxy Digital, told investors that he was surprised by the regulatory attention given to cryptocurrencies but not artificial intelligence.
“When I think about AI, it shocks me that we’re talking so much about crypto regulation and nothing about AI regulation. I mean, I think the government’s got it completely upside-down.”
Should a pause on AI development not be enacted quickly, governments should step in, according to FOLI:
“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”