In response to "profound risks to society and humanity," more than 2,600 IT industry leaders and researchers have signed an open letter calling for a temporary "pause" on further artificial intelligence development.

Elon Musk, the CEO of Tesla, Steve Wozniak, the co-founder of Apple, and several CEOs, CTOs, and academics in the field of artificial intelligence were among the signatories of the letter, which was published by the US think tank Future of Life Institute (FOLI) on March 22.

Citing worries that "human-competitive intelligence can pose profound risks to society and humanity," among other concerns, the institute urged all AI businesses to "immediately pause" developing AI systems more powerful than GPT-4 for at least six months:

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening.”

GPT-4, the most recent version of OpenAI's artificial intelligence-driven chatbot, was released on March 14. It has so far scored in the 90th percentile on some of the most difficult high school and legal exams in the United States and is said to be ten times more sophisticated than ChatGPT's initial release.

One of the main worries was whether AI will "automate away" all job prospects and perhaps flood media channels with "propaganda and untruths."

FOLI took these worries a step further, speculating that these AI businesses' ventures could create an existential threat:

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”

The institute also echoed a recent assertion by OpenAI founder Sam Altman that an impartial assessment could be necessary before developing new AI systems.

In a February 24 blog post, Altman emphasized the need to prepare for robots with artificial general intelligence (AGI) and artificial superintelligence (ASI).

Not all AI experts have rushed to sign the petition, though. In a March 29 Twitter reply to Gary Marcus, author of Rebooting AI, Ben Goertzel, the CEO of SingularityNET, explained that large language models (LLMs) won't evolve into AGIs, of which there have been few advancements to date.

Instead, he advocated slowing down research and development in areas like bioweapons and nuclear weapons.

Beyond large language models like ChatGPT, AI-powered deepfake technology has been used to produce convincing photo, audio, and video hoaxes. Questions have also been raised about whether the technology may sometimes break copyright rules when used to produce AI-generated art.

Recently, Mike Novogratz, CEO of Galaxy Digital, told investors that he was surprised by the regulatory attention given to cryptocurrencies but not artificial intelligence.

“When I think about AI, it shocks me that we’re talking so much about crypto regulation and nothing about AI regulation. I mean, I think the government’s got it completely upside-down.”

If a pause on AI development cannot be implemented quickly, FOLI argued, governments should step in:

“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”