
Tech leaders call for pause of GPT-4.5, GPT-5 development due to ‘large-scale risks’

Generative AI has been moving at an unbelievable speed in recent months, with the launch of various tools and bots such as OpenAI’s ChatGPT, Google Bard, and more. Yet this rapid development is causing serious concern among seasoned veterans in the AI field — so much so that over 1,000 of them have signed an open letter calling on AI developers to slam on the brakes.

The letter was published on the website of the Future of Life Institute, an organization whose stated mission is “steering transformative technology towards benefitting life and away from extreme large-scale risks.” Among the signatories are several prominent academics and leaders in tech, including Apple co-founder Steve Wozniak, Twitter CEO Elon Musk, and politician Andrew Yang.

The letter calls for all companies working on AI models more powerful than the recently released GPT-4 to immediately halt work for at least six months. This moratorium should be “public and verifiable” and would allow time to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

The letter says this is necessary because “AI systems with human-competitive intelligence can pose profound risks to society and humanity.” Those risks include the spread of propaganda, the destruction of jobs, the potential replacement and obsolescence of human life, and the “loss of control of our civilization.” The authors add that the decision over whether to press ahead into this future should not be left to “unelected tech leaders.”

AI ‘for the clear benefit of all’


The letter comes just after claims were made that GPT-5, the next version of the tech powering ChatGPT, could achieve artificial general intelligence. If correct, that means it would be able to understand and learn anything a human can comprehend. That could make it incredibly powerful in ways that haven’t yet been fully explored.

What’s more, the letter contends that responsible planning and management surrounding the development of AI systems is not happening, “even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

Instead, the letter asserts that new governance systems must be created that will regulate AI development, help people distinguish AI-created and human-created content, hold AI labs like OpenAI responsible for any harm they cause, enable society to cope with AI disruption (especially to democracy), and more.

The authors end on a positive note, claiming that “humanity can enjoy a flourishing future with AI … in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.” Hitting pause on AI systems more powerful than GPT-4 would allow this to happen, they state.

Will the letter have its intended effect? That’s hard to say. There are clearly incentives for OpenAI to continue working on advanced models, both financial and reputational. But with so many potential risks — and with very little understanding of them — the letter’s authors clearly feel those incentives are too dangerous to pursue.

Alex Blake