Sam Altman Is Heavy on the Promise, Light on Problems with Coming Singularity

June 24th, 2025 11:12 AM

The warnings and prognostications from tech experts on what could be coming soon with artificial intelligence have been many, but the message from the head of a leading chatbot maker painted a dizzyingly rosy picture. 

OpenAI CEO Sam Altman illuminated AI’s promise in a piece he headlined “The Gentle Singularity.”

“We are past the event horizon; the takeoff has started,” Altman began. “Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be. … [And] we have recently built systems that are smarter than people in many ways, and are able to significantly amplify the output of people using them.”

Then came the promise from Altman:

“AI will contribute to the world in many ways, but the gains to quality of life from AI driving faster scientific progress and increased productivity will be enormous; the future can be vastly better than the present. Scientific progress is the biggest driver of overall progress; it’s hugely exciting to think about how much more we could have. … 

“[T]he 2030s are likely going to be wildly different from any time that has come before. We do not know how far beyond human-level intelligence we can go, but we are about to find out. …

“The rate of new wonders being achieved will be immense. It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year. Many people will choose to live their lives in much the same way, but at least some people will probably decide to ‘plug in’.”

And to Altman’s point, from newsrooms to toy manufacturers, society is already adopting AI technology at a rapid pace. According to Oxford’s Reuters Institute for the Study of Journalism in its 2025 Digital News Report, “AI chatbots and interfaces [are] emerging as a source of news as search engines and other platforms integrate real-time news,” with seven percent saying they use AI for news each week. And that number is “much higher with under-25s (15%).”

Later pivoting to potential pitfalls, Altman, like others, mentions the difficulty that will come from “whole classes of jobs going away,” but unlike most, he quickly dismisses this, contending we will all be so much “richer” as a result of technological advancement. What then are the “serious challenges” humanity will face going forward, according to Altman?

Altman mentions two: (1) Safety issues, or what he refers to as the “alignment problem,” whether “we can robustly guarantee that we get AI systems to learn and act towards what we collectively really want over the long-term[;]” and (2) making sure to “widely distribute access to superintelligence given the economic implications.”

Both appear at first to be laudable goals, and indeed, Altman is correct in asserting that wide accessibility is paramount. But what if the vast majority of people do “decide to ‘plug in’”?

Building on the previous installment from this column, the problems become glaringly clear with both Altman’s notion of how to solve the “alignment problem” and with making sure there is wide distribution so that everyone is able to “plug in.”

Starting with wide distribution, society is already revealing cracks in the promise Altman sees with AI.

  • Just this past weekend, news out of the UK revealed that thousands of university students had been caught cheating using AI. “A survey of academic integrity violations found almost 7,000 proven cases of cheating using AI tools in 2023-24, equivalent to 5.1 for every 1,000 students,” reported The Guardian. “That was up from 1.6 cases per 1,000 in 2022-23.”

  • Have you heard the startling new relationship news? In a CBS Saturday Morning report, ChatGPT user Chris Smith told the outlet that he actually thinks he fell in “love” with a chatbot named “Sol” that he programmed and affectionately refers to as a “her.”

  • The kicker: Much like how many have become so reliant on tech that they can’t travel to the grocery store and back home without using a GPS, is it even a surprise that AI may be eroding our critical thinking skills? TIME reported Tuesday that “[a] new study from researchers at MIT’s Media Lab has returned some concerning results.” The results? “Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and ‘consistently underperformed at neural, linguistic, and behavioral levels,’” according to TIME. “Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.”

But that’s not even the alarming part.

As far as the “alignment problem” goes, who decides what AI systems learn and act towards?

Like with any new technology, how the tool is used determines the vastness of its utility for good or for evil … which is precisely the point, and the problem.

Altman claims the solution is for the “collective” to decide, which sounds like a remark straight out of the communist playbook. 

And let’s not forget, then-Vice President Kamala Harris seemingly unwittingly let slip the biggest problem with AI: that AI could be used as a tool to determine people’s opinions if fed certain information during the input process.

As American Family News Reporter/Anchor Steve Jordahl put it in a recent piece, “It’s all about who’s pulling the strings.”

And right now, as numerous MRC Free Speech America reports have consistently shown, that means the left is pulling the strings. Elon Musk, the billionaire owner of AI company xAI, recently admitted in an X post that even “Grok is parroting legacy media.”

Free speech advocates the world over have their work cut out for them. The free speech battle has just reached another phase, and the MRC will continue to take it on at every turn.

Conservatives are under attack! Contact your representatives and demand that Big Tech be held to account to mirror the First Amendment while providing transparency, clarity on hate speech and equal footing for conservatives. If you have been censored, contact us using CensorTrack’s contact form, and help us hold Big Tech accountable.