The Brewing Controversy Over The Proposition That AI Is Nothing More Than Just Normal Technology


In today’s column, I examine a rapidly expanding controversy over the proposition that AI is nothing more than just normal technology. The proposition goes like this. We are overstating the underlying nature and impact of AI, allowing ourselves to be deluded into a false sense of AI as superhuman. As a result, we fail to deal suitably with conventional AI and become hopelessly mired in fanciful daydreams of what AI might someday become. Ergo, let’s recalibrate our mindset and view AI as normal technology. Period, end of story.

The AI community is sharply divided over this assertion. Some heartily insist that it is about time the emperor was revealed to have no clothes, while others vehemently disagree with the idea that AI is merely normal technology.

Let’s talk about it.

This analysis of an emerging AI controversy is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI As Normal Technology

First, some fundamentals are required to set the stage for this weighty discussion.

There is a great deal of research going on to further advance AI. The general goal is to reach artificial general intelligence (AGI) or perhaps even the loftier possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI.

In fact, it is unknown whether we will ever reach AGI; it might be achievable decades or perhaps centuries from now. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even further beyond the pale, given where we currently are with conventional AI.

An intriguing question arises about the status of conventional AI.

Is conventional AI merely the same as other “normal” technologies, such as on par with the Internet or electricity?

Observe that AI is being compared to tremendous and transformative technologies, including the advent of the Internet and the discovery and harnessing of electricity. The point is that the word “normal” should be level-set as outsized tech that has been wholly disruptive and has hugely changed how humankind operates. Do not fall into the mental trap of construing the word “normal” as meaning akin to a normal toaster or a normal mousetrap.

Mull over whether you believe that AI is normal or exceedingly extraordinary.

The Case For AI As Normal Tech

Assume for the sake of discussion that you or someone you know is on the side of believing that AI is normal technology.

How can anyone arrive at that conclusion?

One perspective is that conventional AI runs on everyday computer servers, processes everyday inputs and outputs, and employs cleverly devised algorithms, all of which is well within human craftsmanship. It is all still bits and bytes. There aren’t any magical incantations involved. No voodoo.

Yet many seem to have turned the public perception of contemporary AI into a hullabaloo, maybe even a farce. Some AI insiders have contributed to this otherworldly aura by insisting that they see “sparks” of sentience and consciousness in generative AI and large language models (LLMs). Sorry to say, those grandiose visions have been debunked; see my discussion at the link here.

Hard-nosed AI insiders who are fed up with the fakery and over-the-top proclamations about modern-era AI are declaring that enough is enough. It is time to draw a line in the sand. Set aside all this nonsense about AI that walks on water and leaps tall buildings in a single bound. Let’s get real.

Drawing A Line In The Sand

A paper entitled “AI as Normal Technology,” posted on April 15, 2025, and co-authored by researchers Arvind Narayanan and Sayash Kapoor under the auspices of the Knight First Amendment Institute at Columbia University, made these salient points (excerpts):

  • “We articulate a vision of artificial intelligence (AI) as normal technology. To view AI as normal is not to understate its impact—even transformative, general-purpose technologies such as electricity and the internet are “normal” in our conception.”
  • “But it is in contrast to both utopian and dystopian visions of the future of AI which have a common tendency to treat it akin to a separate species, a highly autonomous, potentially superintelligent entity.”
  • “The statement “AI is normal technology” is three things: a description of current AI, a prediction about the foreseeable future of AI, and a prescription about how we should treat it.”
  • “We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions or technical breakthroughs. We do not think that viewing AI as a humanlike intelligence is currently accurate or useful for understanding its societal impacts, nor is it likely to be in our vision of the future.”

Boom, drop the mic.

Their viewpoint has stirred a hornet’s nest. If you ardently believe that the zany headlines full of fearmongering, gaslighting, and breathless claims of existential AI risk have got to come to a halt, you are likely to embrace and applaud the contention that AI is normal technology. Others would disagree and assert that this is nothing more than a form of gaslighting about the alleged gaslighting about AI.

As an aside, the authors go into great detail about their rationale that AI is a normal technology; you might consider reading the full paper if this is a matter of direct interest to you.

The Complacency Danger

One worry about conceiving AI as normal technology is that this might slide society into a semblance of complacency about AI.

Allow me to briefly elaborate. Right now, the talk of AI possibly curing cancer, alongside the talk of AI possibly wiping out humanity, provides a potent focus on what AI is and where AI is headed. Sure, maybe it is hyperbole, but getting society to be attentive often necessitates going over the top.

If we all agree to henceforth say that AI is normal technology, perhaps AI will no longer garner headlines and no longer gain the attention of policymakers, regulators, and other such stakeholders. We will relegate AI to the realm of the techies. Looming societal impacts would likely go unexplored, since AI techies either aren’t thinking about those heady matters or assume that it will all get figured out once AI re-emerges into the global consciousness.

That could be like letting the horse out of the barn. You could have kept the horse in the barn or at least prepared to smartly release the horse, but instead, you kept your head deeply buried in the sand (two metaphors for the price of one).

The Case For Abnormal Impacts

There is another twist on why the premise that AI is normal technology seems worrisome.

First, let’s place the matter of AGI and ASI completely outside our purview. Just go along with the supposition that AGI and ASI are not in the picture. Only conventional AI as we know it today is in the picture.

One argument is that the impacts of even conventional AI are, in a sense, abnormal. You can proceed to assume that AI is a normal technology, but the societal impacts go beyond those of other normal technologies. The impacts are abnormal.

There are normal technologies that lead to normal impacts. That’s fine. The thing is, there are also normal technologies that lead to abnormal impacts. Lumping AI into the normal technology classification can falsely lead to thinking that AI has just normal impacts.

Foreseeable Futures That Miss The Mark

In the excerpted points regarding the case for AI as a normal technology, some critics have noted a make-or-break assertion. The deal is this. The premise includes a crucial underpinning about predicting the foreseeable future of AI (note the portion about prediction in the excerpt repeated below):

  • “The statement ‘AI is normal technology’ is three things: a description of current AI, a prediction about the foreseeable future of AI, and a prescription about how we should treat it” (ibid).

Well, throughout history there have been numerous instances of predictions concerning so-called foreseeable futures that were significantly off base. The upshot is that the foreseeable future for conventional AI could be that we continue to bump along and see modest advances that aren’t especially surprising. Same old, same old. If that’s the case, the said-to-be normal technology of AI retains its normalcy.

But if AI advances more dramatically within the foreseeable future, the potential for AI to go far beyond conventional human control might catch us by surprise. Think of it this way. Imagine we all agreed to treat conventional AI as a normal technology and applied normal scrutiny accordingly. Then, seemingly out of the blue, and breaking our assumptions about the foreseeable future, AI suddenly gets bigger than we can handle.

Betting heavily on the presumed foreseeable future of AI might be quite a gamble.

Undercutting Of Drastic Interventions

There has also been heartburn over the lay-low stance that nullifies any willingness or readiness to employ drastic interventions associated with conventional AI. As noted in one of the excerpted points (note the portion about drastic policy interventions):

  • “We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions or technical breakthroughs” (ibid).

If we establish a mindset that no drastic policy interventions will be needed, do we undercut our chances of spotting a need for drastic interventions, and thus delay or fail to identify and adopt such interventions on a timely basis?

Perhaps conventional AI gets infused into critical systems that we all rely upon for daily existence and survival. AI placed in that posture puts us at risk. It could be that we need to perform some form of drastic policy intervention. But the normal technology perspective may have convinced us that nothing drastic is needed and that conventional interventions will suffice.

Predictions Of AGI On The Horizon

Related to the matter of the foreseeable future of AI, a multitude of AI luminaries have predicted that AGI will be attained in the near term, such as 2027 to 2030, and surveys of AI specialists have generally suggested 2040 as an “almost certainly by then” AGI date; see my coverage at the link here.

Do you consider 2027 to 2030 to be within the rubric of a foreseeable future, or outside that range?

In other words, many in the AI community believe that we are either just a few years away from AGI or at most about a dozen or so years away. You can certainly disagree with those predictions. Many do. Others believe in those dates fiercely.

The trouble goes this way. AGI would have such an enormously consequential impact that we cannot sensibly lump AGI into the conventional AI bucket. And to clarify, we are stipulating that AGI is not superhuman, and is “only” on par with human intellect. This sets aside any conflation with superhuman AI.

Thoughts On The Normalcy Debate

It sure would be nice if we could have our cake and eat it too. Ideally, we would keep two fundamental dimensions in our heads at the same time, in a balanced effort:

  • (1) We construe conventional AI as “normal technology” in a strictly unpolluted sense, keeping at arm’s length those grand schemes of AI becoming AGI or ASI.
  • (2) Meanwhile, we still entertain the possibility of attaining AGI (or possibly ASI), and anticipate and prepare for that possibility, but not to the extent that we overinflate this second consideration and get mired in failing to focus suitably on conventional AI (the “normal technology”).

Can we do that?

It’s an exceedingly challenging, decidedly non-normal task.


