Every now and then, news surfaces in the press about the impending doom of mankind, courtesy of the “superhuman” artificial intelligence we are about to unleash upon the world. This new generation of AI, it is said, will soon make us puny humans irrelevant. It will rob us of our jobs, take over the world, and usher in an age of dominance for our new robot overlords.
Well, maybe not quite—but you catch my drift.
Ever since we mastered the use of fire, invented the wheel and hewed the first hand axe, mankind has had a troubled relationship with technology. Our concerns about superhuman AI are just the latest in a long line of such worries. The fruits of our progress towards ever more advanced technologies have been—quite literally, since the Bronze Age—a double-edged sword.
On the One Hand… Gimme More!
At its core, what we call “technology” is just stuff our smart brains come up with that increases our chances of survival and reproduction. Physical stuff, to be precise, that we can use or make for some specific purpose. It makes life easier, safer, more comfortable.
Consider the bow and arrow, a marked improvement over previous tools for hunting. Its use began around 10,000 years ago (and possibly much earlier), probably in Africa. So popular was this new technology that within just a few thousand years, it had spread to every single continent except Australia.

As the bow and arrow could also be used as a weapon of war, not embracing this innovation meant certain annihilation. Bows and arrows remained a mainstay of armed conflict until an even better technology superseded them. That newfangled high tech was gunpowder.
Or consider the field of communication. From spoken language to clay tablets to papyri to books to the telegraph to landline telephones to personal computers to tricked-out smartphones, it’s been a continuous relay race of technologies that have enabled us to share and spread our ideas faster and farther with every new iteration.
So enamored are we of each new improvement that there are concerns today that handwriting itself may become a dying art. The application of technology is mercilessly efficient: when something better comes along, we ditch whatever worked before in the blink of an eye or consign it to the pastures of nostalgia.
Our love of technology is FOMO on steroids.
On the Other Hand… Woe Is Me!
Right from the start, new technologies have also been seen—and rightly so—as disruptive of the status quo. They offer promise, but also ambiguity and uncertainty. (And, as I wrote here, we don’t like uncertainty.)
In one famous example, in Plato’s Phaedrus, Socrates rails against a novel communication technology that was taking Athens by storm:
[This] discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
What was this newfangled “discovery” that the great philosopher hated so? Writing.
Go figure.

Fast forward to our own age, and it’s not too hard to find examples of technologies that many people have great (and often, as it was with Socrates, unjustified) concerns over.
Are GMOs safe to eat? If a self-driving car causes an accident, who is responsible? Should we limit the use of stem cells in medical research and treatments? How autonomous should military drones be? What restrictions should we place on Big Data? And, of course, the example I opened this essay with: artificial intelligence.
Acceptance of new technologies is, initially, neither extensive nor entirely wholehearted, no matter how much of an improvement they offer. Everett Rogers famously outlined the way in which innovations are assimilated by society at large. The first to embrace a new technology are the innovators and early adopters, then the early majority and late majority follow suit, and finally, always late to the party, come the laggards.
In other words, it takes time for any new technology to be fully adopted and become widespread. We are suspicious of the unknown; we cling to what we know; we label innovations as threats; we fear upsetting the balance that has, so far, worked for us.
Our reticence towards technology is caution on steroids.
A Matter of Definition
So what about this so-called “superhuman” AI? As always, I suspect, many of the spectacular predictions now being made with great confidence will come to naught. And many of the eventual applications of next-gen AI will be ones that nobody has foreseen.
But here’s the thing. All technology is by definition superhuman. That’s the whole point.
The hand axe is a superhuman way to open hard fruits or skin prey. Ships are a superhuman way to travel across water. The internet is a superhuman way to process and disseminate information. So really, artificial intelligence is just as “superhuman” as a fountain pen.
There is, to my knowledge, no special technological innovation for scratching your chin. That’s because your own hand will do just fine, thank you very much. No superhuman solutions need apply. As for scratching your back, however… that is another story altogether.

Looking Ahead
In the end, an innovation like AI, machine learning or data mining is just another tool. These technologies will be as beneficial or detrimental as the people who develop and use them want them to be. The invention that gave us fireworks also brought us guns. The advances that gave us poisons also brought us medicines.
We are not the only animals to use tools—not by a long stretch—but we are the one species that has developed its tools the most (by far) and that has come to rely on technology more than any other.
The question of “superhuman” technology is really a question that philosophy has been asking for millennia: how to live a good life? The principles that we adhere to, the values that we hold dear, the compassion we show others, our capacity for reason and restraint—these are the factors that will determine whether advanced computing technologies will be a force for good or evil.
In all likelihood, if history is anything to go by, the answer will be: both.
• • •