Every now and then, news surfaces in the press about the impending doom of mankind, courtesy of the “superhuman” artificial intelligence we are about to unleash upon the world. This new generation of AI, it is said, will soon make us puny humans irrelevant. It will rob us of our jobs, take over the world, and usher in an age of dominance for our new robot overlords.

Well, maybe not quite—but you catch my drift.

Ever since we mastered the use of fire, invented the wheel and hewed the first hand axe, mankind has had a troubled relationship with technology. Our concerns about superhuman AI are just the latest in a long lineage of similar concerns. The fruits of our progress towards ever more advanced technologies have been—quite literally, since the Bronze Age—a double-edged sword.

On the One Hand… Gimme More!

At its core, what we call “technology” is just stuff our smart brains come up with that increases our chances of survival and reproduction. Physical stuff, to be precise, that we can use or make for some specific purpose. It makes life easier, safer, more comfortable.

Consider the bow and arrow, a marked improvement over previous tools for hunting. The use of the bow and arrow began around 10,000 years ago (and possibly much earlier), probably in Africa. So popular was this new technology that in just a few thousand years, it had spread to every single continent except Australia.

Apollo and Artemis (with bow), on the tondo of an Attic red-figure cup (source)

As the bow and arrow can also be used as a tool for warfare, not embracing this innovation meant certain annihilation. Bows and arrows continued to be a mainstay of armed conflict until an even better technology superseded them. That newfangled high tech was gunpowder.

Or consider the field of communication. From spoken language to clay tablets to papyri to books to the telegraph to land-line telephones to personal computers to tricked-out smartphones, it’s been a continuous relay race of technologies that have enabled us to share and spread our ideas faster and farther with every next iteration.

So enamored are we of each next improvement that there are concerns today that handwriting itself may become a dying art. The application of technology is mercilessly efficient: when something better comes along, we ditch whatever worked before in the blink of an eye or consign it to the pastures of nostalgia.

Our love of technology is FOMO on steroids.

On the Other Hand… Woe Is Me!

Right from the start, new technologies have also been seen—and rightly so—as disruptive of the present status quo. They offer promise, but also ambiguity and uncertainty. (And, as I wrote here, we don’t like uncertainty.)

In one famous example, in Plato’s Phaedrus, Socrates rails against a novel communication technology that was taking Athens by storm:

[This] discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

What was this newfangled “discovery” that the great philosopher hated so? Writing.

Go figure.

Bust of Socrates at the Louvre (source)

Fast forward to our own age, and it’s not too hard to find examples of technologies that many people have great (and often, as it was with Socrates, unjustified) concerns over.

Are GMOs safe to eat? If a self-driving car causes an accident, who is responsible? Should we limit the use of stem cells in medical research and treatments? How autonomous should military drones be? What restrictions should we place on Big Data? And, of course, the example I opened this essay with: artificial intelligence.

Acceptance of new technologies is, initially, neither extensive nor entirely wholehearted, no matter how much of an improvement they offer. Everett Rogers has famously outlined the way in which innovations are assimilated by society at large. The first to embrace a new technology are the innovators and early adopters, then the early majority and late majority follow suit, and finally, always late to the party, are the laggards.

In other words, it takes time for any new technology to be fully adopted and become widespread. We are suspicious of the unknown; we cling to what we know; we label innovations as threats; we fear upsetting the balance that has, so far, worked for us.

Our reticence towards technology is caution on steroids.

A Matter of Definition

So what about this so-called “superhuman” AI? As always, I suspect, many of the spectacular predictions now being made with great confidence will come to naught. And many of the eventual applications of next-gen AI will be ones that nobody has yet foreseen.

But here’s the thing. All technology is by definition superhuman. That’s the whole point.

The hand axe is a superhuman way to open hard fruits or skin prey. Ships are a superhuman way to travel across water. The internet is a superhuman way to process and disseminate information. So really, artificial intelligence is just as “superhuman” as a fountain pen.

There is, to my knowledge, no special technological innovation for scratching your chin. That’s because your own hand will do just fine, thank you very much. No superhuman solutions need apply. As for scratching your back, however… that is another story altogether.

Yay superhuman technology! (adapted from source)

Looking Ahead

In the end, an innovation like AI, machine learning or data mining is just another tool. These technologies will be as beneficial or detrimental as the people who develop and use them want them to be. The invention that gave us fireworks also brought us guns. The advances that gave us poisons also brought us medicines.

We are not the only animals to use tools—not by a long stretch—but we are the one species that has developed its tools the most (by far) and that has come to rely on technology more than any other.

The question of “superhuman” technology is really a question that philosophy has been asking for millennia: how to live a good life? The principles that we adhere to, the values that we hold dear, the compassion we show others, our capacity for reason and restraint—these are the factors that will determine whether advanced computing technologies will be a force for good or evil.

In all likelihood, if history is anything to go by, the answer will be: both.

• • •

Image credit: Salvador Dalí, La Invención de los Monstruos (1937) (adapted from source)

