
Fears abound with rise of AI

Portrayal as villains in film taints image of technology

The dawn of artificial intelligence is upon us. Whether we like it or not, the world today is governed more and more by computer programs that are faster and better at decision making than we are.

But does it have to end in the apocalyptic all-out war that popular culture has led most of us to expect?

What is the origin of our fears, and is there a way of shaping the future of AI so that it will serve and not rule humanity?

Of course, it doesn’t help when one of the most outspoken critics of Artificial Intelligence (AI) happens to be the most visible and vocal billionaire in the world. Elon Musk has never minced his words about the existential risk AI might pose to humanity, and describing it in open forums as a potential “immortal dictator that will never die” definitely feeds into public hysteria. Musk maintains, however, that a “public body with insight and oversight” would go a long way towards “coupling human and digital intelligence to ensure a symbiotic future.”

Thankfully, there are many who don’t share Mr. Musk’s pessimism.

Sundar Pichai, CEO of Alphabet and its subsidiary Google, describes AI as “more profound than electricity and fire.”

At Davos 2020, he went on to say that “the biggest risk with AI may be failing to work on it.”

Perhaps Demis Hassabis, the co-founder of DeepMind, sums up the case for AI best: “I would actually be very pessimistic about the world if something like AI wasn’t coming down the road.”
He notes that the existential threats we currently face, such as climate change, sustainability and mass inequality, are behavior-based and are not being resolved fast enough.

He further argues for a “quantum leap in technology like AI” to accelerate breakthroughs because “I don’t think we’re going to be getting an exponential improvement in human behavior any time soon.”

Maybe to better shape the future of AI, we have to understand the past of humanity’s greatest leaps. Only by examining the inventions that have allowed us to change the world we live in can we understand and influence the future of all our innovations.

Fire allowed us not only to defend against predators but also to cook our food, extracting more energy from it. This led to substantial increases in body and brain size and allowed us to organize and invent tools so that we could out-compete our ape cousins.

On a down note, we hit each other harder than before, causing more damage with slightly better weapons as we went after our neighbors’ land and resources.

The advent of agriculture came about 400,000 years later and allowed a fraction of the population to grow enough food for the whole tribe.
The Neolithic revolution saw our nomadic ancestors turn their back on foraging and finally settle and grow what they needed where they needed it.
The surplus food gave rise to professions: we could specialize in tasks without having to worry about our food supply. Populations grew, technological innovation soared, and art and religion flourished. On a down note, we could now raise massive armies with ever-deadlier weapons and go after our neighbors’ land and resources.
The industrial and agricultural revolutions in the 18th century fundamentally transformed society.
The invention of the steam and internal combustion engines, the telephone, the sewing machine and the lightbulb, to name a few, drastically changed the way we live within just a few decades.

On a down note, bigger armies and better guns meant going after even more of our neighbors’ land and resources.

Movies and books are just as much to blame for our fear of AI; they have long taken up the subject of self-aware computers, sometimes with great success, other times not so much.

Apart from a few exceptions, however, sentient programs, it seems, make better adversaries than friends. In fact, one of the few recent movies without an AI character actively trying to harm humans is the 2013 film Her, by director Spike Jonze.

2001: A Space Odyssey, Blade Runner, The Matrix and Ex Machina, to name a few, are all brilliant films, yet all feature an AI entity as their leading villain.

A particularly disturbing literary tale about sentient AI is I Have No Mouth and I Must Scream. The 1967 short story by Harlan Ellison is about as terrifying as the title suggests. Set in a dystopian future with war, genocide, captive humans and a vengeful supercomputer called “Aggressive Menace”, it is perhaps the bleakest future imagined with artificial intelligence in mind.

It is possible that these narratives are amplified in movies and books for dramatic effect. We need heroes to overcome villains, and non-humans make great villains. The fact that it’s easy to see AIs as heartless, morally vacuous tin cans that just follow their programming makes them plausible antagonists.

Or perhaps, around the time such stories were written and popularized, the world was still in the midst of dealing with the consequences of the last technological leap.

The physical and psychological wounds of two great wars and the use of the atomic bomb, along with the cold war that followed, could be directly linked to the industrial revolution.

Whether justified or not, the state of global society was inextricably linked to the inventions that fueled these events, giving scholars and intellectuals of the time plenty to fear and reflect upon.

The creative minds of that era could therefore be forgiven for assuming the very worst of the new advancements appearing on the horizon, hence the numerous works of fiction that attempted to warn future generations to be wary of the implications of such progress.

It is inevitable that artificial intelligence will one day surpass human intelligence and become more capable. As with humanity’s previous technological leaps, the propensity of the few who end up in control to use these advancements for personal rather than collective gain could once again define this era. Even if we manage to avoid being enslaved or destroyed by our artificially intelligent creations, there is no guarantee that we won’t suffer a similar fate at the hands of those who control them.

Perhaps we should pay as much attention to those of us who could wield control over AI as to the AI itself.

——-


* A.K. Sarper is a seasoned financial veteran whose experience spans 11 years in private banking and hedge fund management in Geneva and Singapore. He currently runs his own consultancy firm based in London.

Disclaimer: Opinions conveyed in this article are solely those of the author. The information presented in this article is intended for informational purposes only. It does not constitute advice on tax or legal matters, nor is it a financial or investment recommendation. Refer to our full disclaimer policy here.