AI, Sir: A Salute to the All-Too-Real World of Artificial Intelligence

Alexa. Siri. Your smartphone. Your navigation app.

They’re handy, helpful and a little bit creepy – with their disembodied voices, urging us to take more steps, take a screen break or take a right. (It can't mean that. There's a lake there, Michael!)

And they’re nosy, we think – always listening in on our conversations, with a digital ad at the ready to remedy that foot fungus they’ve heard us mention more than a few times.

But are they intelligent? As in artificial intelligence – or AI to the cool kids.

Artificial Intelligence is defined as the ability of a computer or computer-controlled object to perform tasks commonly associated with intelligent beings. But to get there, we first need to understand the tasks those intelligent beings are supposedly performing.

Say, for example, you were trying to decide whether the robot, dog or annoying human in your life exhibits any signs of intelligence. You might apply the generally accepted criteria: the ability to reason, learn, plan, solve problems, perceive the world and use language.

But just how close are we to Judgment Day? Because it wasn’t August 29, 1997, as predicted. For that, let’s look at how we got here and where we’re going:

AI Yesterday

AI in science fiction arguably began with Mary Shelley’s 1818 Frankenstein and its idea that a scientist could birth intelligence in a dead body. The idea of mechanical AI dates back at least to 1872, with Samuel Butler and his novel Erewhon. After Asimov’s “I, Robot” stories were collected in 1950, the 1956 Dartmouth Summer Research Project on Artificial Intelligence professed that, yep, such a thing as AI was achievable.

Early efforts showed promise of problem-solving and language skills. And in the 1980s, expert systems, with the ability to mimic the decision-making process, were being used for design, diagnosis and monitoring in such diverse industries as computers, oil drilling and accounting.

In 1997, a chess-playing computer named Deep Blue claimed the title of AI alpha-bot by famously defeating grandmaster Garry Kasparov. That was closely followed by speech-recognition software and a robot named Kismet that could recognize and display emotion. Like the creeped-out emotion we’re feeling right now.

Today, we can gather huge amounts of information electronically but lack the human brainpower to process it all, so artificial intelligence does a lot of the heavy “mental” lifting.

AI Today

Machines now take our calls, decipher what we say and direct us to machines to take our messages. Other machines call us to pitch products. AI-powered robots work alongside us in manufacturing and other areas. Virtual tutors help teach our children. And doctors even turned to AI algorithms to help predict potential treatment models for COVID-19. Meanwhile, facial recognition systems can ID us in photos, videos or in real time as we stop to grab a gallon of milk at Walmart.

Oh, and don’t get us started about phones. Our phones scan our faces and decide whether to grant or deny us access to our own information. They know where we’re going, where we’ve been and how many steps it took to get there. They know our medical status. They know our passwords. They know where we work. They remember every text. They know if we are sleeping; they know if we’re awake. They may even know if we’ve been bad or good, for goodness sake!

Fun fact: We give them permission (explicitly or implicitly!) to know all of these things. So, who among us can claim to be blameless for the kinda/maybe inevitable robot takeover?

On a much happier note, it’s not like we’re dealing with self-aware artificial beings with human-like or superhuman cognitive abilities – what researchers call artificial general intelligence. At least not yet!

Which raises the question: Even if one day we can create HAL, the menacing computer from 2001: A Space Odyssey, should we? There are real technological – and yes, moral and ethical – questions to answer before we take this too far. As a reminder of how that Space Odyssey story played out:

BOWMAN: “Open the pod bay doors, HAL.”

HAL 9000: “I'm sorry, Dave. I'm afraid I can't do that.”

AI Tomorrow

So can we all agree we don’t want HAL?

But maybe we like the idea of driverless cars, with their potential to change our lives in positive ways. (Auto insurance? Breathalyzers? Don’t need ‘em. Safer, easier, quicker commutes while you work or relax with your seat fully reclined. Hmmm.) And AI-assisted advances in health care? Feeling good about that.

But do we like the idea of AI taking over more and more of our jobs, most likely those that are repetitive or routine? Yes and no. (Unemployment of low-skill workers could skyrocket.) What about the possibility of rogue robots tinkering with our security systems, weapons systems or infrastructure systems like water and electricity? Are we OK with knowing that AI doesn’t respect our boundaries or our privacy?

AI doesn’t care. Can’t care. As of now, we haven’t yet figured out how to give our AI creations real empathy, feelings and a sense of civic responsibility. Even the ever-attentive Tin Man lacked a heart. But, there’s always Sarah Connor’s view:

“Watching John with the machine, it was suddenly so clear. The Terminator would never stop. It would never leave him, and it would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him. It would always be there. And it would die to protect him. Of all the would-be fathers who came and went over the years, this thing, this machine, was the only one who measured up. In an insane world, it was the sanest choice.”

So, with what scientists call “human-level AI” still a ways off, we have time to consider the ramifications of artificial intelligence before we accidentally create a murderous robot overlord. For now, we think we’ll terminate any premature talk of the rise of the machines and call it a day.

“Hey, Siri. How’s the weather tomorrow?”

*Portions of this report compiled using information from
