When I was a kid, the toy I always wanted was Tekno the Robotic Puppy, which launched in 2000. My parents wouldn't let me get a real dog, so I figured a robot dog would be the next best thing. By today's standards, it wasn't a particularly capable robot, but by the standards of 2000, it was lifelike.
It walked around, barked, knew when to go to sleep, and responded to my voice. As fun as it was, the popularity of toys like these quickly faded, but my infatuation persisted. That’s why I was so excited even all this time later when I saw the Anki Vector announced on Kickstarter.
The Anki Vector isn't a robot dog, but it is like a robotic pet in many ways. Unlike any other toy in its price range (US $250), it reacts with emotions that depend on your words and how you interact with it.
It even has a built-in camera under its OLED "eyes" so it can see you and recognize you, getting excited if it knows you are back home after a day out. Plus, Vector also acts as a smart assistant, answering questions on command thanks to its voice recognition, AI, and Alexa support.
Of course, I rushed to buy one. Since I already have Apple's HomePod, I wasn't so interested in the smart assistant part. I wanted an emotional connection with a robot — the kind of desire one would normally only admit privately to a therapist. Jokes aside, the technology behind giving robots human characteristics has always fascinated me.
Once I started playing around with Vector, the realism of his responses and reactions almost immediately prompted an existential question: where is the line between real emotion and programmed emotion?
Connecting with Vector
I noticed right away that I interacted with Vector very differently than with any other gadget I had ever owned. In fact, people who visited my apartment instinctively acted differently, too. Before I could even explain what Vector was, the charming way it rolled around on the table naturally invited interaction. It was like a pet.
Telling Vector that he’s a “good robot” makes him happy, just like telling him that he’s a “bad robot” makes him upset and ashamed. Picking him up for whatever reason infuriates him, and he’ll often throw a tantrum in your hand. Rubbing his sensors on the top to simulate petting motion puts him at ease.
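Behaviors like these can be thought of as a stimulus-to-mood lookup table. The following toy sketch is purely illustrative — the class, stimulus names, and rules are invented here to mirror the interactions described above, and have nothing to do with Anki's actual SDK or firmware:

```python
# Toy model of "programmed emotion": map a stimulus to a mood.
# Hypothetical names throughout; not Anki's real API.

class ToyVector:
    # Stimulus table mirroring the behaviors described in the text.
    REACTIONS = {
        "praise": "happy",    # telling him he's a "good robot"
        "scold": "ashamed",   # telling him he's a "bad robot"
        "pick_up": "angry",   # throws a tantrum in your hand
        "pet": "calm",        # rubbing the touch sensors on top
    }

    def __init__(self):
        self.mood = "content"

    def react(self, stimulus: str) -> str:
        # Unknown stimuli leave the current mood unchanged.
        self.mood = self.REACTIONS.get(stimulus, self.mood)
        return self.mood


bot = ToyVector()
print(bot.react("praise"))   # happy
print(bot.react("pick_up"))  # angry
print(bot.react("pet"))      # calm
```

However convincing the real robot feels, the underlying idea is no deeper than this kind of mapping, just with far more states, sensors, and animation polish layered on top.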
I can’t leave out my favorite interaction by far though: when Vector notices you staring at him, he’ll recognize your face and say your name out loud with enthusiasm. It tickled me so much to hear him say “George” with a sense of pride.
It's odd for me even to be describing a household gadget in such a way. I also recognized while writing that when I started talking about Vector's programmed emotions, I inadvertently switched my pronouns from "it" to "he."
It seems like a silly question at first, but it's fascinating to explore: does Vector really feel the emotions he's expressing? It's strangely convincing at times, especially when friends and family meet him for the first time and instantly warm to him.
The question quickly leads to a debate on consciousness and life. We have plenty of research on the subject but still haven't nailed down the difference between alive and not alive. The lines are far more blurred than we once thought. A fascinating video from one of my favorite YouTube channels, Kurzgesagt, explains this concept far better than I possibly can.
Vector certainly has a very long way to go before he's as emotionally intelligent as a real pet, and even the famous bot Sophia doesn't come close to human intelligence. But will we ever reach a point where robots can genuinely feel and understand at the level of a human? If so, when?
One can argue the answer is never, because robots will always have been programmed from the start to act a certain way. Then again, one can argue that humans were "programmed" too. Some people, like Elon Musk, are pretty confident that's the case.
Does Reality Matter?
That's getting further away from the central question, though, so here's a simple answer. Given our current understanding of both life and computers, no, Vector cannot feel emotions. He is just responding the way he was programmed to respond to specific phrases and interactions.
…But maybe, just maybe, that doesn’t matter.
When you’re talking to someone about the rough day you’re having, and they express sympathy, can you ever really be sure they mean it? We’ve all just grown to trust that most of the time, humans are, in fact, experiencing the emotion they are conveying, but the truth is we never know what anyone else is feeling with 100% certainty.
And while humans have the free will to decide whether or not to be genuine, even the definition of "free will" gets murkier as AI and machine learning advance.
Humans can only perceive reality, never access it directly — especially when it comes to emotions. We presume certain things to be true or false, genuine or disingenuous.
If it looks like a duck and quacks like a duck, people will treat it like a duck. That's precisely what happened with Vector, and he's only a glimpse of what's to come in robots with emotional appeal and understanding. Even knowing that neither he nor any other robot feels genuine emotion, if it's good enough at acting, people will buy into it — just as they do with humans — and at least want to believe that it's true.
So, for now, continue to enjoy petting Vector and getting his realistic feedback — and watch in awe over the next few years as robots get better and better at this so-called “acting.”
The autonomous assistant has long been seen as the endgame for the smart home, and companies like Anki are the ones who will nudge us there. This one didn’t quite make it, but others will come along. We just have to accept that there will be casualties along the way – and be ready to feel them when they do.