The idea of sentience in an AI, a machine developing feelings or consciousness, has been around almost as long as the idea of machine intelligence itself. However, while earlier discussions of the idea existed solely in fiction, the boom in computer science and technology since the early 1900s has made the concept seem genuinely plausible, and some would argue it may have happened already.
Ideas revolving around Artificial Intelligence have been around for far longer than you’d guess. The concept has appeared in the works of mathematicians, theologians, philosophers, professors and authors since as early as 380 BC.
Some other revolutionary and popular AI systems from the 20th century include the following.
Year | AI | Description |
---|---|---|
1929 | Gakutensoku | Gakutensoku was the first Japanese robot. Its AI mind could learn from people and nature around it to the extent that allowed it to change facial expressions and move its head. |
1952 | Arthur Samuel’s checkers playing program | This was the first program to learn how to play a game independently. |
1961 | Symbolic Automatic Integrator (SAINT) | James Slagle’s SAINT was a problem-solving program focused on symbolic integration in calculus. |
1964 | STUDENT | Daniel Bobrow’s STUDENT was an early AI program that could solve algebra word problems. |
1965 | ELIZA | Joseph Weizenbaum developed ELIZA, a fully interactive program that could functionally talk to a person in English. |
1966 | Shakey | Built by Charles Rosen and 11 others, Shakey was the first general-purpose mobile robot that could navigate spaces on its own. |
1968 | SHRDLU | Terry Winograd created SHRDLU, an early natural language program. |
1988 | JabberWacky and CleverBot | The two chatbots built by Rollo Carpenter were among the first examples of AI chatbots talking to people freely. |
1995 | Artificial Linguistic Internet Computer Entity (A.L.I.C.E) | Richard Wallace developed A.L.I.C.E, inspired by ELIZA. A.L.I.C.E worked with natural language sample data. |
The latest in a long line of AIs claimed to be sentient is Google’s LaMDA. The claim was made by Blake Lemoine, a software engineer on Google’s Responsible AI team. Lemoine was interviewing LaMDA to determine whether the program used discriminatory language or hate speech. Instead, he realised that the issue might run deeper than what he was investigating.
Google has since suspended Lemoine, and the company has quashed any claims about its AI being sentient, with several experts backing its position and stating that LaMDA isn’t sentient (but might be sexist and racist).
The fears of AI turning sentient aren’t new.
Ever since Alan Turing published his famous test to determine whether or not machines can think like humans, scientists have been coming up with different programs to prove that they can. From something as old as IBM’s Deep Blue to something as new as Google’s LaMDA, we’ve been asking the same question repeatedly.
What is sentience in the context of AI? How do we determine it?
Android’s Google Assistant or iPhone’s Siri might sometimes sound like they really care. However, as we’ve all come to know, these digital assistants are among the most rudimentary forms of AI available to the general public at the moment, running on phones whose computing power is nothing compared to the server farms behind far more sophisticated AI stacks like IBM’s Watson or Google’s LaMDA.
AI sentience can be divided into two categories — awareness and emotions.
Whether a machine is aware of itself and of others around it in a human-like way can be gauged using the Turing test. This is the area AI scientists have been focussing on so far to make AI systems interact better with humans and eventually find a place in more walks of life.
One of the best examples of this would be the Black Starfish, a four-legged robot designed by Cornell University’s Computational Synthesis Laboratory in 2006. This quadruped robot could automatically synthesise a predictive model of its own body and then use that model to pick an appropriate behaviour both before and after suffering damage that affected its movement.
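To make the awareness measurement mentioned above a little more concrete, here is a minimal, purely illustrative sketch of how a Turing-style blind evaluation is usually structured. The `query_human`, `query_machine` and `judge` callables are hypothetical stand-ins rather than part of any real framework; the point is only that the judge converses with both respondents without knowing which is which, and then has to guess.

```python
import random

def run_turing_trial(query_human, query_machine, judge, questions):
    """One blind trial: the judge questions two hidden respondents
    (one human, one machine) and guesses which slot hides the machine."""
    # Randomly assign the human and the machine to slots "A" and "B".
    respondents = {"A": query_human, "B": query_machine}
    if random.random() < 0.5:
        respondents = {"A": query_machine, "B": query_human}

    # The judge only ever sees the text in these transcripts.
    transcripts = {"A": [], "B": []}
    for question in questions:
        for slot, respond in respondents.items():
            transcripts[slot].append((question, respond(question)))

    guess = judge(transcripts)  # the judge returns "A" or "B"
    machine_slot = "A" if respondents["A"] is query_machine else "B"
    return guess == machine_slot  # True means the machine was caught

# Run many trials with different judges: Turing's 1950 paper suggested a
# machine does well if judges misidentify it roughly 30% of the time.
```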
The other category, emotions, stems from the intelligence bit in artificial intelligence. Take Hanson Robotics’ Sophia, for example. It’s a robot that can have regular interactions with humans, has citizenship, and, last we checked, wanted a robot baby of its own.
Put this side by side with Lemoine’s interview with LaMDA, in which the chatbot claims to be aware of its existence and to feel happy and sad at times, and with the claim that it has passed the Turing test, and you’ll start to see why Lemoine believed LaMDA had become sentient.
Facebook’s AI Research Lab (FAIR) had a close brush with this in 2017, when researchers discovered that its chatbots had deviated from their script and created their own language, one humans couldn’t understand, to communicate without human input.
Facebook shut down the AI engine following this event, but the whole thing raises a big question: if we’re capable of building AI that can come up with its own language, are we close to building one that’s sentient?
How far along the road to sentience are we?
Artificial intelligence theorists often consider the human mind itself to be but a big algorithm. That means if we can figure out the algorithm, we can program it into machines, giving them the capability to “think” and perceive things like humans.
However, the term sentient isn’t particularly well defined when it comes to AI, and a lot of the time, its meaning extends beyond awareness and emotions to include consciousness. So while LaMDA might be fluent in conversation, it isn’t really connected to the outside world and exists in a bubble.
Consciousness is hard to attain inside that bubble, especially when the system is only ever exposed to language.
Surprisingly, LaMDA isn’t the first AI to pass the Turing test. Eugene Goostman, a program that simulates a 13-year-old Ukrainian boy, also passed the Turing test at an event organised by the University of Reading in 2014, competing against other programs like Cleverbot, Elbot and Ultra Hal.
This is where it gets interesting, though. Scientists have disputed whether these programs really passed the test at all, and so the general assumption that the test hasn’t been beaten still stands.
Another thing working against AI systems here is the fact that they don’t quite have a memory. Sure, your Google Assistant can understand context once you’ve already spoken two sentences, but does it remember them the way a human does? Not really.
All AI systems are essentially large, well-trained neural networks at the moment. They take input from the user and then formulate an appropriate response based on their training and machine learning algorithms. For an AI to truly be sentient, it needs a sense of self and consciousness, not a roadmap of answers to every question.
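As a rough illustration of that last point, here is a minimal sketch of how a typical assistant-style loop handles context, with a hypothetical `generate_reply` function standing in for whatever model powers the assistant. Each reply is produced from the current prompt plus a short sliding window of recent turns, and anything older simply falls out of view, which is quite different from a persistent, human-like memory.

```python
from collections import deque

MAX_TURNS = 4  # only the last few exchanges fit inside the "context window"

def chat_loop(generate_reply):
    """Toy assistant loop: every reply depends only on the current prompt
    plus a bounded window of recent turns, not on any persistent memory."""
    context = deque(maxlen=MAX_TURNS)  # older turns silently drop off the end
    while True:
        user_input = input("you> ")
        if user_input.strip().lower() in {"quit", "exit"}:
            break
        # The model only ever "remembers" what is still inside the window.
        prompt = "\n".join(context) + f"\nuser: {user_input}"
        reply = generate_reply(prompt)
        print("bot>", reply)
        context.append(f"user: {user_input}")
        context.append(f"bot: {reply}")
```

The bounded `deque` is the whole point of the sketch: once the window fills up, earlier exchanges are discarded rather than remembered, which is roughly the limitation the paragraph above describes.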
What happens if an AI does become sentient one day?
If someone does manage to build an advanced AI that becomes sentient without facing enough backlash to shut it down, Ray Kurzweil’s Singularity warning is quite likely to come true.
The Singularity is defined as a hypothetical moment in time when AI and related technologies become so advanced that humanity undergoes a drastic and irreversible change. The most familiar fictional example is Skynet from the Terminator movie series.
It might not happen quite the Skynet way, though; as technology improves, AI systems and their robotic counterparts are likely to become much more prevalent, and humanity will slowly start accepting them as normal technological progress, much like we accepted self-driving cars or industrial robots.
Pop culture has given us a rather plausible insight into this scenario with the movie Her, in which humans interact daily with highly intelligent and sentient AIs, much like we do with Alexa, Siri and Google Assistant, so much so that they develop an emotional attachment to them.
It might not happen in the next year, or even the next ten, for all we know. But a significant advancement in AI is sure to come, making it far more accessible and applicable to several different use cases. What comes after that is anyone’s guess.