How truly intelligent is Artificial Intelligence (AI)? It’s a polarizing debate: either AI will solve the world’s woes or robots will rule us all – Matrix-style. But it’s all a little more complicated than Hollywood makes it seem…
Watch podcast episode 2 here
For a deep dive, do listen to our Beyond the Data podcast hosted by Sophie Chase-Borthwick (Calligo’s Global Data & Governance Lead) and Tessa Jones (VP of Data Science Research & Development).
Meanwhile, in this blog we look at tea-making and social care robots to illustrate an otherwise nuanced, arguably never-ending debate about the ‘intelligence’ part of the AI equation.
It’s important first to consider the different types of AI:
- The majority of AI is ‘narrow AI’ – a system built to perform one particular task. You can build lots of narrow AI systems that work together.
- General AI, in comparison, is far broader – intelligent machines that can learn, perform, and comprehend intellectual tasks much like a human. This is the territory where it’s a lot less clear-cut.
Let’s unpick the gray area of ‘general AI’ by looking at what robots are capable of – and whether this makes them truly intelligent yet…
Tea-making as a success criterion for intelligence?
Making a cup of tea isn’t something most of us think twice about, and it wouldn’t be an obvious first test of intelligence. However, scientists are doing just this, typically by:
1. Coding in the sequence of tasks the robot has to complete (boil the kettle, get a cup, put the teabag in, and so on).
2. Using experience-based learning: demonstrating how to make a cup of tea, and giving the robot more examples of a task whenever it doesn’t do it well or gets something wrong.
To have the robot successfully make a cup of tea, scientists have to build in and prescribe many of the parameters and tasks the robot must complete. However, if the environment changes (for example, the robot has to make tea in a different room), it will likely struggle because it isn’t familiar with the new environment and parameters.
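To make that brittleness concrete, here’s a minimal sketch of the first, prescriptive approach (plain Python; the step list and environment model are our own hypothetical stand-ins, not any real robotics framework):

```python
# Toy illustration: a "robot" whose tea-making plan is hard-coded
# against one specific room. Step names and the environment model
# are hypothetical, purely for illustration.

KNOWN_KITCHEN = {"kettle": "counter", "cup": "cupboard", "teabags": "drawer"}

TEA_STEPS = [
    ("fetch", "kettle"),
    ("fetch", "cup"),
    ("fetch", "teabags"),
    ("boil", "kettle"),
    ("pour", "cup"),
]

def make_tea(environment: dict) -> bool:
    """Run the prescribed steps; fail if anything isn't where we coded it."""
    for action, item in TEA_STEPS:
        if action == "fetch" and item not in environment:
            # No inference, no adaptation: an unfamiliar room breaks the plan.
            print(f"Cannot fetch {item}: it isn't where I was told it would be.")
            return False
        print(f"{action} {item}... done")
    return True

make_tea(KNOWN_KITCHEN)                       # succeeds in the room it was coded for
make_tea({"kettle": "shelf", "mug": "sink"})  # fails: 'cup' and 'teabags' are missing
```

The prescribed plan only succeeds in the room it was written for; move one item and the ‘robot’ has no way to infer where else to look.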
Intelligence can’t just be about managing to do a task correctly; it’s about being able to use inference to adapt to a new environment and navigate unfamiliar parameters to complete a task.
However, this adaptation and re-learning is far slower for robots than for humans. As Tessa Jones highlights, this is captured by Moravec’s paradox: it’s easy to train robots to do things humans find hard, like chess and other logic-driven tasks, but hard to train them to do things humans find easy, like walking and image recognition.
In the podcast Sophie Chase-Borthwick observes: “Playing a game of chess is very rule-based [and easy to code into a robot] whereas making a decent cup of tea is definitely an art”.
Using a Japanese concept to make robots more human
When looking at robots comprehending tasks much like a human, what could be more human than caring for one another? Japan is leading the exploration of social robotics for assisted care. However, rather than having the robot just serve a functional task, Japanese scientists are going one step further…
“There’s a concept coming out of Japan – a concept called ‘kokoro’”, says Tessa. “For robots to actually be effective and useful, there needs to be a heart-to-heart connection between the human and the robot”. There are typically three kinds of kokoro you can achieve:
1. How the robot affects the human. If the human is feeling sick, can the robot interact in a way that lifts their spirits? For example, Paro, a soft baby seal robot designed for use in hospitals and nursing homes as a therapeutic tool.
2. Whether the robot understands a human’s emotions. The robot can conceptualize when the human is feeling sad or angry. But getting this right is very difficult, as it’s hard to distinguish between anger and happiness based on imagery and voice alone (see the sketch after this list). Microsoft has even recently stopped a lot of its programs around emotion detection, as it opens the door to racial bias given how much facial and voice features differ between people.
3. When the robot itself feels and has its own ‘kokoro’. Currently, this remains confined to science fiction as it maps to ‘super intelligence.’
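To see why the second kind is so hard, here’s a toy sketch (the features and thresholds are hypothetical, nothing like a production emotion-detection API): from crude vocal signals alone, anger and happiness can be indistinguishable, and individual baselines differ.

```python
# Toy illustration of why naive emotion detection is hard: anger and
# happiness (excitement) can produce similar raw vocal signals. The
# features and thresholds here are hypothetical, purely for illustration.

def naive_emotion(pitch_hz: float, loudness_db: float) -> str:
    """Classify emotion from two crude voice features."""
    if pitch_hz > 220 and loudness_db > 70:
        # High pitch and high volume: rage or delight? The raw signal
        # alone can't tell, and baselines vary from person to person.
        return "angry or happy?"
    return "calm"

print(naive_emotion(250, 75))  # excited laughter and furious shouting look the same
print(naive_emotion(180, 55))  # quiet speech reads as 'calm' whatever the mood
```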
However, it’s worth considering the spectrum of human diversity. Neurodiverse people, for example, don’t always recognize certain emotions, yet they are still intelligent. So recognizing emotions and responding to them isn’t, on its own, a demonstration of intelligence.
As Sophie poignantly puts it: “Are we re-defining intelligence to suit the machines – and in doing so, carving out some humans?”