Our visit to the Ishiguro Laboratory.
Dept. of Adaptive Machine Systems.
Osaka University, Osaka, Japan.
Simon Laub, Nov 2008.
More video from our visit: ReplieeQ2 (wmv), nicknamed Ando-San (quicktime mov).
Remember Philip K. Dick's "Do Androids Dream of Electric Sheep?" In that book, early androids were detectable
because of their limited intelligence. But then later on....
Standing in front of ReplieeQ2 in the Ishiguro Laboratory in Osaka, Japan -
you begin to wonder if 'later on' has already started..?
Through the most helpful Ayako Fukunaga, secretary to Prof. Ishiguro, we got a tour
of the Ishiguro Lab at the Dept. of Adaptive Machine Systems in Osaka, Japan, in late Nov. 2008.
Here, Ishiguro's assistant Michihiro Shimada showed me and my Japan-travel comrade, Erik Futtrup, around.
The following is a report from our day in future land.
The main attraction is obviously ReplieeQ2. With flexible silicone for skin rather than hard plastic,
and a huge number of sensors and motors that allow her to turn and react in a human-like manner,
she is quite lovely.
She can flutter her eyelids and move her hands like a human.
When she speaks, her lips move correctly. Or at least, so it seemed to me. I don't speak Japanese.
So far, she doesn't move around. Her legs and feet are attached to the chair that
she sits on, quite immobile. With so much else going on with her, that's quite relaxing.
Surely, some of her movements are somewhat staccato, android-like.
But then again, other moves are quite graceful, human-like.
Which is obviously what her creator is going for.
According to interviews, Professor Ishiguro believes that it may prove possible to build an android
that could pass for a human, if only for a brief period:
'An android could get away with it for a short time, 5-10 seconds. However, if we carefully select the situation,
we could extend that to perhaps 10 minutes', he is quoted as saying to the BBC.
ReplieeQ2 has 41 body sensors, which can all trigger a number of responses.
Press her below the ear and she becomes all cute and giddy, etc.,
depending on the program she is running.
With only 4-5 people working in Ishiguro's group (Nov. 08),
some of ReplieeQ2's responses are actually quite amazing.
Among ReplieeQ2's better tricks is the one where
her control program gets input from sensors in the floor.
Most people have a hard time figuring
out how she can know,
- sitting down, facing the other way -
when someone is approaching her.
People guess advanced audio sensors etc. before,
after a long time, someone hits upon the obvious idea of sensors in the floor.
One might call it cheating, but if sensors in the floor
work just as well, then it's allowed in the android world.
However, with a little patience, surely the advanced, human-like audio system
will also be there some day?
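(As an aside: to give a rough idea of what 'sensors triggering preprogrammed responses' could look like in software, here is a minimal, hypothetical sketch in Python. The sensor names, thresholds and behaviours are invented for illustration - this is not the actual Repliee Q2 control code, which we did not get to see.)

# Hypothetical sketch of sensor-triggered behaviours.
# Sensor names, thresholds and responses are invented for illustration;
# they are not taken from the real Repliee Q2 control software.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class SensorEvent:
    sensor_id: str   # e.g. "touch_below_left_ear" or "floor_tile_3" (invented names)
    value: float     # normalized reading, 0.0 .. 1.0

def giddy_response() -> None:
    print("Play 'giddy' gesture: tilt head, flutter eyelids, short laugh sample.")

def turn_towards_visitor() -> None:
    print("Turn head towards the direction of the activated floor tile.")

# Table mapping sensor ids to (threshold, behaviour) pairs.
BEHAVIOUR_TABLE: Dict[str, List[Tuple[float, Callable[[], None]]]] = {
    "touch_below_left_ear": [(0.2, giddy_response)],
    "floor_tile_3":         [(0.5, turn_towards_visitor)],
}

def dispatch(event: SensorEvent) -> None:
    """Trigger every preprogrammed behaviour whose threshold the event exceeds."""
    for threshold, behaviour in BEHAVIOUR_TABLE.get(event.sensor_id, []):
        if event.value >= threshold:
            behaviour()

# Example: a visitor steps on a floor tile behind the android.
dispatch(SensorEvent(sensor_id="floor_tile_3", value=0.8))

A table-driven setup like this simply mirrors the 'sensor triggers preprogrammed behaviour' description above: a new trick, like the floor tiles, just means a new entry in the table.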
With Ishiguro's group busy with groundbreaking research,
the actual machinery of Repliee Q2 is made by Kokoro (business slogan: 'Get the latest android here,
she can play an
active part as an attendant at an event. Or buy our big lifelike dinosaur.').
Apparently you can order just about anything from these guys.
Which is very practical for Ishiguro's group when they
work on telepresence.
- Where telepresence is all about technologies that allow a person to feel as
if they were present somewhere else, and/or to have an effect
at a location other than their true location. -
Normally, you would mount a video camera on one of the little yellow Wakamaru robots,
and have fun remote controlling that.
But, as has been widely published, Ishiguro took that to a
whole new level,
when he had Kokoro make Geminoid, his own android doppelgänger.
With a thing like that you can be home in bed, while the other you
goes to work for you, picks up groceries, chats with the wife, and what have you.
Actually, Ishiguro also gave his 5-year-old daughter this same ultimate robot gift.
But when she was invited to the lab to see her very own robot doppelgänger,
she was so terrified by it that she refused to set foot in his lab
ever again.
Apparently, no one had actually told her the 'flower and bee story' of how
robot doppelgängers are made.
Now, the robotic replica of the daughter is standing in a closet in the laboratory behind Repliee Q2.
It still manages to scare lab visitors.
Perhaps it is just too lifelike? Or just not as cute as ReplieeQ2?
Who, btw, is modelled after Ayako Fujii, a famous NHK news announcer.
Perhaps already fast asleep at home, while ReplieeQ2 reads the news on TV for her?
So, is ReplieeQ2 almost human?
Have we entered the Philip K. Dick world here in Osaka?
With androids passing for humans
for as long as 10 min?
Obviously not yet. So far, ReplieeQ2 doesn't do any really clever stuff
- like catching a ball you throw her, but only the yellow ones,
and only when
you tell her that she is cute, without using that word.
No, for now sensors can trigger preprogrammed 'simple' behaviours. And that's it.
The big question remains. Will a future ReplieeQ2
be a true Philip K. Dick android, or are we just
talking clever dolls here?
According to one 'law' of technology (Amara's law), we tend to overestimate
the pace of technological
progress in the short run, and underestimate it in
the long run. So, you wonder...
Still, ReplieeQ2 has an enormous gap to cross.
I recently attended a seminar
where Professor David Favrholdt (of SDU)
listed basic human instincts:
Human behaviour is at the very least influenced
by instincts to a) get food and drink, b) have sex, c) avoid danger,
d) fight for position in a group, followed by domination or submission behaviour,
e) give parental care and be helpful, f) seek community and safety, and g) explore the surroundings.
ReplieeQ2 has none of these instincts. So that's one difference :-)
David Favrholdt continued the seminar by listing different kinds of
human knowledge:
Knowledge 'that' - you can e.g. eat fruit a, not eat fruit b, etc.
Knowledge 'how' - you can build a pyramid, make silk, etc.
Knowledge 'why' - why does ice melt at a certain temperature? etc. Use theory.
It takes clever humans to get to the last two types of knowledge.
And you could argue that many humans never go further than type one knowledge, perhaps type two.
Obviously, ReplieeQ2 is nowhere near anything but 'that' knowledge.
And it gets worse. In humans we assume there is an inner mental state connected
to a behaviour.
Take Buddhist literature. Here the human mind is described as being a
very complex thing
with an enormous number of inner states.
Actually, Buddhist literature lists 84,000 different kinds of negative human emotions,
which must be dealt with in order to be on the path to
enlightenment.
Ok, these multifaceted emotions
boil down to just five main ones: hatred, desire, confusion,
pride and jealousy.
Still, there are many...
And at the other end of the spectrum of the mind we find at least seven types of happy states,
among them: amusement, fiero (the delight of meeting a challenge),
excitement (novelty), awe (wonder),
sensory pleasure
(in each of the senses), and calm peacefulness.
- And next to nothing of this is modelled in the ReplieeQ2.
Instead, human observers now project these states onto ReplieeQ2.
Because
we assume that something that displays a behaviour must have an inner state?
And it gets worse. We humans do not agree, at all, on
whether we are anywhere near
an understanding of what a human mind is in the first place.
So, how should we be able to agree on whether an android has reached human levels?
E.g. according to Buddhism, the subtlest state of the mind
- the very essential
nature of awareness itself - has no neural correlates.
Western neuroscience, on the other hand, thinks the brain is the mind.
I.e. here Cartesian dualism is not mainstream.
Indeed, one philosopher (Gilbert Ryle) called it the
'myth of the ghost in the machine'.
Instead, the understanding is that
all mental events,
no matter how
exotic, will be some brain state or another.
That gives hope for a future ReplieeQ2 being all human.
But, along comes David Chalmers with his hard problem of consciousness:
To explain how physical processes in the brain give rise to subjective experience.
- According to Chalmers - that's different from explaining 'easy' problems like reaction to stimuli,
focus of attention etc.
'Easy' because we can imagine them being computed - experience itself, what it is like, is something else?
You could argue that the hard problem will be solved when all the easy problems
are solved.
Or that, with all the easy problems solved, we will have a much better platform to address the hard problem.
Or you could argue that dealing with the hard problem will take some completely new understanding.
Anyway, claiming anything like a subjective android reality for a future ReplieeQ2
- binding its various sensor inputs together into one subjective reality - is for now pure science fiction.
Reality is dangerous territory. According to neurologist Antonio Damasio: If our organisms were designed
differently, the constructions we make of the world around us would be different as well. We do not know,
and it is improbable that we will ever know, what 'absolute' reality is. (p. 97 in 'Descartes' Error').
So, on that happy thought - that we know nothing -
we can conclude that we are far away from any firm understanding of human
inner life.
For now, ReplieeQ2 is not almost human. Will she ever be? Perhaps? :-)
Why is ReplieeQ2 such an emotional issue in debates?
In many parts of the world androids taking over jobs would surely be a
big issue.
But apparently not in ReplieeQ2's home country of Japan.
At least we didn't find any angry 'robots will make us redundant'
crowd outside the Ishiguro lab.
Instead, with Japan's population expected to slide by around a quarter by 2050, and immigration a sensitive issue,
the idea that you can develop humanoid robots that can work as maids etc. seems to be broadly accepted.
Not so elsewhere. On the plane back home from Japan, I showed a fellow traveller, a dentist from
New Caledonia, video of ReplieeQ2. He was not all praise:
- They have seen Terminator and have learned nothing?
Such attitudes follow a long tradition. Czech writer Karel Čapek
coined the term 'robot' in 1920.
In his play R.U.R., robots are slaves.
They soon outnumber humans and gain ever-expanding intelligence.
Eventually there
is a robot revolt, where the human race is wiped out.
Makes you wonder: the person who invented the modern
concept of robots predicted
they would wipe us out in the end.
Famous SF author Asimov wasn't much better. Starting back in 1950 with his I, Robot short story collection,
the robots are quite error prone. Or, if they are not error prone, then they are out implementing
grand schemes for humankind's future, without letting any humans in on the master plan.
So much for the idea that robots are just innocent computers with legs.
No wonder then that some people feel a bit uneasy when it comes to robots :-)
Even when the robots dutifully play their part as slaves, it's no good.
Remember Senator Palpatine when he stands there on the balcony inspecting his droid army?
We homo saps know that's what evil people do: they keep slaves and they turn people into slaves.
Bad people simply like people to obey. It gives them a twisted sense of self-importance and power.
Human domination instincts gone amok.
Good people, on the other hand, give people freedom and joy.
So, if you are a roboticist in the business of making intelligent android slaves (to join the Palpatine droid army,
or for you to boss around), you better hire a PR image consultant to tell the public that you are only
making cute little, innocent vacuum cleaners at the robot plant.... :-)
All in all, there is a certain irony here: some of those
who are most opposed to having androids
share our world
are also
those who actually believe that androids could eventually be really 'human-like'!?
Certainly, it takes human imagination to believe that androids could eventually
suffer as slaves or run amok and try to kill people.
Even when the androids aren't stealing jobs and aren't fulfilling
twisted human dreams of becoming emperor,
some people still don't like them.
For sure, some people just don't like new things. New stuff takes you out of your comfort zone.
It forces you to exercise your brain,
and worst of all, perhaps you will end up with a need to adjust your worldview...
Bad thing... After all, it's your worldview, knowledge representation, skills, know-how
and understanding of the world that give you identity and pride.
Come to think of it, if humans are not careful they will end up being so full
of pride
that they stop updating their knowledge,
which of course will eventually make
the knowledge obsolete and irrelevant.
Becoming mentally old and just not liking new things is such an easy human pitfall...
And there is so much new stuff going on with the androids.
E.g.
a) Life without genes - is one such new thing.
Where, for biological beings, the purpose of a body is the genes' way of making more genes.
A chicken is the egg's way of making more eggs. So to speak.
Then, what's the purpose of an android body? To make genes in some other (biological) body happy?
Really? Life without genes? Surely, the genes didn't see that one coming. And certainly,
it will take some time for humans to get used to the idea!
After all, most people aren't like Steven Pinker, who once wrote, on choosing not to reproduce,
'If my genes don't like it they can jump in the lake'.
b) Life without growth and reproduction - is another first.
It has never been easy figuring out what life really is:
E.g., can you have lifeless parts as the building blocks of life?
Bodies of living organisms that are half dead?
So, if we are already a little confused, the future just turns it up another notch.
I.e. consider science fiction like this: What if you take a super-high-resolution brain scan today,
and future engineers use the scanned picture to resurrect a copy of you.
Say, in 500 years' time and inside a robot body?
Will it then be a 'not really alive' robot, or almost you?
And what if that robot is blown up?
Does anyone die then?
c) Morality in a robot world - is also new, uncharted waters.
E.g., consider the situation only 11 years from now, in 2020,
when, according to some forecasts, 30 percent of the American army will be robotic...
With fewer human lives at risk (on one side at least), will it not be much easier to go to war then?
etc.
Adjusting old wisdom is painful - emotional.
And with issues like slaves and human instincts gone awry thrown in there
- obviously,
arguments are going to be heated.
What did Philip K. Dick think?
Philip K. Dick
was largely unknown to the general public at the time of his death in 1982.
And it should be noted that he died before the internet age, and never saw an iPod or a modern cell phone.
But he did what SF authors do - survey the future and report back to the rest of us.
And when his novel Do Androids Dream of Electric Sheep? was made into the
movie Blade Runner,
he became world famous.
And Philip K. Dick saw a future where androids could end up being very human-like indeed.
So much so that in Blade Runner you have to look for a special reflection in the
eye to tell
who is human and who is android.
A reflection in an eye to tell the difference between a life as a slave and a free homo sap. life.
And sometimes the eye reflection is not even enough.
Then you have to apply empathy tests. Like
Dick's (now famous) Voight-Kampff test,
which distinguishes human from android by measuring blushing,
involuntary eye movement,
and responses to emotional questions about harming animals
(replicants are not that empathic...
It supposedly takes time for them to come up with the good, empathic human answers).
And Dick's vision gets worse. There will be androidization - humans becoming just like androids.
Intelligent beings will lead false existences or, worse still, existences imposed on them by those in power.
Intelligent beings will be inanimate, reasonable, obedient and predictable elements in manipulative systems.
In short, they will be robots.
Imagine (intelligent beings) not controlling their own desires, wants and talents ...
But hey, wait a minute - if intelligent and not free is 'android horror', were humans then androids all along?
Confused about all of this, Dick doesn't really offer any solid solutions.
He might be disenchanted with future society,
but he offers few hints
on how a better society could be constructed.
Sure, there should be more empathy and compassion, and
less violence and revenge.
Who would disagree?
Prophets rarely come down from their mountain seclusion to bring us happy news.
And Dick was no exception. Poor future androids are intelligent but still slaves.
Future humans are obedient and manipulated. A little kindness and empathy doesn't change much,
but it's the best anyone can do. Because, in the Dick world, the future has messed up human heads
and confused them about the nature of reality itself.
Is ReplieeQ2 proof that Philip K. Dick was right about everything?
Surely not.
Doomsday preachers don't like to do it. But we could give it a go: Let our reasonable frontal lobes
take control and tell our fear center, the amygdala, to back down a little.
Then a Dick future isn't inevitable. Happy people with a good sense of humour and a steady hand on reality
are not impossible, and just as believable as a Dick vision of sad, humourless and messed-up future humans.
Surely, if you can actually believe that people will be able to make supersmart human-like
androids in the future,
then you can also believe something as outrageous as a happy future for mankind.
In the end - an optimist will tell you the glass is half-full; the pessimist, half-empty;
and the engineer will tell you the glass is twice the size it needs to be.
The rest of us will know the future when we get there.
-----
Just about to happen - or not. In e.g. 'The Electric Ant', Dick gives us his vision:
- You're a successful man, Mr. Poole. But, Mr. Poole, you're not a man. You're an electric ant.
- Christ, Poole said, stunned.
- So we can't really treat you here, now that we've found out. We knew, of course,
as soon as we examined your injured right hand; we saw the electronic components
and then we made torso x-rays and of course they bore out our hypothesis.
- What, Poole said, is an 'electric ant'? But he knew; he could decipher the term.
A nurse said, An organic robot.
- I see, Poole said. Frigid perspiration rose to the surface of his skin, across all his body.
- You didn't know, the doctor said.
- Your hand can be made at a reasonable expense, either to yourself, if you're self-owned,
or to your owners, if such there are.
In any case you'll be back at your desk at Tri-Plan functioning just as before.
- Except, Poole said, now I know. He wondered if Danceman or Sarah
or any of the others at the office knew.
...
I think I'll kill myself, he said to himself. But I'm probably programmed not to do that;
it would be a costly waste which my owner would have to absorb. And he wouldn't want to.
...
Scan me visually, he instructed the computer. And tell me where I will find the
programming mechanism, which controls my thoughts and behavior.
The computer said, Remove your chest panel. Apply pressure at your breastbone
and then ease outward.
He did so. A section of his chest came off; dizzily, he set it down on the floor.
...
Two men bent over him, their hands full of tools. Maintenance men, he realized.
They've been working on me.
One of the uniformed maintenance men said, - You've been playing around with your reality tape.
Michihiro Shimada demonstrates Repliee Q2 (Ando-San) to us.
Simon Laub.
Nov. 2008.
--- This page as pdf file ---
--- Datalog Bladet, Vol. 31, April 2009, p. 5-16 (pdf) ---
Simon Laub
Email: simonlaub.mail@FILTER.gmail.com
(remove filter and dot from address before use)
www.simonlaub.net