Asimovian 21st century robotics.
High-level cognitive functions in robots, as envisioned in Asimov's books.
Most of Asimov's robot short stories are set in the first age of robotics
and space exploration.
A recurring theme is the
Three Laws of Robotics,
hardwired in a robot's (positronic) brain.
All robots in his fiction must obey these laws,
which ensure that a robot does not turn against its creators.
But the underlying question behind it all is of course: How intelligent will these 21st century robots be?
How human-like? Will the coming centuries see things like:
a) Robot Love. Robots capable of love, and perhaps capable of passionately killing for love?
b) Intelligent robots that use force to realize their plans for the future. A Pax Robotica?
c) Robots motivated by built-in human urges, such as sex?
d) Robots that are sophisticated enough to use all the dimensions of power to get what they want?
Asimov certainly had opinions about all of this, opinions that will be explored below:
1. Roboticide and Robot love.
2. Pax Robotica.
3. Robot Sex.
4. The 3 dimensions of power: Robot power and Asimov.
Roboticide and Robot love.
For those of us who enjoy taking a peek at things to come, as we go down the road of machine intelligence, Isaac Asimov never disappoints - and I would dare say that he is still the best. I just reread Robots of Dawn and was again amazed by the many insights (on the future) one finds in this book.
In Robots of Dawn we are faced with the horrors of a roboticide: the killing of a humanoid robot. A robot who was loved by a (human) woman, and who made love to a (human) woman.
At one level it is a pretty good detective story; at another level Asimov once again explores the future of mankind in this future robot world (of ours).
Guided by the Three Laws of Robotics, robots must protect humans and humanity. A kind of hardcoded love, in other words? Perhaps even love passionate enough to kill (other robots) for?
And certainly love so strong that robots can't refuse to make physical love to humans who so desire. And robots don't need to do anything halfheartedly. They can be absolutely courageous in love, as they know no fear of pain or death, because there is no pain or death (for them).
So, one might have expected robots to face all the dangers life might throw at the humans (they love)? Wouldn't that be the ultimate robot love?
Not really; according to Giskard (Asimov), that would be dangerous for humans in the long run. So the mind-reading robot Giskard (magic is just a very advanced technology, as Arthur C. Clarke would have put it) decides that humans will have to settle (face) the Galaxy without the help of robots. Somehow the difficulties, dangers and harm without measure (things the robots could prevent if present) will be better for the development of man in the long run than the safety advanced robots could bring.
Love can indeed have many forms! When humans have passed a certain threshold, someday in the far future, robots can perhaps intervene once again. Much in the same way, parents know that they can't be overprotective and must let their children face the world on their own.
So Giskard reasons that he must override the First Law of Robotics in the long-term interest of humanity. Which makes you wonder: what is Asimov really saying?
Robots might kill other robots out of love (guided by the First Law of Robotics). Robots might also make love to a human (guided by the First Law of Robotics). But robots can't help humans in the tasks and trials of being human.
Is he, Asimov, being a pessimist or an optimist here? A pessimist, in the sense that our greatest inventions really come to nothing when faced with the really tough questions of life - or an optimist, in that we already have within ourselves all the powers needed to face these challenges?
--------
The Original Laws of Robotics
(The Calvinian Religion)
1. A robot may not injure a human being or,
through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human
beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence,
as long as such protection does not conflict
with the First or Second Law.
The Zeroth Law
(The Giskardian Reformation)
0. A robot must act in the long-range interest of
humanity as a whole, and may overrule all other laws
whenever it seems necessary for that ultimate goal.
---------
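For readers who like to picture the law hierarchy as code, the sidebar above can be read as a strict precedence ordering over a robot's candidate actions. Below is a minimal, purely illustrative Python sketch of that reading; the Action class, its fields and the example options are hypothetical placeholders of mine, not anything taken from Asimov's books.
--------
# A purely illustrative sketch (not from Asimov): the Laws as a
# precedence-ordered filter over candidate actions. All names here
# (Action, harms_humanity, choose, ...) are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool = False   # Zeroth Law concern
    harms_human: bool = False      # First Law concern
    disobeys_order: bool = False   # Second Law concern
    endangers_self: bool = False   # Third Law concern

# Laws listed from highest to lowest precedence
# (the Giskardian ordering: the Zeroth Law may overrule the rest).
LAWS = [
    ("Zeroth", lambda a: a.harms_humanity),
    ("First",  lambda a: a.harms_human),
    ("Second", lambda a: a.disobeys_order),
    ("Third",  lambda a: a.endangers_self),
]

def choose(actions):
    """Pick the action whose worst violation sits lowest in the hierarchy.

    Each action is scored by the tuple of its law violations, ordered
    Zeroth to Third, so breaking the Third Law is always preferred to
    breaking the First, and so on.
    """
    return min(actions, key=lambda a: tuple(v(a) for _, v in LAWS))

if __name__ == "__main__":
    options = [
        Action("obey the order, harm a human", harms_human=True),
        Action("refuse the order, protect the human", disobeys_order=True),
        Action("refuse the order and sacrifice itself",
               disobeys_order=True, endangers_self=True),
    ]
    print(choose(options).name)   # -> "refuse the order, protect the human"
--------
In this toy ranking the Zeroth Law sits above the First, so an action that harms humanity always loses to one that merely harms a single human - which is, in caricature, the reasoning Giskard uses when he overrides the First Law for the long-range good of humanity.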
-- Posted on Usenet: 12-01-2003 --
Simon Laub.
www.simonlaub.net
Page revised Dec. 2008.
Picture is from the Adaptive Machine Systems lab in Osaka, Japan. Nov. 2008.