From: Simon Laub (Simon.Laub@FILTER.mail.tele.dk)
Subject: I, Computer - conscious and with a personality, not ever?
Newsgroups: rec.arts.sf.written, rec.arts.sf.science, comp.ai.philosophy, comp.society.futures, talk.philosophy.humanism
Date: 2004-04-19

I, Computer - conscious and with a personality, not ever?
=============================================================

Just met one of those sceptics who don't think there will ever be anyone saying "I, computer". Sure, it is refreshing (for a change) to meet the diehards who don't buy into any kind of techno jingoism. Personally, I was kind of beginning to think that we are all just sitting here waiting for computers that will plead with people not to turn them off, and reply "YES" when asked whether they are conscious. That's what SF does to a guy.

But strangely, you can still get into heated arguments (with some of your fellow humans) if you suggest that some future computer will be really, really conscious. Apparently, consciousness is still the big taboo thought in all of this. This sceptic almost gave me the complete Searle Chinese Room argument: that computers can only pretend to be conscious. And surely there must be some "secret, unknown ingredient in consciousness"?

As nothing is real - unless it's on the internet - a resume of the debate follows :-)

Constructing personalities for humans and robots.
--------------------------------------------------------------

It started off OK, with us agreeing that personalities can be constructed. A hard problem for sure - but possible. We also agreed, more or less, on what a conscious machine should be able to do. "The secret ingredient" was a bit more troublesome. With the details it went something like this --->>>

We agreed that a human personality is constructed as a complex interplay between genes and culture. Obviously, it is difficult to map a particular gene or life experience to a particular personality. Nevertheless, statistically significant links exist between particular genes (that people might or might not have) and personality traits. The trouble is that (for starters) the genetic component of personality likely comprises hundreds of interacting genes. The interesting part is that such a link between personality and genes exists at all, and that some "personality genes" have actually been found (see e.g. New Scientist, September 13th 2003).

A personality decides how a person typically thinks, acts and feels. Often, such personalities are described using five dimensions, where the score you get in one dimension has no bearing on the others. In each dimension most people are found around the middle, with a few at the extremes. The following description of the personality dimensions could be a starting point:

Neuroticism: Measures emotional instability. High scorers are anxious and have low self-esteem, while low scorers are easy-going and at ease.

Extroversion: Measures happiness, energy level and people skills. High scorers are approachable and assertive, while low scorers are introverted and submissive.

Openness to experience: People who score highly like novelty for its own sake, while people at the other end like routine.

Agreeableness: High scorers are friendly and warm, while low scorers are shy and critical.

Conscientiousness: Measures degree of organisation. High scorers are disciplined, while low scorers are easily distracted.
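For what it's worth, here is roughly how I would picture such a five-dimensional profile in code - a toy sketch only, with made-up names and numbers, not anything taken from the New Scientist piece:

from dataclasses import dataclass

@dataclass
class Personality:
    # Toy five-factor profile. Each trait is a score in [0.0, 1.0];
    # 0.5 is the population middle, the extremes are rare.
    neuroticism: float = 0.5        # emotional instability
    extroversion: float = 0.5       # energy level and people skills
    openness: float = 0.5           # appetite for novelty
    agreeableness: float = 0.5      # warmth vs. critical distance
    conscientiousness: float = 0.5  # discipline vs. distractibility

# The dimensions are independent - a high score on one axis says
# nothing about the others.
anxious_hermit = Personality(neuroticism=0.9, extroversion=0.1)

Nothing deep about it - the point is just that, once you buy the five-factor description, "a personality" can be a very small object: five numbers.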
Now, in order to make a robot "personality" you don't necessarily have to build it bottom up from "robot genes" and long robot lives with lots of robot experiences. The sceptic and I actually agreed that in principle every behaviour could be directly programmed (by the ever-present 100 monkey programmers) as high-level rules, where a given input (feeling, sensation) is mapped to a certain behaviour according to the personality's position in such a five-dimensional vector space.

It follows that robot behaviour will be pretty easy to explain. No behaviour is necessarily emergent - it doesn't need to develop automatically - but could instead be very much pre-defined by a programmer and the robot's previous interaction with the environment. That would make such robot personalities much easier to understand - and deal with :-)

So, given enough (enormous) resources it should be possible to give a robot a (kind of) personality based on the workings of simple high-level rules!? An enormous number of such co-contributing rules, that is. (And let's not, for a moment, worry that we would need an infinity of such rules while only having limited resources. After all, chat-robot engineers don't seem to worry about the fact that you can trip up most chat robots with questions such as "What does the letter M look like upside down?" There will never be enough rules to describe everything.) Still, we actually agreed that simple rules will go a long way in constructing personalities (a toy sketch of what I mean follows after the roadmap below).

Roadmap for consciousness.
----------------------------------------

We even agreed that consciousness can be broken down into a number of core functionalities. Igor Aleksander of Imperial College, London, has presented a broad outline of what a conscious machine must be able to do. In his words, a conscious machine must have a "depiction" that matches our inner sensations. In order to form consciousness, these depictions must have at least five qualities:

First, there must be a sense of place ("I" am in the middle of an out-there world).

Secondly, there must be a sense of imagination and of the past.

Thirdly, the system must be able to focus. By focussing attention, the world becomes purposeful.

Fourth, a sense of planning: alternative plans can be laid out, and it can be evaluated how the world will react to these plans.

Finally, emotions will guide the system in its choice of which plans are good and which are not. If a sequence of actions resulted in a positive outcome in the past, that plan is reinforced.

Again, this might not be "really" consciousness, but if no one can tell the difference from the outside, and if the robot itself thinks it is "conscious" - who cares about the difference? So far we agreed! So, let's call in the engineers, I thought!
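Just to make my hand-waving concrete: below is the kind of toy scaffolding I have in mind when I say "simple high-level rules" plus Aleksander's five qualities. It is entirely my own sketch, with made-up rules, names and thresholds - certainly not Aleksander's actual architecture:

import random

class ToyConsciousRobot:
    # A sketch only: one box per quality from the roadmap above,
    # plus a personality-dependent rule of the kind discussed earlier.

    def __init__(self, personality):
        self.personality = personality   # dict of five trait scores in [0, 1]
        self.position = (0, 0)           # 1. sense of place: "I" in an out-there world
        self.memory = []                 # 2. imagination and past: episodes to replay
        self.plan_value = {}             # 5. emotions: how well did a plan go last time?

    def focus(self, sensations):
        # 3. attention: keep only the most salient input this tick
        return max(sensations, key=lambda s: s["salience"])

    def plan(self, stimulus):
        # 4. planning: lay out alternatives and score them by remembered
        #    outcome, biased by the personality vector (a high-level rule)
        options = ["approach", "withdraw", "ignore"]
        def score(option):
            value = self.plan_value.get((stimulus["kind"], option), 0.0)
            if option == "approach":
                value += self.personality.get("extroversion", 0.5) \
                         - self.personality.get("neuroticism", 0.5)
            return value
        return max(options, key=score)

    def act(self, sensations):
        stimulus = self.focus(sensations)
        action = self.plan(stimulus)
        outcome = random.uniform(-1.0, 1.0)   # stand-in for the world's reaction
        # reinforcement: plans that ended well are chosen more readily next time
        key = (stimulus["kind"], action)
        self.plan_value[key] = self.plan_value.get(key, 0.0) + 0.1 * outcome
        self.memory.append((stimulus["kind"], action, outcome))
        return action

bot = ToyConsciousRobot({"extroversion": 0.8, "neuroticism": 0.2})
print(bot.act([{"kind": "stranger", "salience": 0.9},
               {"kind": "wall", "salience": 0.1}]))

Obviously a real system would need an enormous number of such rules (and much better stand-ins for emotion and imagination), but it shows roughly where the five boxes would go - and none of it needs a secret ingredient to compile.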
The secret ingredient.
---------------------------------

Except, of course, according to the critic, we have left out some crucial ingredient necessary to make the whole thing work. I would have been happy to see how far we could get with what we have. Sure, it might only take us so far - but why worry now? My critic friend, on the other hand, obviously thought we would get nowhere, because to him the secret ingredient is the crucial ingredient in the mix. We didn't get to any kind of solution on that, but some interesting secret ingredients came up!

Quantum entanglement took the prize as the most mysterious one. Einstein called it "spooky action at a distance". Set things up correctly, and you can instantaneously affect the state of a particle light-years away by measuring its entangled partner down here on Earth. When you see light from some distant star, some of its photons may well be entangled with atoms of the star, so when you see/measure the photon, you do something to the star and its atoms...

And if entanglement between widely separated particles isn't bad enough, then according to Caslav Brukner of the University of Vienna, moments of time can also become entangled. So, electrons in our brains are an entangled mess, and perhaps they are entangled with distant stars, or with events of some distant future or past. Experiments on Tau Ceti, as well as future experiments here on Earth, might influence the state of our very own quantum computer! And somehow our consciousness is sitting right next to (or inside) all of this. So, according to the critic, surely quantum entanglement (or some other secret ingredient) must also be an ingredient in human consciousness. And surely quantum entanglement isn't a "straightforward" thing, easily explained or replicated in a robot!?

----

Today (2004), in London you can hardly walk two feet without being filmed by five security surveillance cameras. According to recent estimates, the average commuter in London is filmed 300 times a day. So, the critic and I ended up agreeing that in the future it is reasonable to assume that many of these cameras will be manned by computers - computers that will sound the alarm to human supervisors in the event of crime or accident (anomalous behaviour).

It is still science fiction to suggest that these computers will be watching robots intermingling with humans. And even more so to suggest that crime in the future could be robots mugging other robots (for good parts), while police computers send in the police robots. Police robots with custom-made, friendly (but strict) personalities?! We agreed that it will probably happen, though. But whether these police robots will be conscious, or just masquerading as conscious, we didn't agree on.

Somehow, neither side came up with a killer punchline. A nice slippery subject, this.

FUT: rec.arts.sf.written

-Simon

Simon Laub
silanian.tripod.com