Impressions and Links
from AISB 2011.
Machine Consciousness.

The University of York.
4th to 7th April 2011

Wow, what a conference, what a wonderful gathering of brilliant minds!
I certainly enjoyed my days at the AISB 2011 convention!
The convention was held at the new Heslington East campus and had many tracks: Machine Consciousness, Computing & Philosophy, and AI and Games, to name a few. My main focus of attention was the Machine Consciousness track.
I do realize that some people might take offense at headlines like ''Machine Consciousness''.
At the conference Igor Aleksander reminded me that computer memory used to make people feel much the same way. Back then (in the '60s and '70s) you had to use the term computer storage if you wanted any grants.

Hopefully, the reality behind the words will be somewhat clearer, after you have read this page.

1. Computer Models of Cognition.

The conference had a brilliant mix of concrete projects and more speculative presentations. The first speaker was John G. Taylor (Trivia: I had met him and Y. Kinouchi on a local bus out to the campus area earlier that morning).
John G. Taylor has always been a great inspiration. And on his homepage, Race for consciousness (closed in 2012, so use archive.org or similar), you will find many great articles on consciousness, attention and more.
John Taylor might now be an elderly gentleman, but he certainly doesn't beat around the bush. On his page one finds papers like ''Roadmap for the possible construction of a sequence of artificial brains based on guidance from the human brain'', ''Will artificial brains ever really think?'' etc.
At the conference he presented ''Can functional and phenomenal consciousness be divided?'', where he deals with attention architectures in the brain. Starting from using attention mechanisms to pick out the most salient input in a scene, he ended his talk with remarks on how an understanding of attention control might help us learn more about how conscious experience is constructed (in numerous animals).
By constructing ''simple'' cases of consciousness, he argued (my interpretation of his words) that along the evolutionary route to real consciousness, some stages would have less ownership etc., such that functional and phenomenal consciousness can be divided in these stages.

Self System in a Model of Cognition.

Next up was Uma Ramamurthy. Her paper considered several issues that might arise when attempting to include a self system in a software system/cognitive robot. The main goal of her work, implementing a self system in the LIDA model, is to provide a better and more complete understanding of cognition and the Global Workspace Theory. In her words: ''The self system is directly linked to consciousness, and as we implement models of machine consciousness, it is imperative that we include the self system in these models''.
The LIDA cognitive cycle can be subdivided into three phases: the understanding phase, the consciousness phase, and the action selection phase. Attention codelets begin the consciousness phase by forming coalitions of selected portions of the current situational model and moving them to the Global Workspace. A competition in the Global Workspace then selects the most salient coalition, whose content becomes the content of consciousness.
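To make the consciousness phase a bit more concrete, here is a minimal sketch in Python of attention codelets forming coalitions that then compete in the Global Workspace. This is my own toy illustration of the idea, not LIDA's actual code; all names and salience values are made up.

```python
import random
from dataclasses import dataclass

@dataclass
class Coalition:
    content: str      # a selected portion of the current situational model
    salience: float   # how strongly the attention codelet advocates its content

def attention_codelets(situational_model):
    """Each codelet wraps a portion of the situational model in a coalition."""
    return [Coalition(content=item, salience=random.random())
            for item in situational_model]

def global_workspace_competition(coalitions):
    """The most salient coalition wins; its content becomes the
    content of 'consciousness' for this cognitive cycle."""
    return max(coalitions, key=lambda c: c.salience)

situational_model = ["red cup on table", "door opening", "loud noise"]
winner = global_workspace_competition(attention_codelets(situational_model))
print("Conscious content this cycle:", winner.content)
```
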
It is within this LIDA model that it will be interesting to see whether Antonio Damasio's Proto-self, Minimal Core Self and Extended Self can be meaningfully implemented. The Proto-self, seen as a short-term collection of neural patterns of activity representing the current state of the organism, might already be present in LIDA. The work is therefore concentrated on the Minimal Core Self and the Extended Self.
And the presentation was about the challenges involved in this work.

A model of primitive consciousness in an autonomously adaptive system under a framework of reinforcement learning.

Y. Kinouchi, from Japan, presented a model of primitive consciousness, composed of stochastic neural networks, that autonomously adapts to its environment without a teacher. The system is composed of six modules: a perception module, an integration module that calculates a candidate for an action, a motor control module, an episodic memory module, a working memory module, and a basic control module, which has an evaluation mechanism influenced by emotion.
The presentation dealt with the preparations to simulate this system on a computer to clarify its operational characteristics in more detail.
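To give a feel for how such a modular agent might be put together, here is a minimal sketch in Python. The wiring, the learning rule and all names are my own assumptions for illustration, not Kinouchi's implementation.

```python
import random

class Environment:
    """Toy stand-in for the agent's world."""
    def observe(self):
        return "light on"
    def act(self, action):
        return 1.0 if action == "approach" else 0.0  # reward signal

class PrimitiveAgent:
    """Toy loop over the six modules described above (assumed data flow)."""
    def __init__(self, actions):
        self.actions = actions
        self.values = {a: 0.0 for a in actions}  # basic control: evaluation mechanism
        self.episodic_memory = []                # past (percept, action, reward) episodes
        self.working_memory = []                 # short-term context

    def integrate(self, percept):
        # Integration module: propose a candidate action, mostly
        # greedy on learned values, sometimes exploratory.
        if random.random() < 0.1:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.values[a])

    def step(self, env):
        percept = env.observe()                  # perception module
        action = self.integrate(percept)
        reward = env.act(action)                 # motor control + feedback
        self.values[action] += 0.1 * (reward - self.values[action])
        self.episodic_memory.append((percept, action, reward))
        self.working_memory = self.episodic_memory[-5:]

agent = PrimitiveAgent(["approach", "avoid"])
env = Environment()
for _ in range(50):
    agent.step(env)
print(agent.values)   # 'approach' should end up valued highest
```
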

A Cognitive Neuroscience Inspired Codelet-based Cognitive Architecture for the Control of Artificial Creatures with Incremental Levels of Machine Consciousness.

Klaus Raizer, from Brazil, presented a paper on the development of artificial creatures, controlled by cognitive architectures, with different levels of machine consciousness.
The iCub (the rather pricey - 200,000-250,000 euro - small humanoid robot for research into human cognition and artificial intelligence) serves as the platform for the experiments, while the triune brain theory, proposed by MacLean, serves as a roadmap to achieve each development stage.
Their group proposes the application of cognitive neuroscience concepts to incrementally develop a cognitive architecture, following the evolutionary steps taken by animal brains.
A completely codelet-based system ''core'' has been implemented, serving the whole architecture. Raizer argued, following Dennett and Hofstadter, that consciousness is the emergence of a serial stream on top of a set of parallel interacting devices (in their world: codelets), and that a codelet-based cognitive architecture therefore gives a biologically plausible system. There was some discussion in plenum.
Obviously, not all were convinced that there is a simple road from codelets to consciousness.
However, for them it was important to state that, while some experiments try to better understand or make important discoveries about biological consciousness, that is not the case in this work. Here the aim is to take advantage of new findings in science to build better technologies.

2. Action Selection and Artificial General Intelligence.

Joanna J. Bryson presented A Role for Consciousness in Action Selection. The point here is that cognitive strategies generally cost time, and time spent on cognitive processing delays action. That is expensive: another agent might take advantage of the situation. Indeed, biological consciousness is intrinsically slow and noisy.
So, what compensates for this loss of time? Apparently plasticity, the ability to solve problems that change more rapidly than other ways of action selection can manage.
What is a good attention strategy then?
1. Focus attention on the actions you are already taking
(It is likely that you will need to take the same kind of action in the future).
2. Focus attention longer on things in your environment you cannot predict.
A system where we focus attention, at least briefly, on unexpected, loud or novel sounds or visual motions might be a good model for e.g. grazing animals. If we add a drive to actively explore novel situations, we might come up with something like what more creative species, like predators or primates, have. If a new strategy becomes more attractive, the agent might be described as having an insight: an old plan is flushed and a new plan is selected.
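Those two rules are simple enough to sketch. Below is a toy Python model of the allocation, my own illustration rather than Bryson's: attention weight goes to the current action and to stimuli the agent cannot yet predict, and a stimulus loses attention as its behaviour becomes familiar.

```python
import random

class AttentionModel:
    """Toy attention allocator for the two strategies above (my own sketch)."""

    def __init__(self):
        self.expected = {}   # stimulus -> running prediction of its intensity

    def surprise(self, stimulus, observed):
        expected = self.expected.get(stimulus, 0.0)
        error = abs(observed - expected)
        # Habituation: the better we predict a stimulus, the less attention it gets.
        self.expected[stimulus] = 0.8 * expected + 0.2 * observed
        return error

    def allocate(self, current_action, stimuli):
        weights = {current_action: 1.0}   # rule 1: attend to what you are doing
        for stimulus, observed in stimuli.items():
            weights[stimulus] = self.surprise(stimulus, observed)  # rule 2: the unpredictable
        total = sum(weights.values())
        return {k: round(v / total, 2) for k, v in weights.items()}

model = AttentionModel()
for _ in range(5):
    # The steady stream becomes predictable and fades; the rustling bush stays salient.
    print(model.allocate("grazing", {"steady stream": 0.5,
                                     "rustling bush": random.random()}))
```
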
Joanna Bryson also had some insights on ethics and consciousness. According to her, ethics is an evolved mechanism for sustaining societies, and it is most efficient when it appropriately allocates responsibility. Indeed, those who are aware are more likely to be responsible than those who are not. Perhaps only the truly conscious can be real moral agents?
So, what about conscious machines? Here, she believes the most stable solution for human society is to value humanity over robots and maintain our responsibility for the machines we make. Humans should be responsible for the machines, and for the actions the machines might take.
Obviously, the Hard-AI crowd thought that sounded a lot like keeping conscious beings as slaves.

Murray Shanahan then presented a paper on Artificial General Intelligence Requires Consciousness.
Shanahan's publication page is an absolute delight, and his talk about building coalitions in the mind probably also took some input from his recent book, Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds (how is the inner life of a human being constituted, what are the neural underpinnings of the conscious condition, etc.), which I'm looking forward to reading.

Antonio Chella's presentation, High-Dimensional Perceptual Signals and Synthetic Phenomenology, dealt with the typical dimensionality reduction in perceptual systems. Such reductions, as done e.g. in typical robot vision applications, are in his view instances of ''unconscious'' processing, whereas a conscious process must deal with and exploit the richness of the entire signal coming from the retina.

3. Stephen Wolfram: Intelligence and the Computational Universe.

It was then time for a video linkup to Stephen Wolfram in America (the video linkup gave Wolfram's presentation a nice touch of ''science fiction'' / ''his master's voice'', but that is another story).
Wolfram's book, A New Kind of Science, contains a systematic, empirical investigation of computational systems for their own sake. In order to study simple rules and their often complex behaviour, Wolfram believes it is necessary to systematically explore all of these computational systems and document what they do. According to Wolfram, some complex computations cannot be short-cut or ''reduced'', and this is ultimately the reason why computational models of nature must be considered. (Btw, Wolfram has been criticized for his lack of modesty, poor editing of the book, lack of mathematical rigor etc. But the book was a bestseller, just as his program Mathematica is.)
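The flavour of these experiments is easy to reproduce. Here is a minimal Python sketch of an elementary cellular automaton, the kind of simple rule the book studies; Rule 30, starting from a single black cell, quickly produces complex, hard-to-predict patterns. (My own illustration, using a wrap-around grid for simplicity.)

```python
def step(cells, rule=30):
    """One update of an elementary cellular automaton: each cell's next
    state depends only on its 3-cell neighbourhood, via the rule number."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 63
cells[31] = 1                      # a single black cell in the middle
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```
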

In his presentation, he started with what he calls the breakdown of recent physical models of the universe...
To him, it is not surprising that a physical model breaks down if you always assume that the thing you are trying to describe is much simpler than the observer trying to describe it. In current physics we assume that a ''simple'' mathematical representation of the universe exists, rather than a complex computational representation. Implicitly, we assume that the observer is much more complex than the thing being observed.
But how can we know that we can describe the universe with relatively simple physical equations? How can we know that the universe has a simpler structure than us, the observers? His answer is, of course, that we can't know. And it might actually be the reason why our ''simple'' physical models of the universe are failing these years.
He has therefore been looking at various programs that generate computational universes. And, according to Wolfram, some of these computer-generated universes end up being really good approximations of our own universe. In some of them he finds special relativity, quantum effects like the Bell inequality, etc.
Giving us yet another Copernicus-push: our universe is not the only one, but just one of many which can be generated/simulated by (Wolfram's kind of) computational systems.

Next he presented Wolfram Alpha. I especially like Wolfram Tones, where his machine generates musical seeds that you can then play around with. It is all about outsourcing (human) cognitive tasks to machines (like coming up with a musical seed).
Eventually, the human provides only the general direction of what needs to be done, and the machines fill in the gaps.
Notice also the difference between Google, Watson and other search engines on the one hand, and Wolfram Alpha on the other. All these other search engines ''look up'' data, and cannot produce new data. Wolfram Alpha, on the other hand, can produce new, never-before-seen data when you send tasks to its (Mathematica) engine.

4. Katie Slocombe: Primate communication.

The next invited speaker was Katie Slocombe. Her research area is primate communication. And she gave a really nice and humorous account of her work with chimps in Edinburgh Zoo, and in the wild (Uganda).
Obviously, language is important for human cognition. Indeed, language may have been one of the ''tools'' that boosted humanity and led to the colonisation of the whole planet. Quentin Atkinson and others have speculated ''that language was the stepping stone for civilisation, which led to better co-ordination and co-operation that might have led us to expand''. (Atkinson has also speculated that ''the evidence suggests that there was a single origin of language, rather than a number that happened independently''.)
Katie Slocombe's presentation was less speculative though. It started with an introduction to various monkey calls (for predators like snake, leopard and eagle). Eventually, I think we could all make monkey calls for snake, leopard and eagle, along with calls for good and not so good food.
We were then introduced to Liberius, from Edinburgh Zoo, apparently quite horrible to make experiments with. Most chimp experiments end with a snack for a job well done. But Liberius' mother had apparently ruined his career as a ''research chimp'': she had always eaten pretty much everything around, making poor Liberius quite impatient and eager to receive his reward immediately (before his mother could see it), spoiling pretty much all the experiments you could think of.
Hilarious stuff.
However simple it looks in textbooks, out there in the real world, experiments with real chimps are not easy.

5. Consciousness, Meaning and the Future Phenomenology.

In his presentation about Consciousness, Meaning and the Future Phenomenology, Ricardo Sanz explained that research into machine consciousness is justified in terms of the potential increase in functionality, but also as a source of experimentation with models of human consciousness, to evaluate their value.
He made some really good points about the relationship between cognitive neuroscience, cognitive modelling, cognitive theory, cognitive robotics and robotics: these fields actually need each other, are closely connected and will benefit from interdisciplinary work.

His words about ''engineering the right phenomenology mechanisms, because it will be the origin of the intrinsic motivations of the agents... that these will not be human phenomenologies, but the phenomenologies that when deployed will make the agents pursue our satisfaction'' seemed very logical. And certainly ''we need not only a better understanding of the artificial, but also of our own consciousness to do these things.''

Would a super-intelligent AI necessarily be (super-)conscious?

Futurists are predicting a ''technological singularity'', where artificial super-intelligence, AI++, explodes onto the scene.
Steve Torrance's presentation asked whether AI++ would necessarily also be conscious, or perhaps even super-conscious.
Sure, some might be tempted to dismiss all singularity scenarios as entirely baseless. But, according to Steve Torrance, it would be ill-advised simply to turn one's back on the subject. The fundamental changes in human history that a super-intelligence would bring about are something we need to think carefully about.
Will an AI++ be conscious? And what does that mean?

On the consciousness of super-AIs, the answer must lie somewhere between a) super-AIs would necessarily be conscious beings, and b) they would be super-smart, highly powerful zombies (totally without conscious awareness). The positions range from hard scepticism (consciousness does not drop out of super-AI, because super-intelligence is theoretically impossible using current-day AI techniques) to: a super-AI would be likely to have all the functional features of consciousness that we have, and the functional features are all there is to consciousness.

Today (we can say), much work is being done on developing ''artificial general intelligence'' (AGI) (e.g. see my Jeff Hawkins post), where, instead of ''islands'' of domain-specific ability, such systems will exhibit mainlands of operative capacity.
And any agent that might qualify as AI++ has to be built around a robust AGI model. Yet, prototypes aside, no one has come near to developing a serious practical contender for AGI...
Obviously, this might all change in the future though.

Anyhow, if we accept the possibility of AI++, would such a system's conscious states exhibit some kind of quantitative or qualitative difference from human conscious states? For that matter, do the phenomenal states of a human whose cognitive states are situated towards the high end of the intelligence distribution differ from those of a human with scores at the low end?
Our present-day ethical systems are based upon the assumption that every human being is entitled to similar consideration in terms of rights to avoid suffering and to seek personal satisfaction and fulfillment. Should we then, with the advent of AI++ agents, expect new, higher layers to be added to the moral hierarchy (especially if these beings possess levels of phenomenal experience that far exceed human capacities)?
Indeed, should the moral interests of super-AIs therefore take precedence over the moral claims of humans?
And, concerning the morality of the AI++, is it safe to assume that moral problems always result from some rational failure, a failure we must assume the AI++ can be guaranteed not to have?

Surely, Steve Torrance's talk made it absolutely clear that there is a lot to consider, not least many moral issues, as we get closer to the age of the AI++s.

6. Information Integration, Data Integration and Machine Consciousness.

Tononi's theory of integrated information (see my post here) has two axioms:
1. Consciousness is highly informative. This is because each particular conscious state, when it occurs, rules out an immense number of other states, from which it differs in its own particular way.
2. Conscious information is integrated. Whatever scene enters consciousness remains whole and complete; it cannot be subdivided into independent and unrelated components that can be experienced on their own.
The unified nature of consciousness stems from a multitude of interactions among relevant parts of the brain. If areas of the brain become disconnected, as occurs in anesthesia or deep sleep, consciousness wanes and perhaps disappears.
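Tononi's actual measure, phi, is hard to compute, but the intuition behind ''integration'' can be illustrated with something much cruder: the mutual information across a cut through a tiny two-unit system. This is my own toy proxy in Python, not Tononi's phi.

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) for a joint distribution given as {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Two coupled units (their states tend to agree) vs. two independent units.
coupled     = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(mutual_information(coupled))      # ~0.53 bits: the parts hang together
print(mutual_information(independent))  # 0.0 bits: no integration across the cut
```

The independent case is the toy analogue of the anesthesia/deep-sleep scenario above: integration across the cut drops to zero.
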


In their presentation, World-related Integrated Information: Enactivist and Phenomenal Perspectives, Igor Aleksander and Mike Beaton stated that they agree with Tononi's intuitions. But they argue that there are important aspects of consciousness which are overlooked in Tononi's axioms: consciousness needs to be about the world, and it needs to involve interaction with the world.

In the case of brains and behaviour, a subject's information may involve body and world in ways which mean that the subject's information simply isn't decodable from the subject's brain state.

The information a subject gains about the world depends not only on the level of integration in the subject's brain, but also on the level of integration between subject and world, across the sensory interface.

Common to these two views, therefore, are the claims that conscious information is always about the world, and that consciousness fundamentally involves interaction with the world.

Information Integration, Data Integration and Machine Consciousness.

David Gamez's presentation also dealt with mathematical and algorithmic theories of consciousness.

Tononi not only claims that there is a correlation between information integration and consciousness, but that consciousness actually is integrated information. Gamez thinks that information integration might not be the final theory, but that there will eventually be some sort of algorithmic theory to decide consciousness!?

Indeed, as scanning techniques develop, it may soon be possible to get exact data about the location and connections of every neuron in the mouse brain. This would then enable a real-time simulation of a particular mouse's brain - which might be capable of experiencing the same pain as the original mouse?

Likewise, people's brains might eventually be scanned, and the information used to build a simulation in a computer. If information integration, or a similar theory, could then be shown to make accurate predictions about consciousness, it could be used to predict whether the simulated brain is as conscious as the brain was before the person's death...

Indeed, accurate predictions about consciousness in humans have many applications: identifying whether a person is unconscious during an operation, measuring the degree of consciousness in coma patients, etc.

Dreaming.

In the closing panel discussion, Ricardo Sanz had some brilliant observations about dreams and dreaming, and why conscious machines need to dream.
When dreaming, we score ourselves in simulations of dangerous situations and rehearse appropriate responses, all in preparation for the problems we will likely face the following day.

7. Experiments.

Various games and products were presented in the Ron Cooke Hub.

I especially liked Dr. Tom Froese's (DPhil, MEng; Ikegami Laboratory, Department of General Systems Studies, Graduate School of Arts and Sciences, The University of Tokyo) ''virtual'' walking stick.

With his device attached to your hand, it sends out an infrared beam, and you get vibrations in your hand if you approach an obstacle. What a brilliant idea! I'm 100 percent sure it will be a great tool for all suffering from blindness or visual impairment.
Read more about it here: Enactivetorch, Froese blog.
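The principle is simple enough to sketch as a control loop. Below is a toy Python version, with simulated stand-ins for the infrared sensor and the vibration motor; I haven't seen the EnactiveTorch code, so the sensor range and update rate are my own guesses.

```python
import random
import time

MAX_RANGE_CM = 150.0   # assumed sensor range; the real device may differ

def read_distance_cm():
    """Stand-in for the infrared distance sensor (here: simulated)."""
    return random.uniform(10.0, 200.0)

def set_vibration(intensity):
    """Stand-in for the vibration motor; intensity in 0.0..1.0."""
    print(f"vibration: {intensity:.2f}")

def torch_loop(steps=10):
    for _ in range(steps):
        distance = read_distance_cm()
        # Closer obstacle -> stronger vibration; nothing in range -> silence.
        set_vibration(max(0.0, 1.0 - distance / MAX_RANGE_CM))
        time.sleep(0.05)   # fast updates, so active probing feels continuous

torch_loop()
```
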

EcceRobot.

Owen Holland gave the last presentation: a demonstration of his EcceRobot robot system, a platform used for the investigation of human-like cognitive features.
Owen Holland previously worked on drones. But somehow real robots are more exciting, of course :-)

Spiegel TV has some introductions to the project:
Spiegel TV - Embodied Cognition and Spiegel TV - Frankensteins Traum.

And there are lots of EcceRobot YouTube videos out there. Unfortunately, none that show the really scary onboard video camera feed...

Its body movements are shaky though.

And as we watch the robot, it takes pictures of us.

- - -


More at:
The Society for the Study of Artificial Intelligence and Simulation of Behaviour: AISB.


-Simon

Simon Laub
www.simonlaub.net

© April 2011 Simon Laub - www.simonlaub.dk - www.simonlaub.net
Original page design - April 9th 2011. Simon Laub - Aarhus, Denmark, Europe.