Impressions and Links
from the University of Reading
Enactive Cognition Conference 2012.

Enactive Cognition, the University of Reading.
February 27th to 28th 2012.

Set in the tranquil heart of Windsor Great Park, Cumberland Lodge was the perfect setting for the University of Reading's ''Foundations of Enactive Cognitive Science'' conference.

Built in 1652, close to Windsor Castle and Eton, and only made accessible to public conferences in 1947, the lodge is a place oozing with English charm and character.
T.S. Eliot, Paul Tillich, Sir Karl Popper, Stanley Spencer, A.J. Ayer - to name just a few - have all visited the lodge, and have used its unique setting as a backdrop for contemporary discussions on science and society.

Queen Elizabeth II is the current patron of the lodge.
For more about Cumberland
Lodge see here.

1. Foundations of Enactive Cognitive Science.

A short introduction to the conference could be found (January 2012) on the University of Reading homepage:
The pursuit of cognitive science is concerned with the scientific study of the mind. Interdisciplinary in nature, the discipline spans philosophy, cognitive psychology, neuroscience, artificial intelligence, linguistics, anthropology, social sciences, biology and physics - as well as any other discipline with a perspective on the workings of the mind.
Enactive cognitive science emerges from (these) diverse research interests, and has yet to mature into a discipline on its own....
Enactive cognitive science distances itself from classical cognitivist and computational perspectives, by broadening the current focus on the brain and including the body and its relationship to the outside world.
And, sure, keynotes from Mark Bickhard, Fred Cummins, Tom Froese, Thomas Fuchs and Kevin O'Regan, together with diverse contributions from oral presentations and posters, made it a very exciting event to attend.
In AISB No. 135, Marilyn Panayi (Ph.D. candidate, School of Health Sciences, City University, London) summarized the event with these succinct headlines:
The meeting provided a truly interdisciplinary forum that focused on the challenges and future directions of research for cognitive science.
The meeting revisited Maturana and Varela's biological systems framework (*) that places cognition as knowledge, and knowledge as action, at the centre of how organisms bring forth their interaction with the world. Such approaches reconsider the dependence on representation for cognitive processes.
Discussions emphasized the need to revisit the theoretical foundations for embodiment in cognitive science, together with consequences for conventional computational models.

* The Tree Of Knowledge (by Maturana and Varela) outlines a unified scientific conception of mind, matter, and life. The central insight is that cognition is not a representation of the world ''out there'', but rather a ''bringing forth of the world through the process of living itself''. A book about ''Knowing how we know''.
And here, cognition and consciousness can only be understood in terms of the enactive structures in which they arise, namely the body.

See more in the Wikipedia Enactivism article.

1.1. Page Overview. Impressions and Links, Preparations and Background Material.

In the next section (2) you will find my impressions and links from (some of) the presentations at the conference.
(Please note: These notes don't do justice to the often brilliant presentations that initiated them! So, please read the original presentations to avoid any distortions ...).

In sections (3 - 9) you will find notes that were jotted down in January 2012 (mostly), as part of my preparations for the conference.

2. Impressions and Links from the 2012 Enactive Cognition Conference.

The Enactive Perspective: Tom Froese started the conference with the talk ''Don't look back: Enaction is the future of cognitive science'', a passionate defense of the Enactive Paradigm. According to Froese, we should distance ourselves from the classical cognitivist and computational perspectives, and try to broaden the current focus on the brain, by including the body and its relationship to the outside world.
Future research will decide what an ''Enactive Perspective'' really is, but the presentation made it clear that human cognition is influenced by many factors that narrower perspectives might miss (e.g. sounds and vibrations picked up in the environment might trigger thought sequences that other perspectives can't account for).

Temporal Experience: David Silverman talked about Enactive Perception (with the title: ''Temporal Experience and The Sensorimotor Contingency Theory''). At the end of the talk, a rather clever model of ''Retentional Enactive Perception'' was presented.
As I understood it: The model was called retentional because the experiential ''now'' has duration. Things that are present ''now'' are accompanied by ''retentions'' from the past, or ''protentions'' into the future.
In the model, perception occurs when we track dependencies between sense input and movements in space and time...
And the model was enactive, because things out there help us bring about a ''unity of experience''.

Silverman said he was influenced by O'Regan and Noe's work.
Which made sense to me, as I looked into their paper ''A sensorimotor theory of perceptual experience''.
Apparently, O'Regan and Noe write about some of the same mechanisms, where things out there help establish order: ''There are many processes underlying life (respiration, reproduction, ingestion, digestion, movement, maturation, cell division etc.) ... Life has unity because it is a coordinated way of behaving within an environment. It is not something that is generated from the underlying mechanisms''.
The unity of the experience, we believe, can be explained by the unity of the activity itself, that is, by the fact that there is a coherent pattern of coordinated behavior that is responsive to external circumstances.
Enactive Robotics: Frank Broz talked about ''Enactive development of turn-taking behaviours in a childlike humanoid robot: Interaction histories and short term memory''.
Most interestingly, we were here given some frontline stories from working with an iCub.
The iCub is a 1 metre high humanoid robot testbed for research into human cognition and artificial intelligence.

There are only 20 iCubs in the world.
We know that many of an infant's earliest perceptual skills are human-centered. E.g. babies are hardwired to pick out (other) human faces.
So, it is interesting to model these things (by using an eye-tracker to detect visual attention, etc.).
And in the future, hopefully, it will be possible to see more complex behaviours emerge on top of these simpler (built-in) behaviours.
E.g. headshaking is emergent, as the robot is trying to avoid seeing what it doesn't like.
Certainly, social interaction (''out there stuff'') will be important for creating intelligent behaviours!

A Robotic Self: Elena Antonova talked about ''Do enactive cognitive robots need a self? Lessons from neuroscience''.
And started by warning us that many fMRI imaging studies are based upon ''2 - 3 percent over base-level activation'' in some brain centers. For some people such an activation increase sounds like a lot, but what it really means might not be so easily understood...
We were also warned that the concept of self has many meanings and definitions.
And what about the robots? Are we talking small insect-like robots, or are we talking about human-like robots with a temporally extended horizon?

Many interesting thoughts were presented. Along the way, I noticed that it would be relevant to consider ''How much intelligence you can have >>without<< internal representation''. And, that ''self modelling robots that develop and maintain body models'' must be very relevant in the context of enaction.

Synchronization: Fred Cummins talked about ''Skilled movement and synchronization''.
He started with the rather controversial (and funny) remark that ''We become machines, as we go into specific domains'' - E.g. when we play tennis, we are actually acting as tennis robots.
I.e. the brain is a prediction machine - and establishing a period (clock) allows for (better) predictions.
First we looked at synchronous speech (choral speech, chanting, pledge of allegiance).
Without practice:
Speakers manage to synchronize such that corresponding points in the two parallel speech channels are no more than about 40 ms apart on average.
This ability is all the more remarkable, as utterance-to-utterance variability, even within one speaker, would suggest that two speakers might have great difficulty in precisely timing their spoken durations to match those of another speaker.
Indeed, the degree of synchrony achieved seems to rule out any simple reactive account, in which one speaker listens and then acts.
Speech is highly coordinated (some members of the audience didn't really believe that it could be that coordinated. But surely, it is).
And, note, the interesting part here is that such coordination is a kind of sense-making.
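Cummins' point that establishing a period (a clock) allows for better predictions can be illustrated with a toy coupled-oscillator model. This is entirely my own sketch, not anything presented in the talk: two ''speakers'' with slightly different natural rates each nudge their phase toward the other, and end up far closer in phase than a purely reactive account would suggest.

```python
import math

# Toy sketch: two speakers modelled as coupled oscillators. Each has its
# own natural speaking rate, and each continuously nudges its phase toward
# the other (a Kuramoto-style coupling). With coupling, the phase gap
# shrinks; without it, the gap drifts freely.

def synchronize(rate_a, rate_b, coupling, steps=2000, dt=0.01):
    """Return the final phase difference (radians) of two coupled oscillators."""
    phase_a, phase_b = 0.0, 1.0          # start out of phase
    for _ in range(steps):
        diff = phase_b - phase_a
        phase_a += dt * (rate_a + coupling * math.sin(diff))
        phase_b += dt * (rate_b - coupling * math.sin(diff))
    # wrap the difference into (-pi, pi]
    return math.atan2(math.sin(phase_b - phase_a), math.cos(phase_b - phase_a))

uncoupled = abs(synchronize(2.0, 2.2, coupling=0.0))
coupled = abs(synchronize(2.0, 2.2, coupling=1.0))
print(uncoupled, coupled)   # the coupled pair ends up nearly in phase
```

Of course, real speakers are not sine-wave oscillators; the sketch only shows why mutual, continuous accommodation beats a one-way ''listen, then act'' strategy.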

The ability to dance to music is another example of such sense-making.
We have it, apes don't...
And again: People who are familiar with each other get synchronized quicker than those who are not.

For speech synchronization, it is clear that the resulting synchronous speech is, indeed, a collaborative effort in which each speaker accommodates to the other. Sure, maybe speakers use their peripheral (not foveal) vision to help out. Still, a simple reactive account is not enough to explain what might be going on.
I.e. it is better to use an enactive approach, where perception and action are inseparable, and where there is focus on the rich embedding of an organism in its environment.

Concepts: Joel Parthemore talked about ''Concepts Enacted: Toward an enactive theory of concepts''.
Concepts are of course very important building blocks of human thought. And Joel Parthemore's presentation certainly made it clear that we use concepts everywhere.
But what about concepts and enaction? Parthemore gave this definition of enaction:
Although enaction means many things to many people, a central feature of most if not all enactive approaches is an emphasis on the embodiment of cognitive agents and their embeddedness in a particular environment. Agent cannot be separated from environment nor cognition from lived experience.
And followed up by saying that concepts are located in the interaction with the environment. Concepts need an agent to live in, but exist in the interaction:
An enactive theory of concepts ultimately locates concepts not in the concept-possessing-and-using agent (say, as internal, a priori mental representations) nor in the agent's environment.
Concepts are enacted out of the dynamic interaction of the agent with the agent's environment.

When mental representations are being discussed, the agent will be in the foreground and the environment in the background; when affordances (an affordance is a quality of an object, or an environment, which allows an individual to perform an action; for example, a knob affords twisting) are being discussed, the environment will be in the foreground and the agent in the background.
At the end, Parthemore had some interesting thoughts about language being parasitic on concepts (where language allows for higher levels of abstraction). In my opinion, it would have been very interesting to hear more about these links, and how it all fits within the enactive paradigm.

Memory: Jose Donoso (Berlin) talked about ''The many heads of the Scylla monster of memory''.
We know that we have some representation of the environment within our heads. E.g.
London Cab drivers' grey matter enlarges and adapts to help them store a detailed mental map of the city.
Studies have shown that the hippocampus of the drivers has changed its structure to accommodate their huge amount of navigating experience. I.e. one particular region of the hippocampus, the posterior (or back), was bigger in the taxi drivers than in controls.
Indeed, some 40 percent of the brain is devoted to visual processing.
But can we make any sense of this representation just by looking at the brain?
Isn't it like looking at a submarine driver and his instruments, and then trying to figure out the situation, based only on this information?
And, obviously, there are many other complications. E.g. it doesn't make things easier that active situations might be processed differently than passive situations... etc.
If nothing else, then Donoso's interesting talk certainly convinced me that ''enactive memory'' (procedural memory, learning how to ride a bike) is a rather complex thing.

Language: Didier Bottineau talked about ''Describing a language as a tekhne kognitike: The case of Breton''.
We looked at language as a cognitive technique.
Bottineau gave inner speech as one example (where ''inner speech'' can be both for one's own benefit and for the benefit of others).
Of course, in all cases, language is an amplifier of intelligence.
Interestingly, studies have shown that written languages tend to be egocentric, whereas spoken languages tend to be less so, giving completely different perspectives on the world.
In some languages, such as Breton (?), the motive of the speaker controls the building of the sentence (Yoda-like): ''Revealed your intentions are'' (i.e. the sentence starts with the answer to the question).
The fact that interactivity grounds language, and that language renders possible human action and co-action, is relevant here, as it points towards enactive cognitive models to deal with these phenomena.

Schizophrenia: Thomas Fuchs (Heidelberg) talked about ''Embodiment, enactivism and schizophrenia''.
According to Fuchs, ''(Many) Mental disorders may be described as problems of self-awareness and being in the world''.

Schizophrenia might be seen as a disembodiment. With:
- A weakened basic sense of self (ipseity).
- A disembodiment of action and perception.
(''I couldn't do anything without thinking about it''. There is no longer a unified string of actions).
- A basic sense of detachment.
Minkowski writes about the illness:
- Loss of vital contact to reality.
- Being alien to the world.
- Feel I am not part of this world.

Patients with this illness need to constantly spend time reconstructing a shattered world.
There is a loss of familiar patterns of perceptions.

One might then ask: Are these delusions in the ''brain''?
Probably, often, the delusions are a property of (failed) interaction, in the social world.
I.e. to live in the world we need a shared trust between people, an attunement to others.
Some of these problems for the schizophrenic follow from a breakdown in trust.
Studies have shown (controversially) that a schizophrenic is often the weak point in a (dysfunctional) family....

Looking at a person as an enactive system, one might then ask: What are the exact boundaries of enactive systems?
According to Fuchs: ''The patient is ill means that his world is ill''.
So, we need an extended view of mental illness. Where we are dealing with a disturbed way of enacting in the world, and perhaps a disturbed ecology in the patient's world.

Phenomenal Consciousness: Kevin O'Regan (Paris) talked about ''How enactive is the sensorimotor approach to phenomenal consciousness''.
This great talk started with a nice question:
''Why are loud sounds loud?''
Obviously, it has nothing to do with the neural activation code. After all, the neural codes are arbitrary.
Whatever explanation you come up with, there seems to be an explanatory gap...
Qualia seem impossible to explain from just one point of view?
On the Laboratoire Psychologie de la Perception homepage one reads:
Qualia is the hard problem of consciousness.
Other questions like the question of why we have selves or why we can become aware of things and use them in our rational actions and thought, are considered not so hard.
Many people think there is a fundamental obstacle in dealing with the ''hard'' problem of consciousness.
This has led me to a new way of thinking about the "hard" kind of consciousness. In this, I consider that the feel of a sensory experience is not something which is somehow generated by the brain, but is rather a quality of how we interact with our environment.
In How to build a robot that feels O'Regan continues:
I remember as a child asking my mother what a headache was like and never getting a satisfactory answer until the next day I actually got one. Many people have asked themselves whether their spouse or companion sees colors the same way as they do!

Instead the sensorimotor view suggests that we should think of a feel in a new way, namely as a way of interacting with the world.
This may not make very much sense at first, so let's take a concrete example, namely the example of softness.
Rather, the softness of the sponge is a quality of the way we interact with sponges. When you press on the sponge, it cedes under our pressure. What we mean by softness is that fact...
Color is the philosopher's prototype of a sensory quality.
At first it seems counterintuitive to imagine that color sensation has something to do with sensorimotor dependencies: After all, the redness of red is apparent even when one stares at a red surface without moving at all.
(but) In fact what determines whether a surface appears red is the fact that it absorbs a lot of short wavelength light and reflects a lot of long wavelength light. But the actual amount of short and long wavelength light coming into the eye at any moment will be mainly determined by how much there is in the incoming light, coming from illumination sources.
Thus what really determines perceived color of a surface is the law that links incoming light to outgoing light. Seeing color then, involves the brain figuring out what that law is.
The obvious way to do this would be by sampling the actual illumination, sampling the light coming into the eye, and then, based on a comparison of the two, deducing what the law linking the two is.
One way they could do it is to experiment around a little bit, moving the surface around under different lights, and ascertaining what the law is by comparing inputs to outputs.
So in that respect the law can be seen as being a sensorimotor law.
Munsell chips are often used in color experiments. Their reflectance spectra are available for download off the web ...Some of the matrices had a special property: they were what is called singular...
In other words these matrices represent input - output laws that are in some sense simpler than the average run of the mill matrices.
As though those colors which tend to be given names, are precisely those simple colors that project incoming light into smaller dimensional subspace of the three dimensional space of possible lights.
On the other hand it does seem reasonable that names should most frequently be given to colors that are simple in the sense that when you move them around under different illuminations, their reflections remain particularly stable compared to other colors.
So in my opinion the finding that we are able to so accurately predict color naming from first principles, using only the idea of the sensorimotor approach, is a great victory for this approach.
A robot (nowadays) needs to have its sensors calibrated, but a human doesn't.
Somehow we humans can agree what (e.g.) the color red is...
Which is understandable within this ''sensorimotor theory''.
Just move around - do a little enaction - and bingo:
''Colors that are simple in the sense that when you move them around under different illuminations, their reflections remain particularly stable compared to other colors'' are given names we can agree on.
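The idea of ''singular'' reflectance matrices can be made concrete with a toy numerical sketch. This is my own construction, not O'Regan's actual Munsell-chip computation: a surface is modelled as a 3x3 matrix mapping incoming light (three cone-like channels) to reflected light, and a singular (here rank-1) matrix squeezes every illuminant into a lower-dimensional subspace, so the direction of the reflected light stays stable however the illumination changes.

```python
import numpy as np

# Toy sketch: compare how much the *direction* of reflected light varies
# across illuminants for a singular (rank-1) reflectance matrix versus a
# generic full-rank one. The matrices and lights below are made-up numbers,
# chosen only to illustrate the stability property.

def direction_spread(R, illuminants):
    """Max angle (radians) between normalized reflected lights across illuminants."""
    dirs = []
    for light in illuminants:
        out = R @ light
        dirs.append(out / np.linalg.norm(out))
    return max(np.arccos(np.clip(d1 @ d2, -1.0, 1.0))
               for d1 in dirs for d2 in dirs)

illuminants = [np.array([1.0, 0.8, 0.3]),   # warm light
               np.array([0.4, 0.7, 1.0]),   # cool light
               np.array([0.9, 0.9, 0.9])]   # neutral light

rank1 = np.outer([0.9, 0.3, 0.1], [1.0, 1.0, 1.0])   # singular surface
generic = np.array([[0.6, 0.2, 0.1],
                    [0.1, 0.5, 0.3],
                    [0.3, 0.1, 0.7]])                 # full-rank surface

print(direction_spread(rank1, illuminants))    # ~0: reflected direction is stable
print(direction_spread(generic, illuminants))  # noticeably larger
```

The rank-1 surface reflects every illuminant along one fixed direction, which is (loosely) the sense in which such surfaces remain ''particularly stable compared to other colors'' when moved around under different lights.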

I don't think O'Regan came back to the loud sound problem from the beginning of the talk. But again, there might be an enaction part to this: ''If you are listening to a sound, any small movement of your head immediately changes the sensory input to your ears in a systematic and lawful way''.

All in all, rather mesmerizing and persuasive. Indeed, as Horace would have said: ''Sapere aude'' (Dare to know. ''Dimidium facti qui coepit habet: sapere aude'' - He who has begun is half done: dare to know!).
What a great talk, and what a great conference!

Nota Bene: Back in January (less mesmerized) I had jotted down some comments about O'Regan's book ''Why red doesn't sound like a bell''. See below.

3. The meaning of embodiment.

Found a good overview article in Topics in Cognitive Science 2012, p. 740 - 758, about The Meaning of Embodiment,
by Julian Kiverstein (Institute of Logic, Language and Computation, University of Amsterdam).
Reflections on how embodiment helps create human-like cognition:

Kiverstein presents two views:
- ''In one view, the body plays the role of supporting the computational unit that realizes cognition''.
- ''Body enactivism argues by contrast that no computational account of cognition can account for the role of commonsense knowledge in our everyday practical engagement with the world''.
Ultimately, what separates these views of the body is a disagreement about the status of the founding idea in cognitive science that cognitive processes are computational processes.
I.e. can the contribution of the body to cognition be understood along computational lines?
Arguably, the 4EA (Embodied, Embedded, Extended, Enacted, Affective) movement within philosophy of cognitive science was launched 20 years ago with the publication of the book ''The Embodied Mind'' (Varela, Thompson and Rosch). Here cognition was understood in terms of the sense making activities of living organisms.
Later came ''Being There'' (Clark, 1997), which argued that a human brain does not carry out its function in isolation from the body and world, but should instead be understood as a controller of our embodied activity in the world.

a) But, according to Kiverstein: Yes, minds are embodied and embedded, but they still depend crucially on brains to compute and represent.
Sometimes computation can be done through the recruitment and bodily manipulation of external artifacts, other times it can be done entirely within the head.
Reformists, in common with conservatives, retain the idea of cognition as computation, but they depart from the conservatives in seeking to enrich the traditional idea of computation, so as to open up space for body and environment to play a role in implementing information processing.
The body can co-opt external tools and technologies into its problem solving routines, so that the tools and technologies combine with the body to extend and augment cognition.
b) But for others, there is more, the body has another role entirely. They see the body as the source of meaning.
The problem has a superficial resemblance to what is sometimes called the symbol-grounding problem, which takes off from Searle's well-known argument that syntax is not sufficient for semantics.
Harnad took up Searle's worry in arguing that the representations appealed to in computational explanations are meaningless because they are not ''grounded'' in the right way in perception and action...
Larry Barsalou agreed and put forward his account of concepts as perceptual symbols in response.
Barsalou proposed that we understand concepts as ''perceptual symbols'' or re-enactments of neural representations that originated in perception and action...
Thereby providing the missing link between the symbol and what it stands for in the external world.
(See the section about Situated Conceptualization for more about Barsalou's ideas).

Kiverstein seems to be saying that the body might help human cognition by taking a computational load or by creating sense-making. Nevertheless, for Kiverstein, the brain is still a computational unit:
In predictive coding models of brain function, the central problem facing our perceptual system is to learn about the world on the basis of states and changes to these states.
The way our brains solve this problem is to learn statistical regularities relating to the ways in which sensory input tend to vary over space and time.
It is a near consensus that perceptual processing in the brain is organized hierarchically. Predictive coding models hypothesize that each layer in the hierarchy employs statistical knowledge to predict the current inputs it will receive from the layer below.
When the predictions are correct, there is match between expected and actual input, and the result is perception.
When there is a mismatch, an error signal is generated that can be used to update neural representations at higher levels, until the right hypothesis is found. ''The task for the brain is to make life less surprising''.
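The prediction-error loop described above can be sketched in a few lines. This is a minimal toy model of my own, not Kiverstein's formulation or any real predictive coding implementation: a single ''layer'' holds a hypothesis about the cause of its sensory input, predicts the input, and uses the mismatch to update the hypothesis until prediction and input agree.

```python
# Toy sketch of the predictive coding idea: refine a hypothesis about the
# hidden cause of a sensory input until the prediction error vanishes,
# i.e. until life is "less surprising".

def perceive(sensory_input, generative_model, hypothesis=0.0,
             learning_rate=0.1, steps=200):
    """Iteratively refine `hypothesis` so generative_model(hypothesis) ~ input."""
    for _ in range(steps):
        prediction = generative_model(hypothesis)
        error = sensory_input - prediction          # prediction-error signal
        hypothesis += learning_rate * error         # update toward less surprise
    return hypothesis

# Assumed toy world: the sensory input is twice the hidden cause, and the
# brain's generative model knows this relationship.
model = lambda h: 2.0 * h
cause = perceive(sensory_input=6.0, generative_model=model)
print(round(cause, 3))   # converges to 3.0, the hypothesis that explains the input
```

Real predictive coding models are hierarchical and probabilistic (each layer predicting the one below, with precision-weighted errors), but the basic match/mismatch logic is the same as in this one-layer caricature.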

Kiverstein concludes:
The predictive brain is, however, also a computational brain.
If we can give a predictive coding explanation of how the body gears into the world along the lines of the above, this would be to provide a computational explanation of how the body makes meaning...
Would it follow that victory goes to the body-functionalist view or to the body-conservative view?
Not at all; the body enactivist understands embodiment in terms of bodily skills we draw on all the time, when we act unreflectively.
The brain might be coordinating a cascade of increasingly fine grained and detailed predictions. But all of this is taking place in the service of a sense making activity of an embodied agent.
I.e. Kiverstein ends the article with the conclusion that: Perception, affect and action cannot be adequately accounted for by a linear model of causality in which perception provides input to a brain that then issues instructions to the body.
In real life, the body might both give the brain added computational powers, as well as helping human cognition to stay meaningful and grounded.

4. Embodied Cognition and Bodily Perceptions.

David DiSalvo also deals with embodiment in his book ''What makes your brain happy, and why you should do the opposite''...
According to DiSalvo, bodily perceptions strongly influence how we think. I.e. physical sensations influence our perception and affect us without our notice.

Weight as an embodiment of importance: DiSalvo presents a study where researchers investigated whether judgements of importance are tied to an experience of weight.
Indeed, the more someone can lift - or looks as if she or he can lift - the more impressive.
Weight is even a socioeconomic force, as is the size of someone's car...
In a study, participants were asked to agree or disagree with arguments of varying strengths, while holding a light or a heavy clipboard. Here, people holding heavy clipboards assumed stronger, more polarized positions than those holding light clipboards. They also made significantly stronger arguments in defense of the positions.
Warm and cold feelings and embodiment: If you feel emotionally close to someone, you have ''warm feelings'' towards that person. If you have a falling out with someone, they might give you a ''cold shoulder''.
In the book DiSalvo presents a study where people were asked to fill out a questionnaire while holding either a warm or a cold beverage. Afterwards they were asked to rate the closeness between themselves and another person.
The result: Subjects holding the warm beverage had a significantly higher level of perceived closeness to the person they selected than people holding a cold beverage.

And, on and on it goes.
An MIT/Harvard study from 2010 showed that when you go for a job interview, you should carry your resume in a weighty, well-constructed padfolio. According to the study, job candidates appear more important when they are associated with heavy objects.

According to DiSalvo: Embodiment and bodily perceptions are certainly real things in cognition...

5. Thinking with your body - The Enteric Nervous System.

Interesting article in the December 15th 2012 issue of NewScientist:

According to NewScientist reporter Emma Young: ''Your brain isn't the only organ that influences your mood and behaviour''.
Embedded in the wall of the gut, the enteric nervous system has long been known to control digestion. Now it seems it also plays an important role in our physical and mental well-being!
Digestion is a complicated business, so it makes sense to have a dedicated network of nerves to oversee it. The ENS maintains the biochemical environment within different sections of the gut, keeping them at correct pH and chemical composition needed for digestive enzymes to do their work.
And, remember, eating is fraught with danger. The gut must stop potentially dangerous invaders, such as bacteria and viruses, from getting inside the body. In case any are detected, the gut brain must then trigger diarrhoea or alert the brain in the head, which might then decide to initiate vomiting.
And there is more!
According to Emma Young: ''Nerve signals sent from the gut to the brain do appear to affect mood!'' Indeed, research published in 2006 indicates that stimulation of the vagus nerve (which conveys sensory information about the state of the body's organs to the central nervous system) can be an effective treatment for chronic depression that has failed to respond to other treatments (British Journal of Psychiatry, vol. 189, p. 282).
Which might explain why fatty foods make us feel good: When ingested, receptors in the gut send nerve signals to the brain (that we should be happy ...).
And there are other connections between these two brains.
When we are stressed, blood is diverted away from the stomach to the muscles, as part of a fight-or-flight response started by the brain (in your head). Which might lead to the feeling of ''butterflies'' in the stomach (which the brain in the head detects...).

Indeed, a lot of the information we get about the environment comes from our gut (which is really on the outside of the body, like the eyes and the ears...).

And therefore, it makes perfectly good sense that what the gut ''sees'' should influence our mood and behaviour.

6. Embedded in a Social World. Social Intelligence.

6.1. Our social world.

In the December 1st 2012 issue of NewScientist, Lisa Raffensperger reports about our social life and how it is connected to our health. I.e. ''the sting of social rejection fires up the same neural pathways as pain from a burn, revealing that social life and health are linked.''
Although they started out friendly, the computerized players soon stopped throwing the ball to the volunteer. It might seem like a trifling insult, but some subjects reacted strongly to the slight - slumping in their seats or making a rude hand gesture at the screen.
All the while, a functional MRI scanner recorded the volunteer's brain activity, revealing a surge in the dorsal anterior cingulate cortex (dACC) when they began to feel isolated (Science, vol 302, p. 290).
(Nb. the dACC is known to be an important part of the brain's pain network, determining how upsetting we find an injury. The response can vary depending on circumstances. Bumping your head might seem like a big deal in the office, but during a football game you might barely notice the blow).
The social world can hurt us more deeply than virtual ball players though:
Ethan Kross at the University of Michigan recruited 40 people, who had been through a break-up within the past 6 months and asked them to view a photo of their ex, while reclining in an fMRI scanner.
After a brief intermission, the volunteers were also given a painful jolt of heat to their forearms.
As expected, the dACC and the anterior insula lit up in both cases. But surprisingly, the brain's sensory centers, which reflect the physical discomfort that accompanies a wound, also showed pronounced activity - the first evidence that the feeling of heartbreak can literally hurt.
Which might explain the difference between introverts and extroverts: Extroverts have been shown to have a higher pain tolerance than introverts, and this is mirrored by their greater tolerance for social rejection!

Indeed, our social world matters a great deal, and surely affects our feelings, and (therefore) us.

6.2. Social Intelligence.

Social intelligence is obviously also very important. In AISB No. 135 Wurzinger notes (Quo Vadis, AI):
So much so that in the absence of social influences per se there is hardly any process which allows us to identify the owner as typically ''human'', as research into abandoned children has so abundantly demonstrated.
We become humans through transactions with other humans. Our brains respond to cues we get from other people. We form each other in transactions.
And we don't know how a transaction ends until we know how the other responds.
See the YouTube video (interview with Charles Tilly by Daniel Little, December 15th 2007, at the University of Michigan).

7. Artificial Intelligence and Cognitive Systems.

Many within the AI community have been displeased with the trend in AI towards specialized subfields that have little to tell us about (human-like) intelligence.
And surely, many want to move the community back towards working with theories of the mind.
But does the trend encompass Enaction?

7.1. Computational Models for Human Behaviour.

Super article by Pat Langley in AISB No. 133 about moving AI away from a set of narrow, specialized subfields that have little to tell us about (human-like) intelligence - and back to an assumption prevalent in early AI research: ''that the design and construction of intelligent systems has much to learn from human cognition.''

Actually, ''many central (AI) ideas in knowledge representation, planning, natural language, and learning were originally motivated by insights from cognitive psychology and linguistics, and many influential AI systems doubled as computational models of human behaviour.''
And (obviously) as humans, we are really interested in human intelligence!
''When we say that humans exhibit intelligence, we mean they have the capacity to engage in multi-step reasoning, to understand the meaning of natural language, to design innovative artifacts, to generate novel plans that achieve goals, and even to reason about their own reasoning.''
That's not the same as what much of the field now pursues:
Machine learning now focuses almost exclusively on classification and reactive control.
Natural language processing has scaled down from attempts at understanding to text classification and information retrieval. But do advances here really give us insights into the nature of (human-like) intelligence?
The original character of machine learning was to acquire structured knowledge from limited experience - not to emphasize induction of statistical predictors from large datasets.

Indeed, Pat Langley is right to note that we ''should use the earlier assumptions of the cognitive systems approach as heuristics to direct our search toward true theories of the mind.''

So much more interesting than yet another statistical machine learning text algorithm that doesn't tell us all that much about real (human) intelligence.
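To make the contrast concrete, here is a minimal sketch of the kind of statistical text algorithm in question - my own toy illustration, not anything from Langley's article. It induces word-count statistics from labelled examples and nothing more: no structured knowledge, no multi-step reasoning, no understanding.

```python
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs -> per-label word counts."""
    counts = {}
    for text, label in docs:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose word statistics best overlap the text."""
    def score(label):
        c = counts[label]
        total = sum(c.values())
        # Add-one smoothing so unseen words don't zero out a label.
        return sum((c[w] + 1) / (total + 1000) for w in text.lower().split())
    return max(counts, key=score)

model = train([("the team won the match", "sports"),
               ("shares fell on the market", "finance"),
               ("the striker scored a goal", "sports"),
               ("the bank raised interest rates", "finance")])
# classify(model, "the goal won the match") picks "sports" by word overlap alone.
```

Useful, certainly - but everything it "knows" is a bag of word frequencies, which is precisely Langley's point.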

7.2. Narrow, specialized AI subfields vs. theories of (human like) intelligence.

In my mind, Pat Langley's article, Artificial Intelligence and Cognitive Systems, in AISB No. 133 was absolutely right on the money: AI research should move away from a set of narrow, specialized subfields that have little to tell us about (human-like) intelligence, and back to something that also tells us about real (human-like) intelligence.
AI should indeed not just be about applying computational techniques to ''big data'' for real problem solving, at the expense of providing insights on big questions about the mind.

But then what about the ''four E's'' (defining characteristics for a new era of cognitive science?): Ecological, accounting for the environment; Embodied, concerning the physical presence of a system; Embedded, concerning the system's relation to the environment; Enactive, concerning the role of action?

In AISB No. 134 Tom Froese puts it like this:
I believe that cognitive science should replace the computationalist metaphor with an existential stance that centers on the living (biological) and lived (experiential) body.

I suggest that a more encompassing science of human nature must be able to intertwine an understanding of how the fundamental values that are governed by our metabolic existence are shaped by the enabling and constraining concerns of our socio-cultural existence.

To do so we need to be able to integrate dynamic processes at a range of time scales, and at a range of levels (individual, dyadic, social, cultural etc.), and we must be able to connect those dynamics with changes in the first-person experience of the people who are embedded in them.
All of these areas of research together form what has been called the paradigm of ''enaction''.

Well, let the best theory of mind win!

7.3. Robotics, Smart Life Systems & Embodied Cognition.

Robotics is obviously (going to be) a great platform for studying embodied cognition. E.g. see Rolf Pfeifer's view of cognition as an emergent phenomenon arising from sensory, motor and interaction processes.

Future systems that monitor the physical and physiological status of individuals, and systems that try to enhance the performance of healthy people, will surely also be influenced by ideas coming from embodied cognition and enaction.

E.g. see the Guardian Angels for a Smarter Life project (an EU Flagship project that will develop technologies for extremely energy-efficient, smart, electronic personal companions that will assist humans from infancy to old age).

8. Limits of computation.

8.1. Cognitive Models. The Spaun System.

Cognitive models can be quite impressive indeed.

E.g. in AISB No. 135, Terry Stewart wrote about the Spaun system:

''Our largest model to date is Spaun, a 2.5 million spiking neuron model with a vision system, a single 6 muscle 3 joint arm for output, and a selective routing system (analogous to the production system) implemented in spiking neurons comprising the cortex (for working memory storage), the basal ganglia (for action selection) and the thalamus (for selectively routing information between cortical areas)''...

...''Various other cortical areas are also modelled, allowing for transformations between visual, conceptual, and motor space, inductive pattern finding and list memory.
The model is capable of performing eight different psychological tasks, including recognizing handwritten digits, memorizing digit lists, and pattern completion.
No changes are made to the model between tasks. We are aware of no other realistic neural model with this combination of flexibility and biological realism.''

The model is built in Nengo, an open-source, cross-platform Java application, which can be used both as a teaching tool and as a research tool (i.e. for building something like Spaun). For more about Nengo, see here.
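For readers wondering what a ''spiking neuron'' is at the smallest scale, here is a toy leaky integrate-and-fire (LIF) neuron - the general model class used in spiking systems like Spaun. This is my own illustrative sketch, not Nengo or Spaun code, and the parameter values are arbitrary assumptions.

```python
def simulate_lif(current, dt=0.001, tau=0.02, v_threshold=1.0, steps=1000):
    """Drive the neuron with a constant input; return the spike times (seconds)."""
    v, spikes = 0.0, []
    for i in range(steps):
        v += dt * (current - v) / tau   # membrane potential leaks toward the input
        if v >= v_threshold:            # threshold crossed: emit a spike
            spikes.append(i * dt)
            v = 0.0                     # reset after spiking
    return spikes

weak, strong = simulate_lif(1.2), simulate_lif(3.0)
# A stronger input current drives the neuron to spike more often;
# an input below the threshold never produces a spike at all.
```

Spaun wires 2.5 million neurons of roughly this kind into circuits for vision, memory, action selection and motor control - which gives a sense of the scale involved.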

Impressive indeed. Still, one has a sneaking suspicion that something might be missing.
Sometimes, generally speaking, a model just doesn't really work, due to systemic problems that never go away.
Perhaps it is that thing with enaction again?

8.2. General Problems. Limits of computation.

Actually, there always seems to be something missing, even with the best of models.
Full understanding is indeed a tricky thing.

In NewScientist's May 7th 2011 issue, astronomer Martin Rees asks us to think about chimpanzees:
A chimpanzee can't understand quantum mechanics...
Actually, it is not even aware that it can't.
The question that intrigues Rees is whether there are facets of the universe to which we humans are similarly oblivious. According to Rees:
There is no reason to believe that our brains are matched to understanding every level of reality.
And, sure, there are lots of problems for us out there.
E.g. - take the cosmic horizon. Beyond it there are things we will never see, and never know about ...
(Any object that is more than 46 billion light years away is receding at more than the speed of light. So, no information will ever reach us from these areas. Note: Nothing can travel faster than the speed of light, but the fabric of space-time itself can).

But there are also problems closer to home:
In mathematics (formal systems of knowledge) there are fundamental limits to what we can know. In 1931, Kurt Goedel formulated his incompleteness theorems, which showed that any consistent formal system rich enough to contain arithmetic includes true statements that cannot be proved within the system.
Alan Turing used Goedel's work to uncover fundamental characteristics of computers.
Which gave us the Halting Problem:
It is impossible to devise a general method that, applied to any program, predicts whether or not it will finish its task and halt.
Sometimes, we will just have to try it out and wait ...
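Turing's diagonal argument can even be sketched in a few lines of Python (the function names are my own). Suppose someone hands us a function halts(f) that claims to decide whether calling f() ever halts; the construction below defeats any such candidate.

```python
def make_paradox(halts):
    """Build the program that does the opposite of whatever halts() predicts."""
    def paradox():
        if halts(paradox):   # "If you predict I halt..."
            while True:      # "...then I loop forever."
                pass
        # ...otherwise I halt immediately.
    return paradox

# Any concrete candidate decider fails on its own paradox program:
def says_everything_halts(f):
    return True    # wrong on make_paradox(says_everything_halts): that one loops

def says_nothing_halts(f):
    return False   # wrong too, as the call below demonstrates

make_paradox(says_nothing_halts)()   # halts immediately, refuting the decider
```

Since every candidate decider is wrong on at least one program, no general halting decider can exist - sometimes we really do just have to run the program and wait.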

So, with problems at the core of our mental constructs, what can we really say about the world we live in?

Marcelo Gleiser, a philosopher and physicist at Dartmouth College in New Hampshire, has argued:
''The notion of a theory of everything rests on an unproven assumption that the universe is inherently neat and symmetrical.
But the very fact that the universe contains energy and matter is evidence against such symmetry.
Nothingness is neater than something, so the fact that the universe is full of stuff could mean that it is surprisingly messy at heart.''
NewScientist, May 7th 2011.

Certainly, everyone knows that prediction is hard... (even on ''small'' scales, like the Earth's weather and our financial systems).
Weather predictions are not precise beyond three days. Predictions in economics mostly turn out to be false (inflation rates are predictable only one month in advance; look ahead two months and the mathematics shows no predictability at all).
NewScientist, April 10th 2010.

And, following Marcelo Gleiser, it is probably impossible to predict any sufficiently complex system that looks anything like our universe...

So much for the hopes of generations...

(Sure, there are many interesting theories -
E.g. Riemann's hypothesis about the zeroes of the zeta function, which would give us a much better handle on the distribution of the primes - zeroes which, according to physicist Freeman Dyson, are strangely linked to nuclear energy levels...
But all of these theories are only close to a theory about everything. For the Riemann hypothesis no proof exists - and certainly, no one knows why prime numbers should have anything to do with atomic energy levels...)
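The basic link between the zeta function and the primes is, at least, well understood: Euler's product formula, zeta(s) = product over all primes p of 1/(1 - p^-s). A small numerical sketch (my own illustration) shows the sum over the integers and the product over the primes converging on the same value:

```python
def zeta_sum(s, terms=100000):
    """The direct definition: sum of n**-s over the natural numbers."""
    return sum(n ** -s for n in range(1, terms + 1))

def zeta_euler(s, limit=1000):
    """The same value via Euler's product over the primes below the limit."""
    primes = [p for p in range(2, limit)
              if all(p % d for d in range(2, int(p ** 0.5) + 1))]
    result = 1.0
    for p in primes:
        result *= 1.0 / (1.0 - p ** -s)
    return result

# Both approximate zeta(2) = pi**2 / 6 = 1.6449...
```

That identity is why the zeta function ''knows'' about the primes; why its zeroes should echo nuclear energy levels remains the mystery.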
Time flows uphill for the Yupno: Western languages are full of spatial metaphors for time - the past is behind us, and the future stretches out ahead.
But what was once thought to be a universal embodied cognition of time is in fact a cultural phenomenon...
For the Yupno people of Papua New Guinea, time flows uphill - and is not even linear.
In their time study with the Yupno, Nunez and colleagues found that the Yupno don't use their bodies as reference points for time, but rather their valley's slope and terrain. Analysis of their gestures suggests they co-locate the present with themselves, as do all previously studied groups (picture for a moment how you probably point down at the ground when you talk about ''now''). But, regardless of which way they are facing at the moment, the Yupno point uphill when talking about the future and downhill when talking about the past.
So, yes, we might think it is natural to think of time as a straight line. But that is an illusion.
It doesn't have to be that way...

And it could be even worse.
Maybe a lot of the things we take for granted are not so self-evident after all. Core concepts of our thinking might indeed be hopelessly flawed.
Take the way we think about time: judging from the Yupno, there is certainly nothing self-evident about our understanding of time. And other key concepts in our thinking could be just as culturally biased...

9. Why red doesn't sound like a bell.

Kevin O'Regan's book, ''Why Red Doesn't Sound Like a Bell'', attempts to provide an account of phenomenal consciousness through a ''sensorimotor account of consciousness''.
In my understanding, sensorimotor theory stands in contrast with the standard view of e.g. vision
(characterizing the latter: you can see because your brain makes a model or representation of the outside world, encoded by neural impulses).
The sensorimotor account of consciousness, in contrast, states that there is no model or representation of the outside world in your brain.
Rather, seeing is a process of actively exploring and engaging the outside world.

People tend to be critical of this account, because:
- What about hallucinations?
- What about perception in paralyzed people?
Both seem like obvious counterexamples to the ''sensorimotor account of consciousness''.

Still, O'Regan presents many interesting insights about ''raw feel'' (i.e. what is left of an experience after all measurable effects have been accounted for - phenomenal experience), and how it might be generated:
On the question of how imperfect, inconsistently sensitive sense organs could produce a raw feel that presents itself as continuous and detailed, O'Regan declares this effect to be an illusion: features of raw feel can only be observed through active interrogation, which necessarily presents them in detail.
And his theory also makes some interesting predictions (According to Andrew Martin, in AISB No. 135):
For the quality of an interaction to be consciously experienced, the being must also be consciously attending to that quality.
This stance has two important implications for what is not present in raw feel: without conscious attendance, raw feels are not felt, and without a notional sense of self, experience cannot arise at all.
Babies and animals, therefore, only experience pain if they are considered to have developed a specific cognitive capability.
Concerning the experienced ''what it is like'' of the experience, O'Regan thinks (in my understanding) that this derives from objective aspects of the engagement (among them bodiliness, grabbiness, insubordinateness and richness), which indicate that the engagement is one with the real world and involves the real senses.
Experiences grab our attention, and are subject to change as the body moves. Stimuli might change without the subject's acting. Interrogating the experience presents detailed information.

Which sounds about right. Still, you wonder whether the explanatory gap (how can any neural structure generate ''raw feel''? Why one type of feel rather than another - ''why red doesn't sound like a bell''?) has really been bridged.

So, sure, the sensorimotor approach gives us some insights, but it would have been nice to know a lot more about what is going on with this qualia thing....
Somehow, it seems such a nice thing to have. I.e. (surely):
- Only robots without qualia may become psychopathic, homicidal, unconscious automata?
- Only machine-like corporations that don't experience qualia will pollute and destroy people and the environment?


Simon Laub

Nasslli 2012 | Cogsci 2012
About | Site Index | Post Index | Connections | The Ego Trick | Future Minds | Mind Design | NeuroSky | Contact Info
© March 2012 Simon Laub
Original page design - March 10th 2012. Simon Laub - Aarhus, Denmark, Europe.