Impressions and Links from
CogSci 2014

The 36th annual meeting of the Cognitive Science Society.
Quebec City, Canada. July 23 - 26, 2014.


Quebec City Convention Centre, Quebec, Canada




I had the great pleasure of taking part in CogSci 2014: ''Cognitive Science Meets Artificial Intelligence: Human and Artificial Agents in Interactive Contexts''.

Below you will find impressions from the conference, and links for further reading.







The Cogsci 2014 conference was held at the Centre des Congres in Quebec City, Canada.



I tried to follow as many talks as possible. But these notes are, of course, in no way, shape or form complete...
Rather, they were written on conference nights, as my way of keeping track of the events I attended, and as a way of storing links and references for future use.

But enough disclaimers. Below you'll find impressions and links from some of the conference talks and seminars, including links for further reading.

Great stuff indeed. And much (CogSci stuff) to look forward to in the coming years!




1. Introduction.









1.1. Page Overview.
- Workshops, Sessions and Keynotes.

Below, in section (2), you will find impressions and links from one of Wednesday's workshops.
This is followed, in sections (3 - 5), by impressions and links from the sessions, demos and keynotes that I followed on Thursday, Friday & Saturday.
I prepared for the conference by reading up on Ray Jackendoff's book ''Thought and Meaning''. And I bought Jesse J. Prinz's book ''Beyond Human Nature'' at a book stand at the CogSci conference. See Book Thoughts in section (6).
In section (7) you will find misc. conference folders and links. Section (8) wraps up the week.

Please notice: These notes don't do justice to the often brilliant presentations that inspired them!
So, please read the original presentations to avoid any distortions ...


2. Impressions from Wednesday, July 23rd.

2.1. Workshop: Deep Learning & the Brain.

Workshop organizer Andrew Saxe introduced the workshop:
...The invited talks are meant to span a broad range of perspectives on deep learning and the brain, and concentrated mostly on visual processing. Visual object recognition is the area most studied in prior deep learning work both in machine learning and cognitive science, and hence makes a natural first focus for a workshop.
First up was Tom Griffiths, Berkeley, who talked about ''Combining deep networks and Bayesian inference''.
Followed by Chris Eliasmith with more on ''How to Build a Brain''. As always, an extremely interesting talk. For more, see my notes on his CogSci 2013 workshop:

See section 2 (How to Build a Brain) on my CogSci 2013 page

Next up was Yoshua Bengio, with ''Deep Learning, Brains and the Evolution of Culture''.

So, how do humans learn then? In Yoshua Bengio's words:
A human brain can learn high-level abstractions if guided by the signals produced by other humans, which act as hints or indirect supervision for these high-level abstractions.
Obviously this has implications for AI research. In Yoshua Bengio's words:
AI learning:
  • Collections of learning agents building on each other's discoveries to build up towards higher-level abstractions.
  • Guiding computers just like we guide children.
Next up was Tomaso Poggio, McGovern Institute for Brain Research at MIT, who had some pretty clear ideas about the future of Machine Learning. According to Poggio, ''Machine Learning has for the last 20 years been about supervised learning. The next phase of ML is likely to be much more about unsupervised learning.''...

Well, the future will teach us...

The end of a great first day at CogSci 2014.

Deep Learning

3. Impressions from Thursday, July 24th.

3.1. Plenary - Dedre Gentner.

Dedre Gentner, Northwestern University, talked about analogy and similarity - and human thinking.

In Gentner's words:
Much of humankind's remarkable mental aptitude can be attributed to analogical ability - the ability to perceive and use relational similarity...
...
Analogy is the perception of like relational patterns across different contexts. The ability to perceive and use purely relational similarity is a major contributor - arguably the major contributor - to our species' remarkable mental agility.
Apparently, it is possible to set up 3 stages in the process of making an analogy.
At the end of the talk, Gentner speculated that ''We get better at analogies, as we obtain more relational structure, as we grow up''.

CogSci 2014. Dedre Gentner talk

3.2. Symposium: Triangulating surprise. Expectations, Uncertainty and Making-sense.

A surprise is not all bad. And a big surprise might be even better ...

According to the session's first speaker, Edward L. Munnich, University of San Francisco:
With modest surprise, there may be little or no belief change, but conditions that heighten surprise - engaging foresight, or providing striking facts, episodes, or explanations yield dramatic belief revisions as people seek coherence...
Jeffrey Loewenstein, University of Illinois, continued with a talk about ''Surprise and Social Influence''.
Clearly:
Surprise often follows a ''Repetition - Break'' structure:

In advertising, advertisers often set up a plot, that they can then break.
- Which gives a surprise, which people will share on YouTube etc.
The more surprising, the more shares ...

Humour follows the same pattern (repetition - break).
Where comedians play with a given setup (repetition), and tell us a new ''follow-up'' (break, punchline).
This plot structure teaches an expectation with initial, repeated events. Then it applies a contrasting event to generate surprise...
Phil and Rebecca Maguire, Kildare, Ireland, talked about surprise vs. probability.
Back in 2009, the Bulgarian lottery drew the same set of numbers in consecutive draws:
The sports minister Svilen Neykov ordered a special review after 4, 15, 23, 24, 35, and 42 were drawn on Sept 6 and again on Sept 10 in consecutive lottery rounds.
The probability of this happening is 4.2 million to one, according to the Bulgarian mathematician Mihail Konstantinov, although he added that such coincidences can happen.
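Just to get a feel for the combinatorics, the odds of one specific 6-number combination can be checked in a few lines of Python (an illustrative sketch, assuming a 6-of-42 lottery format; the ''4.2 million to one'' figure above is Konstantinov's, for the full repeat event):

```python
import math

# Number of possible 6-number combinations in a 6-of-42 lottery
combos = math.comb(42, 6)
print(combos)        # 5245786 possible draws

# Chance that a given draw matches one specific earlier draw
print(1 / combos)    # roughly 1 in 5.2 million
```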
Well, but what if you get 1000 heads in a row?
Most people would be sceptical about such a result.

Clearly:
According to Occam's Razor, the most likely hypothesis is the one which describes the data most succinctly.
In Maguire's words:
The identification of a pattern in supposedly random data suggests the existence of an underlying structure where none was anticipated, a discrepancy that results in an urgent representational updating process.
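The description-length intuition behind this can be sketched numerically (a toy illustration of my own, not from the talk): under a fair-coin model, 1000 heads cost 1000 bits to encode, while a near-double-headed model describes the same data almost for free, which is why the ''rigged coin'' hypothesis starts to look more plausible.

```python
import math

def code_length_bits(n_heads, n_flips, p_heads):
    """Bits needed to encode the observed flips under a Bernoulli model."""
    n_tails = n_flips - n_heads
    bits = 0.0
    if n_heads:
        bits -= n_heads * math.log2(p_heads)      # cost of encoding the heads
    if n_tails:
        bits -= n_tails * math.log2(1 - p_heads)  # cost of encoding the tails
    return bits

# 1000 heads in a row:
print(code_length_bits(1000, 1000, 0.5))    # 1000.0 bits under a fair-coin model
print(code_length_bits(1000, 1000, 0.999))  # ~1.4 bits under a near-double-headed model
```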
Still, the Bulgarian lottery was apparently not rigged...

Foster and Keane, University College Dublin, presented the ''metacognitive explanation based (MEB) theory of surprise'':
that experienced surprise reflects the level of difficulty of constructing or retrieving an explanation for why a surprising outcome may have occurred.
I.e. here ''explanatory difficulties'' are taken as a measure of how big the surprise is.

3.3. Attention I.

3.3.1. Irene T. Skuballa, Applied Cognitive Psychology and Media Psychology, Tübingen, Germany.
Talked about ''Non-Verbal Pre-Training Based on Eye Movements to Foster Comprehension of Static and Dynamic Learning Environments''.

I.e. stimulating eye movements can foster learning and problem solving. Studies have revealed a positive effect of a non-verbal eye-movement pre-training on learning outcomes.

But things quickly get pretty complicated:
Current theories on eye movements heavily refer to research on reading comprehension, which do not necessarily apply to comprehension of complex graphics and (dynamic animated) pictures.
So, who knows, we might have to develop new theoretical approaches for eye movements in learning from graphics to understand the underlying processes.

For more, see their paper.

3.3.2. Michele Burigo, Cognitive Interaction Technology Excellence Center, University of Bielefeld, Germany.
Talked about ''Keeping the eyes on a fixation point modulates how a symbolic cue orients covert attention''.

The authors write in their article:
keeping the gaze on a fixation point may be effortful and require attentional resources.
... fixation control requires attention. If so, then the limited capacity hypothesis predicts that fewer attentional resources should be available for covert attention shifts to a non-central target when people are instructed to maintain central fixation.
Sadly, it is all a bit complicated, so we don't seem to end up with any super precise conclusions...?
However, these two experiments do not clarify whether the attentional resources engaged in keeping the eye on the fixation dot affected only reflexive shifts in covert attention or also voluntary shifts...
Again, the only thing everyone really seems to agree on is that more research is needed ...

A model of true and illusory recollection

3.4. Memory: Inference and Illusion.

3.4.1. Amandine Eve Rey, Lyon University, France.
Talked about ''Memory is Deceiving: a Typical Size Induces the Judgment Bias in the Ebbinghaus Illusion''.

Rey writes:
Grounded cognition theories state that conceptual knowledge is closely linked to the current situation and embodied in sensory dimensions.
I.e. as we interact with the environment, ''knowledge related to our environment is continually recovered from memory... Which closely links the current situation to reactivated traces in memory''.

Here, the authors try to demonstrate that ''a simulated dimension in memory constructs across past experiences can influence a perceptual judgment (now)''.

Showing that the perceptual judgment of size can be influenced by the reactivation of a size in memory.

Interesting!

3.4.2. Kevin Darby, Ohio State University, Columbus, Ohio, USA.
Talked about ''The Cost of Learning: Interference Effects on Early Learning and Memory''.

Previous knowledge clearly influences how and what information we learn in the present.
And we all know that expertise in a particular domain, for example, increases memory capacity for information within that domain.

Here, the focus was on the ''effects of interference on associative learning''.
Where the authors find that ''interference is influenced by the amount of associative overlap between sets of information''.

Interestingly, the authors also reported that (especially) children are vulnerable to ''catastrophic levels of memory interference, in which new learning dramatically attenuates memory for previously acquired knowledge''. But introducing a 48-hr delay between learning and testing can apparently improve children's memory and eliminate interference quite a bit.

4. Impressions from Friday, July 25th.

4.1. Plenary - Minoru Asada.

Minoru Asada, Graduate School of Engineering, Osaka University (and former president of RoboCup) talked about ''Cognition and Robotics''.

Here, robotics is seen as a natural part of Cognitive Science (as Robotics includes):

Minoru Asada Lecture. Quebec 2014

In Asada's lab they seek to understand how
A cognitive developmental approach to robotics can help us understand the development of increasingly complex cognitive processes in natural and artificial systems, and how such processes emerge through physical/social interaction.
They think that robotic development should be seen as a balance between an ''embedded'' robotic ''nature'' and nurture, and that this will give us (robot) learning and development.

In order to understand the mind we must not only look at the mind itself, we must also see the mind as interactions with the environment.

I.e. even something as personal as the self works in connection with an environment (Is synchronized with an environment) - We can talk about an ''ecological self'' and a ''social self''. Robots must also be ''connected'', and social, in order to be (really) smart.
E.g. if we want robots to truly understand (us), they must also understand things like ''empathy''.
Meaning that robots need to have something like a ''pathway of empathy'', where they can ''experience'':

  • Emotional contagion
  • Emotional empathy
  • Sympathy and compassion
  • Felt emotion, including envy and Schadenfreude

What a robot that experiences ''Schadenfreude'' would be like was not really clear, but well, we will see...

Humans certainly need to be connected in order to learn. And in order to understand human development at the deepest levels (take something like ''vowel acquisition by maternal imitation''), Asada suggested that
Building a robot which reproduces such a developmental process seems effective. It will also contribute to a design principle for a robot that can communicate with human beings.
If successful, such robot models should certainly be able to help us understand more about how humans learn...
We validate the proposed model by examining whether a real robot can acquire Japanese vowels through interactions with its caregiver.

Minoru Asada Lecture. Quebec 2014

Asada ended his talk by stating that today's talk had mostly been focused on ''empathy''.
But future research should be more about ''designing robot emotions'', according to Asada. After all, language, empathy and motivation are key components in learning.

Eventually, it will also be important that robots can (themselves) express emotions, especially when they are to work more closely together with humans.

4.2. Symposium: Origins of Time.

How do people think about time? In an interesting introduction to the symposium, we were reminded that most people represent time spatially from left to right, or right to left, or from front to back, but that there are other ways to represent time.

E.g. the Pormpuraaw, a remote Australian Aboriginal community, represent time as going from east to west (See ''Remembrances of Times East'' by Boroditsky and Gaby). I.e.
Time flows from left to right when one is facing south, from right to left when one is facing north, toward the body when one is facing east, and away from the body when one is facing west.
Time is indeed a complicated thing...

4.2.1. Tyler Marghetis, Department of Cognitive Science, University of California, San Diego.
Talked about ''Linking space and time in the child's mind''.

According to Saint Augustine:
''What then is time? If no one asks me, I know what it is. If I wish to explain it to him who asks, I do not know.''
When you ask a 5-year-old to explain what a whole year is, they will typically make a huge gesture to indicate it. Indeed, many types of temporal gesture, previously documented only in adults, are present in children (And just as for adults, there might be links between these gestures and experience with reading and artifacts like calendars).

But things do become pretty confusing when we talk about things like the future and the past. Here it kind of makes sense to say that the things we have seen are in front of us, and the unattended is behind us. Which would tell us that the future is behind us?
Indeed, this is exactly how some cultures see it. Growing up in such a culture might even make such a world-view ''obvious''.

4.3. Symposium: Moral Cognition and Computation.

4.3.1. Fiery Cushman. Harvard University.
Talked about ''Moral Habits''.

So, what is morality then? Cushman writes:
In other words, moral decisions are just another kind of ordinary decision. Yet, there is something unsettling about this conclusion: We often feel as if morality places an absolute constraint on our behavior, in a way unlike ordinary personal concerns
Again, somewhat unsettling, why do we have morality?
This is not surprising; moral rules can satisfy two important demands. The first is self-understanding. One of the most central themes of the last fifty years of research in social psychology is that humans continually attempt to construct consistent models of their own attitudes, beliefs and behavior.
...
The second demand is social coordination. Moral rules serve as social objects; we use them not only to guide our own behavior, but also to express and coordinate normative expectations within social groups. In order to communicate a moral value it helps to make it explicit. In order to apply it clearly and consistently it helps to treat it as inviolable.
Then, what is the difference between moral cognition and non-moral cognition? Cushman writes:
But, we may need to think of non-moral cognition not as a complete blueprint, but instead as an underlying scaffold: A framework of common elements that supports a structure of more unique design.
In ''Moral values and motivations: How special are they?'' Cushman writes:
To a large degree, we find that various aspects of moral value, including the subjective value of moral actions, outcomes, and their integration, are supported by a domain-general cognitive and neural architecture implicated in reward-related processes and economic decision-making.
Again, ''morality'' and ''non-morality'' might use some of the same cognitive mechanisms:
... similarities between moral and non-moral value. Both motivate us to obtain certain goals or desirable outcomes - like the welfare of sick children or the newest technological gadget and we experience pleasure in both cases when we succeed.
And with these similarities we should perhaps be less surprised, when things go terribly wrong:
Many individuals are also perfectly willing to bargain sacred values for monetary gain in practice (especially when they think no one is watching), as scandal-prone politicians often remind us.
Indeed, morality is an interesting subject. And Cushman certainly gave a great introduction to it at the Symposium.

Good and Bad

4.3.2. Paul F. Bello. Office of Naval Research, US Navy.
Talked about ''Baby steps towards machine morality''.

In order to really understand morality, you will have to understand things like ''intentionality'', ''coercion'', ''obligations'' etc. Indeed, sometimes you basically have to be a mindreader in order to understand whether an agent is behaving morally or not.

Bello writes about moral perfection:
  1. Moral perfection requires acting in the best interest of other moral agents and patients.
  2. Acting in the best interest of an agent requires knowledge of the inner life of that agent, including its potentially irrational dispositions.
  3. Ethically flawed and otherwise irrational human beings are moral agents.
  4. Moral perfection requires knowledge of the inner lives of ethically flawed and otherwise irrational human beings.
Which clearly makes it rather difficult for a machine to act morally.

Luckily, machines might learn a little from humans:
Humans seem to agree on instances of moral exemplars. At a very minimum, we can salvage moral principles in machines by designing them to pattern their own behavior (to a degree) after that of an identified moral authority.
Certainly, humans are capable of some level of morality, without infinite cognitive resources, so there might be a way forward for machine morality... Bello is certainly right when he states that:
Treating ethics in a vacuum under ideal conditions of unlimited cognitive resources and a static world is tantamount to performing a gigantic gedanken-experiment. One that may never have any import to real-world moral doings.
Again, a brilliant and super-interesting talk!

Caring Robots

4.3.3. Josh Tenenbaum. MIT.
Tenenbaum's talk about ''Moral Cognition'' took us straight back to the ever-popular Trolley Problem:

The Trolley Problem

Well, Tenenbaum warned us that ''We should be a little bit careful about judging someone until we have walked in their moccasins''.

Certainly, moral problems are tricky. Tenenbaum quotes Thomas Nagel in a Cognition article:
Common sense suggests that each of us should live his own life (autonomy), give special consideration to certain others (obligation), have some significant concern for the general good (neutral values), and treat the people he deals with decently (deontology). It also suggests that these aims may produce serious inner conflict.
Luckily, there is probably also an upside to all of these terrible moral dilemmas. They might give us a powerful window into how we actually make decisions...

And (eventually) help us understand a little bit of how we go from sparse noisy observations of moral decisions to a broad moral competence, where we are able to support an infinite range of judgments and decisions (See more here).

Interesting stuff!

4.4. Rumelhart Award Lecture - Ray Jackendoff.

I had prepared for the conference by reading Ray Jackendoff's book ''Thought and Meaning''. So actually hearing Jackendoff's presentation was a conference highlight for me.

For more about the book, see my book review in section 6.
Video of Jackendoff's Award Lecture can be found here.

Jackendoff's Award Lecture

Jackendoff started out by stating that he would like to know how we encode things. Indeed, wouldn't we all... :)

But then he pointed out that we don't know how neurons actually encode simple syllables, deep down.
So, there is a long way to go, indeed.

Especially, when we address problems such as how to actually encode something like ''language syntax'' in a neural model, and actually making it work...

After the talk, one member of the audience wanted to dig a little deeper, and asked what Jackendoff thinks about connectionism.

Certainly a good question. My interpretation of Jackendoff's answer was that there is a problem with representing symbols in such models...

Well, more to come in the coming years...

5. Impressions from Saturday, July 26th.

5.1. Plenary - Stevan Harnad.

The Symbol Grounding Problem.

Stevan Harnad took the fact that we haven't ''reverse-engineered the brain yet (to tell us what thinking is)'', as a starting point for a discussion about what we actually know about cognition and ''symbol grounding'', and where we go from here.

Indeed, we are a long way from people having life-long pen pals without being able to tell whether they are AIs or human.

One of the really tricky problems is to get the AI's to understand what they are talking about...
If AIs only know words or symbols as references to other words or symbols, then there is never going to be any meaning in this.
At least some of the symbols have to be grounded, not in other symbols, but in some kind of ''sensory-motor input''. John Searle's Chinese Room argument deals with such scenarios: if we just look up all our symbols as references to other symbols, then we end up with ''symbol manipulation'', not a system that knows or understands.

And you can, of course, still wonder, if ''symbol grounding'' is sufficient to solve the Hard Problem of consciousness (For more, see impressions from my visit to the ''Constructing The World'' conference with David Chalmers).

Indeed (when all the ''small problems'' of cognitive science have been solved, and) when we understand
''Why it feels like something'' (i.e. the Hard Problem)
then we (cognitive scientists and philosophers) can all retire and go home.

But, well, despite amazing progress, that is not where we are.

Interestingly, Stevan Harnad listed what he called the last 4 ''cognitive revolutions'': all different forms of mind extenders, one more spectacular than the other.
And the web revolution is probably going to be even more spectacular, if progress on ''symbol grounding'' can give us much smarter computers ...
But still nothing that really tells us what ''understanding'' and ''being'' really are (the hard problem).

An awesome talk!


The symbol grounding problem (a couple of thoughts...):
In traditional AI, symbols are treated in a completely syntactic way. The relation of the symbols to the outside world is rarely discussed.

Still, humans need contact between a symbol and the outside world in order to make sense of what we are talking about. We say that the symbol is ''grounded''.
For computers, using symbols is fine as long as there is a human around to make sense of the symbols, to ''ground'' them.
The computer itself doesn't understand anything ...

A computer program, possessed only of internal symbols defined in terms of other symbols, can't ever get down to symbols that really mean something.

You will never develop a concept of an apple just from reading or hearing about it.
Concepts require that you smell, feel, see, taste...

Still, grounding our world in our immediate experience might seem like a ''shaky'' thing to do. But grounding it in the material world, in atoms and electrons, can quickly become an equally ''shaky'' thing to do, especially when you begin to think about quantum theory and things like that ...

Stevan Harnad

5.2. Symposium: The Future of Human - Robot Interactions.

5.2.1. Minoru Asada. Asada Lab, Osaka University, Japan.
Here we heard a little more about what they are up to at the Asada lab.

According to Asada, human - robot interaction can take many forms:

CB

Asada believes that robots with a human form give us many advantages.
E.g. the (human) form makes it easier for humans to predict the robot's behaviour.

(Perhaps more controversially) Asada also stated that
Our motivation for a human-like appearance is that it gives more intimate contact...
So, many of his latest robots are given a soft exterior, as well as the appearance of a child...
The current CB² robot prototype is 120 cm high, and has a head with 11 degrees of freedom.

5.2.2. Matthias Scheutz. Tufts University.
Matthias Scheutz talked about the ''The future of HRI''.

Much to look forward to: Eventually, we should be able to chat with the robots. On a Tufts news page, we read:
At the moment, our interaction with social robots is completely one-sided. These devices simply don't have the means to understand our words and gestures. That's something Scheutz wants to change. If we can create devices that seem more humanlike in their response to us, he reasons, they may be well suited for more complex work with people, such as tending to the basic needs of hospital patients or the elderly at home.

Amazon Book Review

6. Book Thoughts.

6.1. Thought and Meaning.

I prepared for the conference by reading Ray Jackendoff's book ''Thought and Meaning''.
An awesome book! Not that it is particularly fancy, long, or particularly difficult to read and understand. No, it just quietly says a lot about thought and meaning. An accomplishment, as thought and meaning are not easy subjects...

Understanding Brains and People.
How is it really that a collection of neurons can give rise to experiences? Exactly how do we become conscious of the world and of ourselves? Indeed, the more we know about the mechanisms behind experiences, the less they look like the way we ''experience'' things.

Sitting in front of a computer, you can wonder how pressing keys on a keyboard makes letters appear on the screen. It seems simple, and yet we would have to ask an awful lot of people to actually understand how those letters appear on the screen.
(And) understanding how the brain works is obviously a lot harder than understanding how letters appear on a computer screen.

If you go down to the nitty gritty of it, understanding language is almost beyond comprehension. Just making sense of the soundwaves of language is an incredible computational feat. First the soundwaves must be divided into sounds and words, and then we have to extract some meaning. Not easy...

It almost immediately follows that it is quite difficult to understand other people. Not only are brains difficult to understand, but brains are immersed in culture, where we also have lots of rules and regulations. Jackendoff puts it like this: ''One reason that other folks are hard to understand is that they might have a different system for understanding each other. And one reason they may seem stupid is that they can't understand us very well''.


Thought

Language.
Jackendoff is a linguist, so it is no big surprise that there is a lot about language in the book. Language is a great thing, as you can use it to describe the world. But as the world is a rather complex thing, it almost automatically follows that language is also a pretty complex thing. Even though linguists might disagree about what exactly language is.

Especially funny, and interesting, is language philosopher Jerrold Katz's view that language is really an abstract Platonic object. I.e. 21st-century American English is eternal and will exist 23,000 years from now...?

To Plato, thinking and language (speech) were closely connected:
Thinking: the talking of the soul with itself.
Jackendoff gives this Platonic version, from the Sophist:
Stranger: Are not thought and speech the same, with this exception, that what is called thought is the unuttered conversation of the soul with herself?
Theaetetus: Quite true.
Stranger: But the stream of thought which flows through the lips and is audible is called speech?
Theaetetus: True.
Sure, thought and speech are close, but not quite the same.
Still, Jackendoff is apparently very sympathetic to the work of a nineteenth-century philosopher called Heymann Steinthal:
The asserted inseparability of thought and speech is an exaggeration, and that man does not think in sounds and through sounds, but rather with and in accompaniment of sounds.
Jackendoff writes: ''Thought is independent of language, and the accompaniment of thought by conscious sounds is just that, an accompaniment''.

Consciousness and the Hard Problem.
Consciousness is obviously the tricky question.

Jackendoff comes up with this little riddle: ''Are you conscious when you are dreaming? On one hand, we want to say that you are unconscious, because you are not experiencing things in the world. On the other hand, you are certainly having experiences: Seeing things, talking to people, maybe even flying''.

Concerning how we get from neural firing and information processing to experiences (sitting outside in the sun, hearing kids playing, being there), what David Chalmers calls the hard problem, Jackendoff simply says that he doesn't think the question is tractable at this point in the sciences of the mind and brain.
So, he just thinks we should set it aside for now.

That is not to say that we will have solved it in 15 years' time, or that we will never solve it.
Meanwhile, there is plenty of other stuff to do, such as being more precise about the connections between the brain and experience.
What particular patterns of neural firing and information manipulation are correlated with which particular aspects of experience?

Consciousness

Constructing the World.
And consciousness is, of course, not all. There are lots of things consciousness doesn't know about.

Out there we have a physical world, but how our mind makes a representation of this is completely hidden from consciousness. We can't figure out how it works just by reflecting on our experience. We don't feel that the world out there is actually in there.

And in the middle of all of this, we shouldn't forget that thinking has to do with a body, and how we can move that body around in a 3-dimensional world.
Jackendoff mentions proprioception: with damage to proprioceptive brain centers, people might not end up paralyzed, but they just don't know where their limbs are until they begin looking for them (with their eyes).

Actually, reality out there is something we get when we link the position of our eyes and other sense organs to our internal representation of what is out there (sic!).
In the mind there is order and arrangement, but there is no experience of the creation of that order.

In a complicated scene we might have desks, chairs, faces, but there is no experience of putting it all together. Constructing the world isn't really a conscious thing.

(For more, see impressions from my visit to the ''Constructing The World'' conference with David Chalmers).

Free will.
Free will is one of these confusing things that are built into consciousness.
Taken from our ordinary, daily life perspective, Jackendoff assures us that we do have free will. But, if we take a more neuro-scientific perspective, the brain must be doing something to give us the feeling of free will...

Metaphysics.
Eventually we will ask what kinds of things there are. Indeed, that's what we do in metaphysics. Are there objects? Times? Events? Numbers?

Indeed, what are people?
Going back at least to Descartes, the thinking has been that humans have souls, are conscious, are rational, have language, and have moral responsibility.
The modern world seems to disagree. Science isn't too sure about the soul (many scientists will tell us that there is no such thing, and only a few will leave room for something unknown, as Chalmers does with his ''Hard Problem of Consciousness'').

Jackendoff summarizes by saying that in the end we get this message from the modern scientific world: ''There is nothing special about you. You are just a chance product of mindless evolutionary processes, operating in an insignificant corner of the universe. Your life has no meaning.
In fact, there is not even a you, there is just a clump of neurons interacting''.

Given such views, Jackendoff doesn't think there is any big mystery in public resistance to teaching evolution in schools.
Jackendoff writes: Between such a picture, and one in which you are meaningful and even sacred, where it matters what you do, and where there is a God that cares about you, which would you choose...?

Some might even say: ''If science tells me I don't exist, and that there is no right and wrong, then to hell with science''.

Jackendoff doesn't see God as the real issue here. The real issue to him is the existence and importance of a ''Me''.

He says he misses a way to resolve the crisis, where our lives become meaningful and sacred by the way we live these lives.

Well, hasn't he almost solved the problem for us, by that sentence alone?

Rationality.
Ultimately, you have to trust your gut about what is right and what is wrong.
Jackendoff certainly shows that the idea that we could be guided by pure rationality is somewhat overrated.

He writes: ''What we experience as rational thinking is necessarily supported by a foundation of intuitive judgment. We need intuition to tell us whether we're being rational''.

What we call rational thinking simply can't take place without a huge complex background of intuitive thinking. Rational thinking isn't an alternative to intuitive thinking, rather it depends on intuitive thinking.

And these days, we are certainly not building our knowledge of the world on our own rationality alone.
Going back to the Enlightenment, the goal was to rebuild our knowledge of the world on rational foundations. Received wisdom, especially wisdom from the church, should not be trusted without further investigation. Everything should be questioned.
Today this approach is rather difficult. We can't do all the scientific experiments ourselves; just deciding which received wisdom to trust is a full-time job.

And when at last we do apply our own rationality, then, as explained above, that is guided by gut feelings about what the next rational move should be.

Perspectives.
In the land of thought and meaning, finding the right solutions has a lot to do with finding the right perspective.

According to Jackendoff, there is no such thing as one overarching, perspective-free truth about the world.
If we take the wrong perspective, we end up saying weird things, like that there is no Me, that there are no sunsets... etc.

Indeed, questions about the world tend not to converge on one set of answers...
Following his arguments, this is not ''giving up'' (everything is relative, so why bother), but actually a way to sharpen our tools, so that we can do better.

An awesome book.

Amazon review: [1].


Nature vs Nurture

6.2. Nature or Nurture.

I bought Jesse J. Prinz's book ''Beyond Human Nature'' at a stand in the Quebec Conference Center lobby.
I read it while travelling home from the conference. It turned out to be an interesting input to the nature vs. nurture debate.

Sure, we are a product of biology, but we are also a product of culture and experience. According to Prinz, there is indeed something after genetics and evolution.
Culture and experience might shape our lives much more than biological determinism will have us think...?

Prinz's book takes us from babies dressed in pink or blue, and treated by their parents accordingly, to morality in hunter-gatherer versus agricultural societies.
Where is it moral to be a cannibal, and where can you have slaves?
Indeed, morality might be more learned than we like to think...

Language is what allows us to talk about the whole thing - and Prinz makes a good case for statistical learning as the basis of language - instead of an innate language module...
He convinced me...

Indeed, apparently, a lot less is innate than biological determinism would have us think, and a lot more is learned than we usually think.

Amazon review: [2].


Girls in Robotics

7. Misc. Conference Folders & Links.


Robotics 2014
ICCM 2015

Die Gesellschaft fuer Kognitionswissenschaft
Wright-Patterson Air Force Base. Cognitive Modeling

Poppy Humanoid










Poppy Youtube video.

Poppy Project

pypot Library

8. Conclusion.

The end of a wonderful conference. With many memorable talks.
And let's not forget... (also) many memorable conversations with many great (poster) presenters.

Certainly, I'm already looking forward to my next visit to the CogSci conference!

Posters. CogSci 2014. Quebec City, Quebec, CA





CogSci 2014, Posters.






Time to pack up & say goodbye.

Nao Robot

The end of a great conference!


Index

July 2014, Montreal and Quebec pics.
Enactive Cognition Conference (Reading 2012) | Nasslli 2012 | WCE 2013 | CogSci 2012 | CogSci 2013 | Aamas 2014 | Areadne 2014
About www.simonlaub.net | Site Index | Post Index | NeuroSky | Connections | Future Minds | Mind Design | Contact Info
© September 2014 Simon Laub - www.simonlaub.dk - www.simonlaub.net - simonlaub.com
Original page design - September 10th 2014. Simon Laub - Aarhus, Denmark, Europe.