Impressions and Links
from CogSci 2012

(The 34th annual meeting of the Cognitive Science Society).

In Sapporo, Japan. August 1-4, 2012.

I had the great pleasure of taking part in CogSci 2012. Below you will find impressions from the conference, and links for further reading. Wonderful stuff! Certainly, I'm already looking forward to CogSci 2013 in Berlin!

The conference was held in Sapporo, Japan. Sapporo is the fourth-largest city in Japan by population, and the largest city on the island of Hokkaido.
Sapporo is famous for hosting the 1972 Winter Olympics, and it is one of the most popular tourist destinations in Japan. In 2006, the annual number of tourists reached 14 million.

During the winter, two million people visit Sapporo for the winter festival to see ice sculptures at the Odori Park and Susukino sites.

Also, in Odori Park one finds the Sapporo TV Tower, built in 1957 (147.2 metres high, with an observation deck at a height of 90.38 metres). For more about Sapporo, see my Tokyo Sapporo pics.

But we were there for the CogSci 2012 conference, which was held in the Sapporo Convention Center.
On the convention center homepage one reads:
The Main Hall can easily accommodate large conferences and is innovatively styled as a multi-purpose venue.
The Conference Hall creates an impressive atmosphere. It is equipped with a simultaneous interpretation system for six languages. It has a capacity of 700, and can be used for international conferences, as well as other meetings and congresses.

Impressive stuff. But enough about the surroundings, back to the conference:

CogSci 2012.

CogSci 2012 was the annual meeting of the Cognitive Science Society, and the society's first meeting in Asia.
Researchers were invited to attend and discuss the latest theories and data with the world's best cognitive science researchers.

And it certainly lived up to expectations. What a mind-blowing week it was!


1. Impressions from August 1st 2012.

Probability, Programs and the Mind:
Building Structured Bayesian Models of Cognition.

This was a full-day tutorial on building Bayesian models of cognition, an introduction to a probabilistic language of thought, by Noah Goodman and Josh Tenenbaum.

On the tutorial's homepage, Probabilistic Models of Cognition, one reads:
What is thought? How can we describe the intelligent inferences made in everyday human reasoning and learning? How can we engineer intelligent machines? The computational theory of mind aims to answer these questions starting from the hypothesis that the mind is a computer, mental representations are computer programs, and thinking is a computational process running a computer program.
It will here be assumed that mental representations are like theories:
Pieces of knowledge that can support many inferences in many different situations. Knowledge that captures more general descriptions of how the world works - hence, the programs of the mind are models of the world that can be used to make many inferences...
On top of that, probability is thrown in: probabilistic generative models describe processes which unfold with some amount of randomness, and probabilistic inference describes ways to ask questions of such processes.
All in all, it is about the knowledge that can be represented by probabilistic generative models and the inferences that can be drawn from them.

Thought as simulation: playing around with inductive, abductive and deductive reasoning.
And graded reasoning:

Using the tutorial's preferred language, Church, it looks something like this:
What's the chance of having lung cancer if you cough? Or a cold (if you cough)?
If any of these are true (lung cancer or a cold) you might cough...

In another example one might want to ask if a positive mammogram means that a person has breast cancer:
In this model the chance of breast cancer is 1%. If breast cancer = true there is an 80% chance of a positive mammogram; if breast cancer = false, there is still a 9.6% chance the mammogram comes up true.
So what is the chance of breast cancer given positive mammogram?
Run the program a hundred times for positive mammogram, and apparently the Church query tells us that the chance of breast cancer is then less than 0.1.
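The Church program itself isn't reproduced here, but the same rejection-query logic can be sketched in plain Python (my own sketch, using the numbers stated above: a 1% prior, an 80% hit rate and a 9.6% false positive rate):

```python
import random

random.seed(0)

# Numbers stated in the example above: 1% prior on breast cancer,
# 80% chance of a positive mammogram given cancer, 9.6% given no cancer.
P_CANCER = 0.01
P_POS_GIVEN_CANCER = 0.80
P_POS_GIVEN_HEALTHY = 0.096

def sample():
    """Draw one (cancer, mammogram) pair from the generative model."""
    cancer = random.random() < P_CANCER
    p_pos = P_POS_GIVEN_CANCER if cancer else P_POS_GIVEN_HEALTHY
    return cancer, random.random() < p_pos

# Rejection query, as in Church: keep only the runs where the mammogram
# came up positive, then ask how often those runs also had cancer.
kept = [cancer for cancer, positive in (sample() for _ in range(200_000)) if positive]
posterior = sum(kept) / len(kept)

# Exact answer by Bayes' rule, for comparison.
exact = (P_CANCER * P_POS_GIVEN_CANCER) / (
    P_CANCER * P_POS_GIVEN_CANCER + (1 - P_CANCER) * P_POS_GIVEN_HEALTHY)

print(f"sampled: {posterior:.3f}, exact: {exact:.3f}")
```

The exact posterior, 0.01*0.8 / (0.01*0.8 + 0.99*0.096), is about 0.078 - which is indeed "less than 0.1", as the Church query reported.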
There are facts and rules.
Then we can ask - if this is the end-state, what was the start-state or
if this is the start-state, what is the end-state.
And we can take random walks through program executions, and see where we might end up.

1.1 Taming the Bayesian zoo.

Here, we want something to unify the Bayesian zoo. Provide a general language for representing a complex set of probabilistic dependencies in a domain:
Something that can deal with the commonsense core - that human thought is structured around a basic understanding of physical objects, intentional agents, and their causal interactions.
Something to unify Bayesian models for decision making, memory, visual perception, attention etc.

Such a unified theory of cognition will have to bring together several good ideas about human thought.
That thought is: a) Abstract and Compositional, b) Causal, c) Probabilistic, d) Mental Simulation, e) Enabled and Constrained.

1.2. Earlier models for a grand unified theory of cognition:

Kenneth Craik (1914 - 1945) (wiki):
In 1943 he wrote The Nature of Explanation. In this book he laid the foundation for the concept of mental models, that the mind forms models of reality and uses them to predict similar future events. He was one of the earliest practitioners of cognitive science.
Around 1980, Johnson-Laird, and Gentner and Stevens, published books about mental models. Viewing cognition as simulation //but they didn't have all the computational tools to formalize this grand theory the way we have now//.
PDPs (parallel distributed processing) and neural networks came at about the same time //but are not expressive enough //.
Later we saw cognitive architectures like Soar, ACT-R and EPIC.
//Used for problem solving and task analysis. But not expressive enough to model infants and what babies can do//.
From 2000 onwards, Bayes is a paradigm, but not a grand unified theory.
//Problems with going from Bayes Net (good abstractions, flowcharts) to something runnable//.

1.3. Remarks.

So, here, in these Bayesian Models of Cognition the focus is on simulations. And the idea that, indeed, human reasoning is a simulation process. It is not just mental arithmetic or probabilities.
Indeed, it will be interesting to see how this framework of ideas will develop in the coming years!

Further reading:

- For thoughts on how to link actual neuron behaviour in the brain to these Bayesian models, Matt Botvinick's work might be an inspiration.

- For thoughts about learning: It has been suggested that learning is just like adding another Church/Lisp library to your total program. But it might be possible to do better - see Charles Kemp for thoughts on (hierarchical) learning (and reasoning).

- The book Bayesian Brain by Kenji Doya was highly recommended as a starting point for further explorations.

2. Impressions from August 2nd 2012.

2.1. Decision making.

First up was a symposium on Computational, Cognitive, and Neural Models of Decision-making biases.
It started with a few (brilliant) words about taking samples (by Nick Chater):

Inference by sampling:
- General and scalable.
- Psychologically plausible.
- Predicts human idiosyncrasies.

More samples give better results - but cost more in terms of time.
A small number of samples introduces anchoring biases (insufficient adjustment describes the common human tendency to rely too heavily, or ''anchor'', on one trait or piece of information when making decisions).
And taking only a few samples gives terrible decision making when the payoff (positive or negative) is very high in some rare cases - you risk missing those cases entirely.
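Chater's point about small samples can be illustrated with a little arithmetic (the numbers here are my own, purely for illustration): if a rare but very high-payoff event has probability p on each draw, the chance that n samples never see it is (1 - p)^n.

```python
# Hypothetical numbers (mine, not from the talk): a rare, very high-payoff
# event occurs with probability p on any single draw. The chance that a
# decision maker who takes only n samples never sees it is (1 - p)**n.
p = 0.001
for n in (10, 100, 10_000):
    miss = (1 - p) ** n
    print(f"{n:>6} samples -> P(rare event never sampled) = {miss:.4f}")
```

With 10 samples the rare event is missed 99% of the time; only at thousands of samples does it reliably show up in the estimate.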

Then, followed a discussion about cognitive vs. rational principles.
Perception, Attention, Memory vs. Logic, Probability and Decision Theory.

E.g. People care about ratios, not absolute values.
- Where comparisons are made within dimensions (e.g. size of a diamond, cut of diamond).

2.2. Attention in the real world.

Kerstin Sophie Haring gave a presentation on ''The use of ACT-R to develop an attention model for simple driving tasks''.
ACT-R is a cognitive architecture which aims to define the basic and irreducible cognitive and perceptual operations that enable the human mind. In theory, each task that humans can perform should consist of a series of these discrete operations. ACT-R was here used to develop an attention model for simple driving tasks (a basic attention loop of driving).
Rather intriguingly, it was all made available as freeware - so anyone can extend it and use it. Tempting!

Tom Foulsham was next with ''Eyes Closed and Eyes Open. How expectations guide fixations in real world search''. We think that a chimney should be at the top of a scene, just like a ceiling fan. Flower pots, on the other hand, can be in many places.
With a scene preview of (as little as) 57 ms, people can refine where to expect a target to be.
So, search consists of an interplay of semantic knowledge and initial cues.

He was followed by Elizabeth Redcay, and a talk about ''Gaze cues in complex, real world scenes'', on what drives difficulties with joint attention in autism.
I.e. some experts seem to indicate that reduced social attention (to faces and eyes) gives a failure to direct attention to what others are paying attention to. The talk presented some very interesting views on this.

Wayne Gray followed with a talk about ''Cognitive workload and the motor component of visual attention'', on understanding why a workload (memory task) might make people turn slower in a driving task.
(Very) Interestingly, current theories about cognitive control are actually insufficient to account for the data in some of these experiments.

All, brilliant stuff!

2.3. Bayesian Modeling. Methodology.

Momme von Sydow offered some good points on ''Bayesian Logic and Trial-by-Trial Learning''. Followed by Thomas Griffiths on ''Comparing the inductive biases of simple neural networks and Bayesian models'': Yes, for any Bayesian model there is an equivalent neural network. Both can reproduce human results, and both approaches are universal.
For more on Bayesian modeling see day 1.

2.4. Embodied Cognition.

In Azadeh Jamalian's presentation, ''Gestures alter thinking about time'', we were told that Wednesday's meeting was moved two days, where gestures decided whether that meant Friday or Monday. So, indeed, gestures do change thought. A number of experiments illustrated it.
Sergiu Tcaci Popescu followed with a talk about ''Spontaneous body movements in spatial cognition''. A lovely video demonstrated that mental transformation is easy if it is followed by a bodily movement... When people do mental rotation, most (statistically significantly) align themselves accordingly. They rotate their head (and shoulders) with 10 (?) percent of the mental rotation.

2.5. Cognitive Architectures and Computational Modeling.

Terrence Stewart (University of Waterloo, Centre for Neuroscience) presented a talk about ''Spaun: A Perception-Cognition-Action Model using Spiking Neurons''. We were introduced to the Nengo system, a software package for simulating large-scale neural systems. It all looked pretty impressive.
Vladislav Veksler presented a talk about ''An integrated Model of Associative and Reinforcement Learning''. The thesis was that you need to integrate various learning mechanisms to cope in a realistic environment.
E.g. reinforcement learning is insufficient when the target changes. Here combinations with associative learning give much better results. Dynamic and diverse environments demand diverse learning methods.

2.6. Rumelhart Prize Lecture: Peter Dayan.

This year the Rumelhart Prize went to Peter Dayan.
Launching from Rumelhart's question, ''If the brain is a computer, what kind of computer is it?'', the lecture focused on reinforcement learning (an area of machine learning in computer science, concerned with how an agent ought to take actions in an environment so as to maximize some notion of cumulative reward).
According to David Marr, one must understand (the brain's) information processing systems at three distinct, complementary levels of analysis (this idea is known in cognitive science as Marr's Tri-Level Hypothesis): the computational level, the algorithmic/representational level and the physical level. So, we should look at reinforcement learning from all of these angles.
The conclusion was that reinforcement learning lives in the center of cognitive science. In:
- Reasoning.
- Nature vs. nurture.
- Learning.
- Behavioural anomalies.
And future work should see reinforcement learning use in working memory, attention, language and beyond.
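As a (very) loose illustration of the reinforcement learning definition above - and not anything from Dayan's lecture - here is a minimal Q-learning agent on a toy chain world of my own, learning to walk towards a reward:

```python
import random

random.seed(42)

# Toy chain world (my own example, not from the lecture): states 0..3,
# reward 1 for stepping onto state 3, which then resets the agent to 0.
N_STATES, GOAL = 4, 3
ACTIONS = (-1, +1)            # step left, step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma = 0.5, 0.9

state = 0
for _ in range(5000):
    action = random.choice(ACTIONS)                  # purely random exploration
    nxt = min(max(state + action, 0), N_STATES - 1)  # walls at both ends
    reward = 1.0 if nxt == GOAL else 0.0
    # Q-learning update: off-policy, bootstraps from the best next action.
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = 0 if nxt == GOAL else nxt

# The greedy policy read off Q should move right in every non-goal state.
greedy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(greedy)
```

Even with purely random behaviour, the off-policy update converges on the optimal "always go right" policy - the sense in which an agent maximizes cumulative reward.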

3. Impressions from August 3rd 2012.

3.1. Keynote Talk: Nancy J. Nersessian.

Nancy J. Nersessian's talk (Georgia Institute of Technology) gave some brilliant insights into Building Scientific Cognition - conceptual innovation on the frontiers of science. Building distributed cognitive frameworks, i.e. (in the lab) building the plane while it is flying.
According to Feynman, building is an important part of understanding (''What I cannot create, I do not understand''). In this presentation it was changed to: ''What I cannot build, I cannot understand''.
An experiment was described that set out to build a network of brain cells controlling a mechanical arm to produce art. In this experiment the process was all about building models, agreeing on them, setting constraints, and hoping that this would eventually yield conceptual innovation. Simulation was here seen as a ''Running Literature Review''.

3.2. Symposium. Neural computations supporting cognition. Rumelhart Prize Symposium in honor of Peter Dayan.

It started with Kenji Doya - ''Reinforcement learning and the Basal Ganglia''. Followed by John O'Doherty on ''Fractionating model-based reinforcement learning, its component neural processes'' - goal directed control vs. habitual systems. And Peter Bossaerts on ''The neural process of subjective belief formation in humans''. Many interesting comments, e.g. what to do in situations where knowledge of past events will never help to predict future winnings (Martingale scenarios).
Alexandre Pouget (University of Geneva) gave some insightful (and funny) comments on approximations. Take human optics: Why is human optics so bad?
Well, the limitation for performance is not the optics, but computations in the brain. The computational part makes the end result a lot worse than the optics could ever have done. But, approximations are unavoidable.
See Summary.

3.3. Pragmatics and humour.

Tagiru Nakamura gave a talk about ''The role of the Amygdala in the process of humour appreciation''.
The purpose of this study was to investigate the neuropsychological process and timing of appreciating humor.
According to a standard humor appreciation model (Suls, 1972; Wyer & Collins, 1992; Yus, 2003; Martin, 2006), incongruities in the content of the utterance must be identified and resolved for it to be humorous. The incongruities are typically caused by violation of a set of expectations stored in ''mental schemas'', which are ''formed on the basis of past experience with objects, scenes, or events and consist of a set of (usually unconscious) expectations about what things look like and/or the order in which they occur''.
See paper.
The motivation for appreciating humour comes from a search for relevance:
We propose that activation of amygdala in humor appreciation can be interpreted as the result of detecting the optimal relevance in humorous utterances - the ''aha'' reaction.
Where,
Lesion and neuroimaging studies have shown that the amygdala is involved in an evaluation of motivationally relevant events... We argue that the amygdala is a candidate for relevance-based processing.
The authors found that when a participant judged something humorous, the bilateral amygdala was significantly activated (at the 4th phase of its processing). Banal (non-humorous) events are also detected by the amygdala, though somewhat earlier.

3.4. Eye tracking. Theory and practice.

In the conference hall, I got a demonstration of the Tobii eye tracking equipment. The equipment, the eye-tracker, samples the position of the user's eyes (on average it samples every 20ms, i.e. at 50Hz) and is characterized by the unobtrusive addition of the eye-tracking hardware to a monitor frame.
The equipment can be used to spot dyslexia and autism in children... And is, obviously, also widely used in advertising to spot what people are actually looking at.
The big version of the equipment could be yours for about 250,000 dollars; a smaller version can be bought for 30,000 dollars.
Konstantina Garoufi gave a brilliant talk about ''Using listener gaze to augment speech generation in a virtual 3D environment'' - how tracking listener gaze can help monitor understanding. Inspection of an object means looking at it for more than 300 ms (successful gaze: inspecting the target more than the distractors; unsuccessful gaze: inspecting distractors more often than the target).

3.5. Moral Cognition.

Jorie Koster-Hale (MIT) gave a talk about ''Thinking in patterns: Using multi voxel pattern analyses to find neural correlates of moral judgement in neurotypical and ASD populations''.
Interesting comments about autism. It turns out that people with autism care less (than neurotypical people) about the intention behind an action (when the action occurs). Accidental harm vs. intentional harm is more or less the same to them.
Jonas Nagel followed with a talk about ''Force dynamics as a basis for Moral intuitions''.
Force dynamics is a semantic category that describes the way in which entities interact with reference to force. Say there is a principle ''non-interference pattern'' (don't push sentient beings), then the force dynamical pattern, Bob pushes Joe, violates the ''non-interference pattern''.
Alex Wiegman talked about ''Order effect in moral judgements'': there is evidence that people's moral intuitions can sometimes be influenced by morally irrelevant factors.
See article.
Kuninori talked about ''Effect of number of victims in moral dilemmas''. In the footbridge dilemma: A trolley is headed for five people. You are standing next to a large man on a footbridge spanning the tracks. The only way to save the five people is to push this man off the footbridge and into the path of the trolley. Is that morally permissible? Most people say ''No''.
Interestingly, ''playing around'' with the number of victims in the trolley and footbridge dilemmas, it turns out that it gets worse to kill more people (in the footbridge dilemma).

4. Impressions from August 4th 2012.

4.1. Keynote Talk: Lawrence W. Barsalou.

According to the Department of Psychology at Emory University (Atlanta, Georgia) homepage Barsalou's research addresses:
The nature of human conceptual processing and its roles in perception, memory, language, and thought. The current theme of his research is that the conceptual system is grounded in the brain's modal systems for perception, action, and internal states.
Specific topics of current interest include the roles of conceptual processing in emotion, self, stress, abstract thought, and contemplative practices.
His research also addresses the role of mental simulation in conceptual processing, the situated and embodied nature of knowledge, the dynamic online construction of conceptual representations, the development of conceptual systems to support goal achievement, and the structure of knowledge.
And what a super interesting talk (about situated conceptualization) it turned out to be!

4.1.1. Concepts and Cognition:

Concepts are basic elements of knowledge. And, concepts are a key component in cognition.
The conceptual system provides representational support across the spectrum of cognitive activities (And here it is speculated that the conceptual system is a system distributed throughout the brain that represents knowledge about the world).
In online processing, as people pursue goals in the environment, rather than starting from scratch when interacting with an entity or event, agents benefit from (concept) knowledge of previous category members.
The conceptual system is also central to offline processing when people represent non-present entities and events in memory, language, and thought.
Concepts support the cognition that lets us go from perception to action:
Concepts are not typically processed in isolation but are typically situated in background settings.
When representing a bicycle, for example, people do not represent a bicycle in isolation but represent it in relevant situations.
Even when people focus attention on a particular entity or event in perception, they continue to perceive the background situation, the situation does not disappear.
Whenever you think of a concept ''dog'' you also think about the situation that you might find this dog in. We represent a lot of background information. I.e. the brain is a situated processing unit.
The cognitive system produces many different situated conceptualizations of bicycle, each tailored to help an agent interact with bicycles in different situations. For example, one situated conceptualization for bicycle might support riding a bicycle, whereas others might support locking a bicycle, repairing a bicycle and so forth. On this view, the concept for bicycle is not a single generic representation of the category. Instead, the concept is the skill or ability to produce a wide variety of situated conceptualizations that support goal achievement in specific contexts.
Concepts might not just reside in episodic or semantic centers. Instead e.g. the concept for tool might be emergent as interplay between a lot of centers processing different properties of tools.

4.1.2. Simulations:

Simulation is a basic process in the brain. And, in summary, a situated conceptualization typically simulates four basic types of information from a particular perspective:
1. Perceptions of relevant people and objects.
2. Actions.
3. Introspections.
4. Settings.

Putting all these together, a situated conceptualization is a multi-modal simulation of a multi-component situation, with each modal component simulated in the respective neural system.
And, surely, a lot of things might be emergent from interplay, also things we usually assume to be modular.

Finally, a situated conceptualization places the conceptualizer in the respective situation, creating the experience of ''being there''.
By re-enacting actions and introspections from a particular perspective, a situated conceptualization creates the experience of the conceptualizer being in the situation. The situation is not represented as detached and separate from the conceptualizer [1].

4.1.3. Subjective realism:

When you imagine something it seems real. A tasty food, or an old emotion.
But think about the Buddhist take on this subjective realism:
>>Imagine eating a food, but practice taking out how it feels. Try removing the subjective realism of these experiences<<
Certainly, a nice exercise to investigate how our world (simulation) is constructed.

4.1.4. Emotions:

And when you begin thinking about constructions, I couldn't help thinking about emotions, even though they weren't dealt with (that much) in the talk.
Indeed, think about what adding or removing emotions from a scene does to the experience of the scene.
Where, BTW, I find John E. Laird's description of emotions quite appealing (AISB Quarterly, No. 134), even though it doesn't explain the sensation of emotions:
Theories of emotion propose that an agent continually evaluates a situation and that the result of the evaluation leads to emotion. The evaluation is hypothesized to take place along multiple dimensions, such as goal relevance (is this situation important to my goals?), goal conduciveness (is this situation good or bad for my goals?), causality (who caused the situation?), control (can I change the situation?), and so on. These dimensions are exactly what an intelligent agent needs to compute as it pursues its goals while interacting with an environment.

4.1.5. Future Work:

After this absolutely stunning presentation, it certainly caught my attention when it was mentioned to me (later, in discussions) that:
Perhaps the most pressing issue surrounding this area of work is the lack of well specified computational accounts. Our understanding of simulators, simulations, situated conceptualizations and pattern completion inference would be much deeper if computational accounts specified the underlying mechanisms. Increasingly, grounding such accounts in neural mechanisms is obviously important as well, as is designing increasingly sophisticated experiments to assess and develop these accounts [2].
Could anything be more exciting?

4.2. Robotics and Emotions.

After Lawrence W. Barsalou's stunning presentation it was time for a brilliant Symposium: Robotics and Emotions with Yuichiro Anzai, Rolf Pfeifer and Hiroshi Ishiguro.

Yuichiro Anzai's talk was called ''From fungus eaters to emotionally, socially interacting robots''.
Anzai started with a presentation of Masanao Toda (1924 - 2006, professor at Hokkaido University, Sapporo), one of the truly grand old men of robotics. A visionary who dealt with both rather simple robot designs, like fungus eaters, and rather sophisticated humanoids.
See: Toda, M. (1962): Design of a Fungus-Eater, in Behavioral Science. And Toda, M. (1982): Man, robot, and society: Models and speculations.
As early as 1986, Toda was thinking about Human-Robot Interaction (HRI), and worked on how to share attention, conceptual space and emotion using e.g. gestures.
For Toda a robot could have just about any size. A robot could be a humanoid, but it could also be a ship or a city.
Pretty impressive stuff by the standards of 1986, and obviously a great inspiration for the generations that followed.

Next up was Rolf Pfeifer with ''Do robots need emotions?''.
Many clever comments about cognition as an emergent phenomenon from sensory, motor and interaction processes.
Where we understand by building (the robots), and observe the task distribution between the brain (control), the body (morphology) and the environment.
From thereon maybe it will be possible to get to the heuristics, or design principles, that on the one hand capture theoretical insights about such intelligent (adaptive) behavior, and on the other hand provide guidance in actually designing and building these systems.
Toda wouldn't be happy yet though.
Sure, the interaction between brain, body and environment is a huge improvement. But what about robotic urges, mood states and emotional states? Surely, emotions could improve Human-Robot Interaction! Robots could detect human emotional states and behave more appropriately in situations, according to a human emotional state. Robots could be angry or happy when communicating with a human. After all, emotions are the fastest way of communication between humans.
All stuff to be investigated further in e.g. the proposed (1 billion Euro) EU research flagship ''Robot companions for citizens''.
Following Toda's vision to its logical conclusion, in the end, robots should not only have emotions and be able to detect emotions - robots should also achieve ''Sentience''.

Rolf Pfeifer was followed by Hiroshi Ishiguro, who had been sitting almost next to me during the previous presentations. Obviously, Ishiguro talked about robotic humanoids, and how they can help us understand what it means to be human.
We were shown the secret video where his daughter first meets her robot copy (the video ends with his daughter crying in confusion and shock) - a visit far into the Uncanny Valley.
As usual, brilliant and thought-provoking stuff.
For more about Ishiguro, see my report: A visit to Ishiguro's Robot Lab.

The question session was quite lively. Some members of the audience thought it a very scary thing to have a robot get angry with you.
Rolf Pfeifer, answering for the panel, took it a lot calmer: We don't have a robot capable of this yet, so it is a rather hypothetical question...
In the end it all follows from the investigation of ''Robot Sentience''...

Surely, we are going to have a lot of glorious fun...
And what a wonderful symposium this was.

4.3. Information, Search and Pattern Recognition.

Yanlong Sun presented a paper on our ''Perception of Randomness'': People tend to think that streaks are rare in random processes. Understanding that the waiting time for a streak is higher, but that there will be streaks in random processes, is apparently very difficult for us to really thoroughly grasp.
Random is a difficult subject!
Maarten Speekenbrink followed with a talk about ''Change detection under autocorrelation'', where he compared formal statistical methods with human judgements. Our human heuristics for detecting change, i.e. change compared to variability, turn out to work pretty much like (simple) Bayesian models. Apparently, evolution has not bothered to make it more advanced than this!

4.4. Group behaviour.

Ipke Wachsmuth talked about ''An operational model of joint attention'', where joint attention is a step towards shared intentionality - knowing together that they are doing this.
Obviously, there is a lot of timing involved in this, and it turns out that there can be at most 5 seconds between an initiated act and the response (gaze patterns in interactions between humans).
Yugo Hayashi gave us some insights into the role of the maverick, a group member with a different perspective (in the talk ''The Effect of the Maverick''). It turns out that there is experimental evidence that the performance of a group improves when there is a maverick in the group... Not all that surprising imho.

4.5. Memory.

Philip Beaman gave the presentation ''Lexical access across languages''. Background noise - especially speech - substantially reduces your ability to recall. Again, not all that surprising imho.
Isaiah Harbison talked about ''Self terminated vs Experimenter terminated memory search''. Rather interestingly, no difference was found in the number of items retrieved from memory. Participants do not retrieve more items in the open-interval design than in the closed-interval design.
According to the talk:
These results indicate that participants do not in fact terminate search over-quickly in open - relative to closed - interval designs. Furthermore, as participants were able to retrieve the same amount of items in less time, the results suggest that the open-interval design might provide a method to measure not only how memory is searched, but also how efficiently memory can be searched.

4.6. Symposium. What can Cognitive Science say or learn about the Economic Crisis.

Gerd Gigerenzer gave a very interesting talk about ''Simple Heuristics for a safer Financial System''. (Gigerenzer is the author of the book Gut Feelings: The Intelligence of the Unconscious (2007), which has been translated into 18 languages. He is well known for his investigations of decisions under limited time and information, and has trained U.S. Federal Judges, German physicians, and top managers in decision making and understanding risks and uncertainties.)

Gigerenzer first talked about some of the financial models used by banks.
Famously, David Viniar, CFO of Goldman Sachs, reported a 25 sigma event in the financial system several days in a row...
So, what is a 25 sigma event? Well, according to Wikipedia:
- a 3 sigma event is something we only expect to see once a year.
- a 5 sigma event is an event that we only expect to see once every 4,776 years (once in recorded history).
- a 6 sigma event is an event that occurs once every 1.5 million years.
- a 25 sigma event is very rare. Something we expect to see once every 10^135 years...
(It is much more likely to guess an AES-256 key in one attempt...).
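For the curious, these waiting times can be recomputed. This is my own sketch, under two assumptions that seem to match the quoted figures: two-sided normal tails, and one independent observation per day.

```python
import math

def tail(sigma):
    """Two-sided standard normal tail: P(|Z| > sigma)."""
    return math.erfc(sigma / math.sqrt(2))

# Expected waiting time, assuming one independent observation per day
# (these two assumptions are mine; they roughly reproduce the quoted figures).
for s in (3, 5, 6, 25):
    p = tail(s)
    days = 1 / p
    print(f"{s:>2} sigma: P = {p:.3e}, roughly once every {days / 365:.3g} years")
```

A 25 sigma tail probability comes out around 6e-138, i.e. a waiting time on the order of 10^135 years - so reporting several such events in a row says far more about the model than about the market.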

But, David Viniar reported not just one such event, but several!
Something was clearly wrong with the model.

According to Gigerenzer, these financial models were set up between banks and governments. And had a lot of free parameters, where the banks had been allowed to estimate the values of these free parameters. Not surprisingly, the banks had selected values that were helpful to their businesses, rather than more correct, neutral values. In the end, the models were bound to produce absurd results. And so we ended up with these 25 sigma results.
Gigerenzer suggests:
Heuristics are necessary for good decisions. Complex problems do not require complex solutions. Less is more.
Mervyn King (Bank of England) has suggested that a simple rule like:
Don't use leverage above 10:1.
(Wiki: Leverage, sometimes referred to as gearing in the United Kingdom, or solvency in Australia)
would be much better than the highly complex rules of Basel III (Global regulatory standards on bank capital adequacy, stress testing and market liquidity risk. Scheduled to be introduced from 2013 until 2018. Developed in response to the deficiencies in financial regulation revealed by the late-2000s financial crisis).
Customers (banks and individuals) could also benefit from using simple heuristics, E.g.:
Don't buy financial products you don't understand.
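King's rule is simple enough to write down directly (a toy sketch with hypothetical balance-sheet numbers; leverage is taken here as total assets over equity capital):

```python
# Toy sketch of Mervyn King's suggested rule (hypothetical numbers;
# leverage taken here as total assets over equity capital).
def leverage_ratio(total_assets, equity):
    return total_assets / equity

def passes_kings_rule(total_assets, equity, cap=10.0):
    """True if leverage is at or below cap:1."""
    return leverage_ratio(total_assets, equity) <= cap

print(passes_kings_rule(total_assets=900, equity=100))   # 9:1 leverage
print(passes_kings_rule(total_assets=4000, equity=100))  # 40:1 leverage
```

The point, of course, is not the code but the contrast: one line of arithmetic versus the hundreds of pages of Basel III.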
Nick Chater continued with a talk about ''Why didn't the markets self-correct?'', seeing the Economic Crisis as an information failure.

Well, why do bad apples remain hidden?
We don't have full inspection. And declaring your apples good is not credible. So, only bad apples are declared (when found). And, as people assume that bad apples correlate with a bad company (if you find one bad apple, you assume that there are probably more) - everyone hides their bad apples.
We end up with an information failure.
Another problem is that people have no idea of absolute risk.
We judge relative risk only! So, when all risks are assessed relatively, people don't notice when absolute risk goes up.
Another information failure.

Finally, Hansjoerg Neth advocated financial literacy (that allows people and society to make informed financial decisions).
Basically, we need more financial wisdom: education about the attitudes, behaviour and rules that will help people and society make good financial decisions...

5. Conclusion.

All brilliant stuff! Certainly, I'm already looking forward to CogSci 2013 in Berlin!


-Simon

Simon Laub
www.simonlaub.net

© August 2012 Simon Laub - www.simonlaub.dk - www.simonlaub.net
Original page design - August 20th 2012. Simon Laub - Aarhus, Denmark, Europe.