How Intelligence Happens.

Our Inner Worlds.

Amazon review (5 stars out of 5)
of John Duncan's book ''How Intelligence Happens''.

April 11th, 2012 by Simon Laub.


Certainly, intelligence is a very interesting and important subject.
Nevertheless, we still have no clear definition of exactly what intelligence is!

However, according to British psychologist Charles Spearman, people who do well on one mental task tend to do well on others. So, Spearman concluded that intelligence is a general cognitive ability that can be measured.
John Duncan's book takes off from there, and then ventures into these unknown brain lands.

It's a very exciting journey, where some parts of the landscape are well described, while other parts are new, uncharted worlds. Wherever we go, John Duncan is obviously a very knowledgeable and competent guide.
In his view, intelligence (organising our behaviour in a logical, goal-directed way) is supported by prefrontal (and parietal) regions of the brain. It follows that the book is very much about giving evidence to support a ''frontal lobe theory of intelligence''.

Sure, it would have been nice with some comments in the book about, e.g., Louis L. Thurstone's theory of intelligence (intelligence seen not so much as a single, general ability, but as seven different abilities: verbal comprehension, reasoning, perceptual speed, numerical ability, word fluency, associative memory and spatial visualization), or Howard Gardner's theory of skills and abilities within different areas (visual-spatial, verbal-linguistic, bodily-kinesthetic, logical-mathematical, interpersonal, musical, intrapersonal and naturalistic intelligence). But, obviously, this is not possible in such a relatively small book.
Instead, the book focuses on the ''frontal lobe view'' of intelligence, based on Spearman's work.
And after reading John Duncans book you are certainly convinced that this is a very important aspect of intelligence!

It all starts with Charles Spearman ->
According to Spearman's ''g factor'' theory, for each task we might undertake, there are two kinds of brain contribution. Performance is influenced by a) the g factor, our general ability to do many things well, and b) the s factors, individual skills and aptitudes relevant for specific activities.
It seems logical that the s factors might map quite well onto brain modules known from various kinds of brain imaging (such as the language system). And, in ''How Intelligence Happens'', John Duncan makes a convincing case that frontal lobe functions must be a major part of g.

The G Factor:
According to Spearman's ''g factor'' theory, a person's g factor exerts a major influence on what that person (can) do.
It influences how people (can) act, and hence their lives.

In humans, g is a measure of general cognitive ability. For unskilled jobs, g correlates about 0.2 to 0.3 with later productivity. For a job of average complexity, the correlation is around 0.5, and for the most complex jobs, it approaches 0.6.
(And, by the way, the more people want a particular job, the more that attractive job is actually populated by people with high g scores.)

Summarized, Spearman's theory of g:


When Alfred Binet designed the first usable intelligence tests, it was done with no real theoretical justification.

What tasks should be included in a valid ''intelligence test'' ?
Memory? Reasoning? Speed? And in what proportions?

Spearman's theory of g says it doesn't really matter:
Any sufficiently diverse set of tasks will end up measuring g.
The g factor is a construct developed in psychometric investigations of cognitive abilities. It was originally proposed by the English psychologist Charles Spearman. He observed that schoolchildren's grades across seemingly unrelated subjects were positively correlated, and reasoned that these correlations reflected the influence of an underlying general mental ability.
(See also: g factor (psychometrics).)

In ''How Intelligence Happens'' John Duncan writes:
[p.32] ''Suppose we measure not just academic ability or sensory discrimination, but any mental ability or achievement - speed of decision, ability to remember, ability to solve technical problems, musical or artistic ability - anything at all -
For each one of these abilities, Spearman proposed that there will be two sorts of contribution. One is some contribution from a general factor (or g) in each person's makeup - something the person uses in anything he or she undertakes.... Second, there will be contributions from one or more specific factors (s factors) - individual skills...with little or no influence on other activities.
''
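
To make Spearman's two-factor idea concrete, here is a minimal sketch of my own (not from the book; it assumes Python with NumPy, and all loadings and test names are made up): each simulated test score is a shared g contribution plus a task-specific s contribution. Run it, and you see the two consequences discussed above: scores on quite different tests all end up positively correlated, and a single extracted factor (the first principal component) tracks the underlying g closely.

import numpy as np

# Toy model: every test score = a shared 'g' contribution + a task-specific 's' contribution.
rng = np.random.default_rng(0)
n_people, n_tests = 1000, 6

g = rng.normal(size=n_people)                    # general factor, one value per person
loadings = rng.uniform(0.4, 0.8, size=n_tests)   # how strongly each test draws on g (made up)
s = rng.normal(size=(n_people, n_tests))         # specific factors, one per person per test

scores = g[:, None] * loadings + s               # Spearman: performance = g part + s part

# Consequence 1: the 'positive manifold' - all tests correlate positively.
corr = np.corrcoef(scores, rowvar=False)
print("All off-diagonal correlations positive:",
      bool((corr[~np.eye(n_tests, dtype=bool)] > 0).all()))

# Consequence 2: the first principal component of the scores tracks g.
eigvals, eigvecs = np.linalg.eigh(corr)
pc1 = scores @ eigvecs[:, -1]
print("Correlation of first principal component with g:",
      round(abs(np.corrcoef(pc1, g)[0, 1]), 2))

With these invented loadings, the printed correlation typically comes out around 0.8 or higher - a small-scale echo of Spearman's point that any diverse battery of tests ends up measuring g.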

Demanding Tasks:
[p. 93] Basically: ''Anything we do involves a complex, coordinated structure of thought and action.
For any task, this structure can be better or worse - the structure is optimized as we settle into the task.
''
The Russian neuropsychologist Alexander Luria and others have suggested that the brain's frontal lobes are essential in this process (finding the best coordinated structure of thought and action). And, certainly, the literature on frontal lobe patients shows deficits in all sorts of cognitive tests: perception, maze learning, response control, etc.

Altogether, it leads John Duncan to the conclusion that frontal lobe functions must be a major part of g.
More specifically: [p. 104] ''Three regions seem to form a brain circuit that comes online for almost any kind of demanding cognitive activity''. For any given demand, this general circuit will be joined by brain areas specific for the particular task.
(The general circuit, or multiple demand circuit, is described in the book as a band of activity on the lateral frontal surface, towards the rear of the frontal lobe, close to the premotor cortex; plus a band of activity on the medial frontal surface; plus activity in a region across the middle of the parietal lobe.)

High g tasks will not necessarily show activity all over the cerebral cortex, but one would expect patterns of activity on the multiple demand circuit.
It follows, for people with damage to the frontal lobes [p. 110]: ''The bigger the damaged area, the worse the impairment'' and ''Damage within the multiple demand regions is more important than damage outside them.''

Problem Solving:
[p. 139] ''Problem solving is a question of search, in the knowledge store (of a computer).''
The whole trick is to find the right facts: to divide a given problem into just the right subproblems, and in this way navigate the right path to a solution.
Obviously, when the right knowledge is not used, it is pretty easy to go wrong: to enter the wrong path, to go around in circles, or to be completely derailed.
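
As a toy illustration of this ''search in a knowledge store'' picture (a sketch of my own, not from the book; the goals, facts and rules are invented), a goal can be solved either directly by a stored fact, or by being divided into subgoals via a stored rule. Choosing the right rule is choosing the right subproblems; with no relevant knowledge the search dead-ends, and revisiting the same goal is going around in circles.

# Invented example: solve a goal by looking it up as a fact, or by splitting it
# into the subgoals given by a rule in the knowledge store.
knowledge = {
    "facts": {"have_flour", "have_eggs", "have_oven"},
    "rules": {                                   # goal -> subgoals needed to reach it
        "bake_cake": ["make_batter", "have_oven"],
        "make_batter": ["have_flour", "have_eggs"],
    },
}

def solve(goal, depth=0, seen=frozenset()):
    print("  " * depth + "trying: " + goal)
    if goal in seen:                             # going around in circles: wrong path
        return False
    if goal in knowledge["facts"]:               # solved directly by a stored fact
        return True
    subgoals = knowledge["rules"].get(goal)
    if subgoals is None:                         # no relevant knowledge: dead end
        return False
    return all(solve(sub, depth + 1, seen | {goal}) for sub in subgoals)

print("solved:", solve("bake_cake"))             # True: the right subproblems were found
print("solved:", solve("bake_bread"))            # False: the needed knowledge is missing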

Also central to human intelligence is abstraction. Here, frontal cells have some quite neat properties [p. 164]: ''Each cell's firing has no fixed meaning. Instead its meaning is held only in the context of the current task. When the context changes, the cell's activity changes too.''
Meaning that frontal cells can work together with different areas of the brain.
Indeed, many different inputs, from many systems in the brain, converge onto the prefrontal cortex.
[p. 171] ''(in the prefrontal cortex) apparently what cells do is adapt to code just what information belongs in the current task - just those things, whatever they are, that belong in the current focus of attention.''
[p. 172] ''The prefrontal cortex is special not only in the breadth of its input, but also in the breadth of its output to other parts of the brain... As a cognitive enclosure is established, just the information that belongs in this enclosure is coded in the prefrontal cortex. At the same time many other parts of the brain follow suit. Visual cells coding relevant visual events, auditory cells coding relevant sounds... the whole body of work is integrated. The cognitive enclosure is assembled; the mental program is kept on track.''
A mental program is constructed of just the right segments and assembled into just the right structure. Cognitive enclosure, cognitive focus, is essential to effective thought. Just small amounts of information should be assembled to solve a small problem. The right parts.
By systematic solution of focused subproblems, we achieve effective, goal directed thought and behaviour.
The flip side of this is, of course, that in enclosed thinking, ideas compete for consideration, and essential knowledge sometimes remains dormant and ignored.
[p. 209] ''At any one level of difficulty, it is the people with the lower g scores who struggle most. Experiments link g to a specific limit on mental capacity: the complexity, or the number of cognitive enclosures to be assembled, in a new mental program.''
The flexibility of the frontal system might also explain why this system is sometimes very resistant to damage [p. 178]: ''A system that is characteristically flexible may have a natural defense against damage. When the best cells for some operation are damaged, the system may respond by drafting others into service. In some patients, this seems to work almost perfectly, and large amounts of tissue are lost with little apparent effect. Of course, we have no clue why this should be possible for one person and apparently impossible for another.''

When tasks become familiar (repetitive and automatic), functional MRI experiments show reduced activity across the multiple demand system. When subjects make much the same response to the same stimuli and events, there is no need to assemble new mental structures.

Conclusion:
So, in the end, John Duncan concludes: at the heart of g lies the multiple demand system and its role in the assembly of mental programs [p. 216]: ''In any task, no matter what its content, there is a sequence of cognitive enclosures, corresponding to the different steps of task performance. For any task, the sequence can be composed well or poorly. In a good program, important steps are cleanly defined and separated, false moves avoided. If the program is poor, the successive steps may blur, become confused or mixed.
In the disorganized behaviour of frontal lobe patients, we see this risk in all tasks, in all they do.
''
Certainly, it takes constant effort and vigilance to keep thought and behaviour on track...

G and age:
And this g world view is not only for the young and fast. John Duncan asks us to remember that:
[p. 219] ''The younger person is faster and sharper. But the older person has a richer long-term memory of important facts, relationships and potential solutions. As problems are solved, the richer long-term memory gives more knowledge to work with.
We do not store unstructured experience; we store the products of our own thoughts, our own interactions with our world.
The wisdom of age evolves, rather immediately and directly, from the intelligence of youth. A lifetime of clean, well-defined cognitive enclosures is a lifetime of learning, not just facts, but useful facts.
''

Still, in the end, all we have are models in our brains about what is out there:
[p. 223] ''As we seek the laws of the universe, we may remember that each law is also just a human idea. Potentially, it is shaped not just by the universe it describes, but by the mind that conceives it.''
Indeed, ''perhaps we can never see beyond our own biological boundaries.''...

The book is a wonderful read, with many new pieces of information and thought-provoking ideas.
But, obviously, as in all good books, in the end there are just as many questions as there are answers.
In the end we are still faced with hard problems.

The Hard Problem:
[p. 147] ''Philosophers of consciousness call the contents of different experiences their qualia. They refer to the problem of qualia as the hard problem. A hard problem sounds like a challenge that we know is hard but intend to meet;
I would prefer to hear qualia described as the problem that has us so befuddled that we don't even know what kind of problem it is...
''

Amazon: [1].
Wordpress: [2].


-Simon

Simon Laub
www.simonlaub.net


The Human Journey.

The human story is a long one, full of mysteries.
The March 24th 2012 issue of NewScientist gives us some highlights from the long journey:

- 8 to 6 million years ago: Last common ancestor of chimps and humans.
- 6 to 4.2 million years ago: Origin of bipedalism.
- 4 million years ago: Australopithecines appear (brain volume: 400 - 500 cm^3).
- 2.6 million years ago: Oldest known stone tool.
- 2 million years ago: Homo Erectus (brain volume: 850 cm^3).
- 1.6 million years ago: First use of fire.
- 600.000 years ago: Homo Heidelbergensis, capable of speech (brain volume: 1200 cm^3).
- 200.000 years ago: Homo Sapiens (brain volume 1300 cm^3).
- 125.000 years ago: Humans' first attempt at leaving Africa.
- 70.000 years ago: Evolution of the body louse (origins of clothing?).
- 50.000 years ago: Human cultural revolution.
- 24.000 years ago: Neanderthals go extinct.

Certainly, much is still shrouded in mystery. But I definitely enjoyed the issue's remarks about, e.g., bipedalism, technological advances, language development and the spread of humanity around the globe. Lots of good insights:

- There are many good reasons why evolution came up with bipedalism:
Bipedalism leaves the hands free to carry things, and, being taller, we are better able to spot predators. Actually, there might have been a whole package of advantages, but perhaps the biggest advantage is travel efficiency and travel distance.
According to NewScientist, one study even suggests that we are naturally adapted to endurance running...

- Why was technological development so slow? Some of the oldest stone tools yet found come from the Afar region of Ethiopia. They date from 2.6 million years ago. But it would be another million years before our ancestors made their next technological breakthrough!
According to the NewScientist article, the early technological advances depended on novel perceptual-motor capabilities, so great cognitive advances were needed - which goes a long way towards explaining why it took so long.
Also important: Modern humans have large populations with lots of people copying, and lots of ways to pass on information.
Our long lives also permit the transfer of ideas down the generations - something which Homo Erectus (maximum lifespan of 30 years) and Neanderthals (maximum lifespan of 40 years) couldn't do.

Besides, life might have been challenging enough without risky experimentation with new tools.

- Robin Dunbar suggests that hominin voices might have evolved to sing by the campfire:
Like birdsong, they would not have carried much specific information, but the activity would have been important for group bonding. Only later came other uses, like language with content, according to Dunbar.

- Travelling around the world came later, driven by an increase in technological, economic, social and cognitive behaviour.
All part of a period that saw a blossoming of innovation such as the manufacture of complex tools, efficient exploitation of food sources, artistic expression, and symbolic ornamentation.
Genetic mutations might also have made us more adventurous (e.g. the novelty-seeking gene variant DRD4-7R is more common in populations far from Africa).

What a trip!


-Simon

Simon Laub
www.simonlaub.net



Memories and Engrams.

An Engram is the physical substrate of a particular memory.

Until recently, it was thought that engrams existed only as vast webs of connections - not in a particular place, but in distributed neural networks throughout the brain.
But, at least in mice, some memories appear to be confined to specific, identifiable neurons.

According to Discover Magazine, Alcino Silva, of UCLA's Integrative Center for Learning and Memory, and colleagues have shown that they can largely eliminate a mouse's fearful response to a tone the animal has previously learned to associate with an unpleasant electric stimulus.
After the memory elimination, the mouse no longer remembers the lesson it previously learned: that this particular tone was a prelude to getting zapped.

According to Karl Lashley (1950), there are no special cells reserved for special memories.
An engram is represented throughout a region (of the brain).
But, in 1984, Richard F. Thompson ''demonstrated that after he trained rabbits, and then surgically removed just a few hundred neurons from the interpositus nucleus (a part of the cerebellum), the animals no longer blinked in response to the tone (that they previously blinked in response to).''
I.e. Thompson found an engram encoding the association between the puff of air, the tone, and the eyeblink - showing for the first time that the destruction of a particular set of neurons could wipe out a memory.

According to Silva: ''Memory is not all that we are, but almost. We are the entire set of memories that we acquire. Every one of these memories changes who we are.''
So, manipulating memories is a big thing.

More complex memories, like a human recollection of an event, are stored in many different areas of the brain. Here, Karl Lashley's belief in memory as distributed is alive and well.
But some parts of a memory might still be targeted - they do exist in a discrete number of neurons (according to the Discover article, April 2012).
For treating (say) PTSD, it is not necessary to take away the entire memory, only the part that leaves the patient disabled with fear. A part that might some day be targeted inside the amygdala...

And there is more: the transcription factor CREB has a well-documented role in neuronal plasticity and long-term memory formation in the brain.
According to Silva, it is now possible (under certain circumstances) to manipulate CREB to funnel memories into specific cells.
In mice with the equivalent of Alzheimer's disease, CREB has been injected into the animals' hippocampi using an engineered herpesvirus, with the result that the mice regained their ability to learn...

All steps on the way to understanding learning and memory in human brains.

For more about memories, see my course notes (pdf) (d).


-Simon

Simon Laub
www.simonlaub.net



Inner Worlds - Religion comes as naturally to us as language?

The March 17th 2012 issue of NewScientist has an interesting article about religion.

According to Justin L. Barrett, we are born believers, because we are inclined to find religious claims and explanations attractive and easily acquired.
It's an evolutionary byproduct of our ordinary cognitive equipment. It doesn't tell us anything about the truth or otherwise of religious claims, but it might help us see some religious phenomena in a new light.

I.e. ''From birth, children show certain predilections in what they pay attention to, and what they are inclined to think.''
Indeed, as soon as they are born, babies try to make sense of their world. And their minds show certain tendencies.
One tendency is the ability to recognise the difference between ordinary physical objects and ''agents''.
Physical objects must be contacted in order to move. But agents can move by themselves.
Agents act to attain goals. Agents need not be visible (in order to function in social groups, we must be able to reason about agents we cannot see). These tendencies, coupled with other cognitive tendencies, such as the search for purpose, make children highly receptive to religion, according to Justin L. Barrett.

When it comes to speculation about the origins of natural things, children are very receptive to explanations that invoke design or purpose. The intuition is that order and design require an agent to bring them about.
A ball is a physical object. And experiments with 13-month-old babies suggest that they find a ball creating order more surprising than a ball creating disorder... But paint a face on the ball, and babies can't decide whether it is more surprising if the ball creates order or disorder!

In a series of other studies, small children seem to presume that all agents have superknowledge, superperception and immortality - until they learn otherwise.
''An interpretation of these findings is that young children find it easier to assume that others know, sense and remember everything, than to figure out precisely who knows, senses and remembers what.
The default position is to assume superpowers until teaching or experience tells them otherwise....Some 3-year-olds and 4-year-olds simply assume that others have complete, accurate knowledge of the world. And the children's default position seems to be that others are immortal
....''

Summarized:
''An attraction to agent-based explanations, a tendency to explain in terms of design and purpose, an assumption that others have superpowers - obviously, this makes children receptive to the idea that one or many gods can account for the world around them.''
At least according to Justin L. Barrett.


-Simon

Simon Laub
www.simonlaub.net



Universal Grammar.

Piraha is a language spoken by the Piraha, an indigenous people of the Amazonas, Brazil.
The language has been studied by Daniel Everett (author, academic and former missionary).
The language is interesting, because:
- It has one of the smallest phoneme inventories of any known language (as few as ten phonemes).
- It has an extremely limited clause structure, not allowing for nested recursive sentences,
like ''Mary said that John thought that Henry was fired''.
- Consonants and vowels may be omitted altogether and the meaning conveyed solely through variations in pitch, stress, and rhythm.

Everett claims that the absence of recursion, if real, falsifies a basic assumption of Chomskyan universal grammar (i.e. that there are properties that all possible natural human languages share).
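
What ''recursion'' means here can be shown in a few lines (a sketch of my own, not from the article): a clause-building rule that calls itself, so that one reported clause can be embedded inside another, in principle without limit. It is exactly this kind of self-embedding that Piraha reportedly lacks.

# Illustrative only: a rule that embeds a clause inside a clause, recursively.
def nested_sentence(speakers, core):
    # "X said that (Y said that (... core ...))"
    if not speakers:
        return core
    return speakers[0] + " said that " + nested_sentence(speakers[1:], core)

print(nested_sentence(["Mary", "John"], "Henry was fired"))
# -> Mary said that John said that Henry was fired
print(nested_sentence(["Anna", "Mary", "John"], "Henry was fired"))
# -> Anna said that Mary said that John said that Henry was fired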

So, is the universality of recursion falsified by Piraha?
But, in an interview with NewScientist, (March 17th, 2012), Chomsky says he doesn't think this is much of a problem:
''These people are genetically identical to all other humans with regard to language.
They can learn Portuguese perfectly easily, just as Portuguese children do. So, they have the same universal grammar the rest of us have
.''
Apparently, for Chomsky, universal grammar is simply: ''..there is some genetic factor that distinguishes humans from other animals, and that it is language-specific. The theory of that genetic component, whatever it turns out to be, is what is called universal grammar.''

Obviously, such a definition of universal grammar leads to discussions: e.g. Geoffrey Sampson maintains that universal grammar theories are not falsifiable and are therefore pseudoscientific. I.e. are the grammatical ''rules'' that linguists posit simply post-hoc observations about existing languages, rather than predictions about what is possible in a language?
Indeed, according to Wikipedia: A growing number of language acquisition researchers argue that the very idea of a strict rule-based grammar in any language flies in the face of what is known about how languages are spoken and how languages evolve over time.

Chomsky himself seems to be very calm about it: sure, cognitive systems are very complex and difficult to investigate, so we had hoped to get some insight from language studies (where language is one component of the human cognitive capacity, which happens to be fairly amenable to enquiry). But perhaps it is all just too complex anyway...
Indeed, many components of human nature are just too complicated to be really researchable. According to Chomsky:
''That's a pretty normal phenomenon. Take, say, physics, which restricts itself to extremely simple questions. If a molecule becomes too complex, they hand it over to chemists. And if it becomes too complex for them, they hand it over to biologists.
And if the system becomes too complex for them, they hand it over to psychologists... and so on, until it ends up in the hands of historians or novelists.
As you deal with more and more complex systems, it becomes harder to find deep and interesting properties....
''


-Simon

Simon Laub
www.simonlaub.net





© April 2012 Simon Laub - www.simonlaub.dk - www.simonlaub.net
Original page design - April 10th 2012. Simon Laub - Aarhus, Denmark, Europe.