Impressions and Links from
Areadne 2014

Research in Encoding and Decoding of Neural Ensembles.
Nomikos Conference Centre.
Santorini, Greece.
25 - 29 June 2014.

Santorini, Greece

I had the great pleasure of taking part in Areadne 2014, the fifth AREADNE conference on research in encoding and decoding of neural ensembles.

Below you will find impressions from the conference, and links for further reading.

The Areadne 2014 conference was held at the Nomikos Conference Centre, Santorini, Greece.

I tried to follow as much of the conference as possible. But these notes are, of course, in no way, shape or form complete...
Rather, they were written on conference nights, as my way of keeping track of the events I attended, and of storing links and references for future reference.

But enough disclaimers. On to impressions and links from some of the conference talks and seminars.

Great stuff indeed. And much more AREADNE to look forward to in the coming years!

1. Introduction.

In the conference program the organizers wrote:
One of the fundamental problems in neuroscience today is to understand how the activation of large populations of neurons gives rise to the higher order functions of the brain including learning, memory, cognition, perception, action and ultimately conscious awareness.
... but it remains largely a mystery how collections of neurons interact to perform these functions.
Recent technological advances have provided a glimpse into the global functioning of the brain.
...(and) these methodological advances have expanded our knowledge of brain functioning beyond the single neuron level.
And we are here to learn more...
First and foremost, this conference is intended to bring scientific leaders from around the world to present their recent findings on the functioning of neuronal ensembles.
In Greek mythology Princess Ariadne (Areadne) enabled Theseus to escape the mythical labyrinth by handing him a thread and telling him how to use it effectively.

Simon Laub. Areadne 2014, Thera, Santorini, Greece

2. Impressions from Thursday, June 26th.

2.1. Morning Session.

2.1.1. Gilles Laurent et al., Max Planck Institute for Brain Research. ''Explorations of a Three-layered Visual Cortex''.
The day started with a presentation by Gilles Laurent on how to decipher visual computation in a reptilian cortex.
A three-layer structure with close evolutionary links to the common ancestor of both mammals and sauropsids, probably close to the dorsal pallial structure that existed in that common ancestor some 320 million years ago. This simpler architecture offers unique advantages for deciphering the basic computations of cortical circuits.
The group reported preliminary results on the components, connectivity and plasticity of intact circuits of the turtle visual cortex. (Among many other interesting snippets of info, some rather peculiar, it was mentioned that in vitro preparations of the turtle brain remain vital and spontaneously active for several days after tissue extraction...)
Indeed, an interesting start to the day.

2.1.2. Andreas Tolias, Baylor College of Medicine. ''The structure and function of cortical microcircuits''.
Tolias writes:
Our aim is to understand the rules by which different types of neurons in the neocortex connect to each other and work together to process information.
First, from an anatomical perspective where we are mapping out the detailed wiring diagram of the cortical microcircuit using high-throughput multi-cell patch clamp recording. Second, using electrophysiological and imaging methods we characterize the activity structure of large populations of neurons to understand the nature of the neural code.
This enables them to record the activity of hundreds of cells (up to 500 neurons) in small volumes of cortex, and to characterize microcircuit population activity during visual processing...

And, who knows, move us toward understanding general computing principles in the neocortex:

Hypothesis: Neocortex is comprised of repeating microcircuit motifs that contain numerous cell types which are wired together in a stereotypical manner performing canonical algorithms.

Goal: Discover the canonical computational algorithms that these circuit modules implement
(Not directly said, still: And use these algorithms in new artificial neural net Deep Learning implementations?).


2.2. Afternoon Session.

2.2.1. Carl Petersen, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland. ''Synaptic mechanisms of sensory perception''.
Petersen writes:
to characterize sensory processing in the mouse barrel cortex, a brain region known to process tactile information relating to the whiskers on the snout. Each whisker is individually represented in the primary somatosensory neocortex by an anatomical unit termed a barrel. The barrels are arranged in a stereotypical map, which allows recordings and manipulations to be targeted with remarkable precision
Indeed, interesting.
And even more so, as Petersen moved on:
our experiments provide new insight into sensory perception at the level of individual neurons and their synaptic connections.
More in this Nature article.

Nomikos. Santorini, Greece

3. Impressions from Friday, June 27th.

3.1. Morning Session.

3.1.1. Susumu Tonegawa et al., MIT. ''Engrams for genuine and false memories.''.
According to Richard Semon's ''engram theory'', memories are stored as biophysical or biochemical changes in the brain (and other neural tissue) in response to external stimuli.

Recent studies have found that behavior based on high-level cognition, such as the expression of a specific memory, can be activated in a mammal by physical activation of small subpopulations of brain cells.

Validating the theory:
A recent study, conducted by applying channelrhodopsin-mediated optogenetics to contextual fear memory in mice, validated the engram theory.
Humans are known to form distinct false memories of episodes they never experienced. In order to understand this phenomenon better the authors have investigated a mouse model of false contextual fear memory.

More work on (and a better understanding of) this would allow us to manipulate these memories, and, who knows, switch from fear to pleasure conditioning or vice versa? Well, who knows?

3.1.2. Loren M. Frank, University of California, San Francisco. ''Activation and Reactivation of hippocampal-cortical networks.''.
Memories do, of course, have a profound impact on who we are.
So, again a highly anticipated talk.

The authors write on NCBI (National Library of Medicine):
The hippocampus is critical for forming memories of daily life events, but over time memories can become independent of the hippocampus.
This transition of memory representations from being strongly dependent on the hippocampus to being fully or mostly engrained in cortical networks is termed memory consolidation. According to the influential ''two-stage model'' of consolidation, the first stage occurs during behavior, when the hippocampus rapidly encodes various aspects of the experience via changes of synaptic strengths. In the second stage, during slow-wave sleep and consummatory behaviors, the newly acquired hippocampal information is replayed repeatedly, driving plasticity in neocortex and allowing for the longer-term storage of the memory.
The hippocampus has therefore been referred to as the ''fast learner'', which teaches the cortex, the ''slow learner''.
Here we learned that:
These patterns of coordinated reactivation recapitulate activity patterns seen during behavior and reveal highly specific functional networks of hippocampal-prefrontal neurons. This reactivation is well suited to play an important role in memory storage, memory retrieval and memory-guided decision-making.
NeoCortex (the ''slow learner''): slow plasticity.
Hippocampus (the ''fast learner''): rapid plasticity.

Where we retrieve (from memory) a time-compressed version of the actual events.
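The two-stage picture can be made concrete with a toy model. The sketch below is my own illustration (not the speakers' model, and grossly simplified): a ''fast'' Hebbian store learns a pattern in one shot during behavior, and repeated replay then trains a ''slow'' store with a small learning rate, until the slow store alone can reconstruct the memory from a degraded cue.

```python
import numpy as np

rng = np.random.default_rng(0)

# A binary memory pattern (+1/-1 activity across 20 units).
pattern = rng.choice([-1.0, 1.0], size=20)

# Stage 1 (behavior): one-shot Hebbian storage in the fast learner.
W_fast = np.outer(pattern, pattern) / len(pattern)

# Stage 2 (sleep): the fast store replays the pattern repeatedly;
# the slow store integrates each replay with a small learning rate.
W_slow = np.zeros_like(W_fast)
eta = 0.05
for _ in range(100):
    replay = np.sign(W_fast @ pattern)            # fast store reinstates the pattern
    W_slow += eta * np.outer(replay, replay) / len(replay)

# After consolidation, the slow store alone reconstructs the memory
# from a degraded cue (25% of the cue corrupted).
cue = pattern.copy()
cue[:5] *= -1
recalled = np.sign(W_slow @ cue)
print(np.mean(recalled == pattern))               # → 1.0
```

The design mirrors the quoted two-stage model only loosely: rapid one-shot plasticity plays the role of the hippocampus, slow incremental plasticity the role of neocortex, and the replay loop stands in for slow-wave-sleep reactivation.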

3.1.3. Jennifer Raymond, Stanford University. ''Understanding both enhanced and impaired learning with enhanced plasticity: A saturation hypothesis.''.

People have always dreamed of finding a way to speed up learning:
The prospect that learning might be enhanced by enhancing synaptic plasticity has captured the imagination of many, fueled by reports of super cognition in transgenic mice with enhanced synaptic plasticity
Learning is a somewhat complicated thing though. In eLife 2017;6:e20147 the authors write:
Thus, the capacity for new learning is determined by a dynamic interplay between the threshold for synaptic plasticity and the recent history of activity.
Recent events may bring about saturation:
In particular, our results suggest that in mice with a low threshold for associative synaptic plasticity, saturation of the plasticity and occlusion of further learning may occur.
Our results indicate that the recent history of activity in a circuit is critical in determining whether an enhanced plasticity rate or saturation dominates the capacity of a circuit for new learning.
The authors conclude:
we established that the same individual animals with enhanced plasticity can respond to the same behavioral training with either enhanced or impaired learning, depending on the recent history of experience. Thus, the capacity for new learning is determined by a dynamic interplay between the threshold for synaptic plasticity and the recent history of activity.
Indeed, it appears to be a rather complex mechanism. In the authors words:
One can speculate that in each brain area, neural circuits have evolved to optimize the threshold for plasticity to delicately balance the need to prevent inappropriate inputs from triggering and saturating plasticity, while allowing the appropriate inputs to drive learning.
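The saturation idea can be sketched numerically. The toy model below is my own illustration, not the authors' model: each synapse potentiates whenever drive exceeds a plasticity threshold, up to a ceiling. A lowered (''enhanced plasticity'') threshold lets ordinary background activity saturate the synapse first, occluding the subsequent training.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn(w, drive, threshold, rate=0.05, ceiling=1.0):
    """Potentiate whenever drive exceeds the plasticity threshold, capped at a ceiling."""
    for d in drive:
        w = min(ceiling, w + rate * max(0.0, d - threshold))
    return w

history = rng.uniform(0.0, 1.0, 100)    # recent background activity
training = rng.uniform(0.5, 1.0, 20)    # subsequent associative training

gains = {}
for threshold in (0.6, 0.1):            # normal vs. lowered (''enhanced'') threshold
    w_after_history = learn(0.0, history, threshold)
    w_after_training = learn(w_after_history, training, threshold)
    gains[threshold] = w_after_training - w_after_history

# With the low threshold, background history alone drives the synapse
# to its ceiling, so the new training produces almost no further change.
print(gains[0.6] > gains[0.1])          # → True
```

This matches the authors' conclusion in spirit: whether enhanced plasticity helps or hurts depends on the recent history of activity, not on the plasticity rate alone.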

3.2. Afternoon Session.

3.2.1. Ken Britten et al., University of California, Davis. ''Cortical representation of cues used in visually guided steering.''.

When a monkey experiences optic flow, the pattern of motion that results from moving through an environment, we can investigate the relationship between signals in various brain areas and the percept the animal reports.

Britten writes:
We can ask if the activity in particular areas predicts the animal's perception on a trial-to-trial basis.
(and) we can make minuscule perturbations in the activity of these areas as our subjects perform the task, and look for systematic changes in the reported percepts.
In the end we might be able to:
...formulate biologically realistic, circuit-based models which are consistent with our measurements of the quantitative relationship between neurophysiology and perception.
One experiment especially, reported here, seems to suggest that it might be possible to directly link neurophysiology and perception. According to Britten:
...a few neurons approach the precision of the behavior. This suggests that relatively simple population readout would allow the signals in MST to support active steering behavior.
3.2.2. Hanlin Tang and Gabriel Kreiman et al., Harvard Medical School. ''Visual object completion in the human brain.''.

In natural viewing conditions, we often only see parts of an object due to limited viewing angles, poor luminosity or occlusion.

Understanding how this is possible is, of course, a pretty tricky thing, as (in the authors words):
recognition of objects from partial information is a difficult problem for purely feed-forward architectures and may involve significant contributions from recurrent connections
The authors write:
Recognition of whole objects has been successfully modeled by purely bottom-up architectures ...
(but) those models struggle to identify objects with only partial information. Instead, computational models that are successful at pattern completion involve recurrent connections. Different computational models of visual recognition that incorporate recurrent computations include connections within the ventral stream and/or from pre-frontal areas to the ventral stream.
3.2.3. Brian A. Wandell, Stanford University. ''Measuring activity, connections and tissue properties in the living human brain''.

Quite funnily, Wandell started out by saying that ''All models are wrong, but this one is useful''.

Wandell describes fMRI here:
fMRI measures spatial variations in blood oxygenation levels across the brain. The oxygenation levels, in turn, are correlated with neural responses.
He also gives diffusion tensor imaging a go:
A second MR technique, diffusion tensor imaging, provides another valuable measurement of the brain. While fMRI provides information about brain signals, DTI provides insight about the properties of brain structures. DTI has been particularly important in determining which parts of the brain are connected to one another

Diffusion tensor imaging

And these combined structure and function measurements on awake human brains are, of course, a relatively new thing, one that has allowed us to refine our theories about how the brain actually works...

But there is, of course, still a long way to go before we get it all right....
Jokingly, Wandell asked us to consider whether white matter actually controls gray matter in the brain.

Clearly, the brain is still to a large extent an unsolved mystery...

4. Impressions from Saturday, June 28th.

4.1. Afternoon Session.

4.1.1. EJ Chichilnisky, Stanford University. ''Functional circuitry of the primate retina at cellular resolution''.
Chichilnisky talked about results that begin to reveal a fuller picture of the functional connectivity of the retina, from input to output via the interneuron layer, at cellular resolution.

mapping the flow of signals between the input and output layers at cellular resolution, and using these maps along with closed loop experiments and computational methods to infer interneuron connectivity
Clearly, important stuff, if we ever want to get an idea of what is actually going on...
And, obviously, also an interesting talk.

Neurons, artistic impression

4.1.2. Tatyana Sharpee et al, Salk Institute for Biological Studies, La Jolla, California, USA. ''Coordinated encoding between cell types in the retina: Insights from the theory of phase transitions''.
In PNAS the authors write:
Information theory has offered explanations for how different types of neurons can maximize the transmitted information by encoding different stimulus features
Rather surprisingly, the authors described a model that borrows ideas from how we describe different phases of matter (my understanding of what they said...):
This allows the circuits to operate near a critical point, where neural circuits can maximize information transmission in a given environment while retaining the ability to quickly adapt to a new environment.
4.1.3. Saturday's poster session.
In the afternoon poster session I was particularly interested in David Pfau's (on Twitter) poster
- ''Whole-brain region of interest detection''.

In his thesis David writes:
If one could look into a brain with perfect electrical knowledge, one would see a constant stream of spikes. Somehow from this pattern of spiking, the full repertoire of perception, cognition and behavior emerges. Yet given just this stream of information, without prior knowledge of the biological structure behind it, it is extremely difficult to glean any understanding of the underlying computation. This thesis addresses a number of issues in how to learn from time series data when one does not know the structure of the process that generates it.
Finally we address some of the practical issues in experimental neuroscience today around making the dream of recording the activity of every neuron in a living brain a reality - namely, how to segment the activity of different neurons automatically at the scale now becoming possible in experimental neuroscience.
Here, it was also about picking up the relevant signals:
Advances in neuroscience are producing data at an astounding rate - data which are fiendishly complex both to process and to interpret. Biological neural networks are high-dimensional, nonlinear, noisy, heterogeneous, and in nearly every way defy the simplifying assumptions of standard statistical methods.
understanding the structure of neural populations, from the abstract level of how to uncover structure in generic time series, to the practical matter of finding relevant biological structure in state-of-the-art experimental techniques.

Super interesting stuff.

David about David:
Research scientist at Google DeepMind. We're on a mission to solve artificial general intelligence. My own research interests span artificial intelligence, machine learning and computational neuroscience.
Btw., on David's homepage, David Pfau (''I study brains and intelligence''), there is a ton of great material.
Including great links (to all sorts of things). New to me were Anna Ridler's AI tulips (see her Twitter account).

Anna Ridler's AI tulips

The (personal) stuff on ''self optimization'' is also quite interesting.
David's personal Github is here.

Santorini Pics. Saturday, June 28th, 2014. Santorini, Greece.

Santorini, Greece

5. Impressions from Sunday, June 29th.

5.1. Morning Session.

5.1.1. Eve Marder, Brandeis University. ''Parallel pathways, multiple solutions, degenerate neuromodulation, and robustness of circuit performance.''.
This talk started on a happy note. Here, we were to hear about the crustacean stomatogastric ganglion (STG), a small circuit of about 30 neurons.
Something simple at last!

Before the talk, I read on Scholarpedia that these guys control muscles that dilate and constrict the pyloric region of the stomach in a cyclic three phase rhythm.

After endless complexity, it was really nice to read something like:
because there are only a few neurons in each circuit the entire ''wiring diagram'' for the ganglion has been determined...
Hurray ...

And then the talk started ...

Understanding how the rich STG dynamics arise from the intrinsic properties and the known synaptic connectivity is complicated by the fact that there are more than fifty substances, including amines, amino acids, and neuropeptides, that reach the STG either as neurohormones or from the terminals of descending fibers
Apparently, there really is no end to the complexity of our world.
But, certainly, an interesting talk.

5.1.2. Daniel Margoliash, University of Chicago. ''NeuroDynamics in bird song motor production''.
Birdsong is a rather strange phenomenon.

In the introduction to the talk, one could read that:
Its widespread importance in courtship and territorial behavior has subjected song to strong sexual and natural selection.
A working hypothesis was presented. See article. Here, bird song was described as ''a production in terms of nonlinear dynamical systems behavior of populations of coupled oscillators''...

In the end, the hypothesis should bring us closer to an understanding of how neural activity in the brain is shaped by learning and transformed into song.
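The ''populations of coupled oscillators'' language can be made concrete with the standard Kuramoto model. The sketch below is a generic illustration of phase-locking in such populations (my own example, not the model from the article): above a critical coupling strength, oscillators with different natural frequencies lock into a single coherent collective rhythm.

```python
import numpy as np

rng = np.random.default_rng(0)

n, K, dt, steps = 100, 4.0, 0.01, 5000
omega = rng.normal(0.0, 0.5, n)          # heterogeneous natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, n)   # random initial phases

def coherence(theta):
    """Kuramoto order parameter: 0 = incoherent, 1 = fully phase-locked."""
    return abs(np.mean(np.exp(1j * theta)))

r_start = coherence(theta)

# Mean-field Kuramoto dynamics: each oscillator is pulled toward the
# population's mean phase with strength proportional to its coherence.
for _ in range(steps):
    z = np.mean(np.exp(1j * theta))
    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))

r_end = coherence(theta)
print(round(r_start, 2), round(r_end, 2))   # low coherence -> high coherence
```

In the birdsong hypothesis, the coherent rhythm that emerges from such coupling would correspond to the stereotyped temporal structure of the song, shaped by learning.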

5.1.3. Mark Churchland, Columbia University Medical Center. ''Many movements, one trigger''.

On Columbia's homepage one reads:
Mark Churchland studies movement and how the brain prepares to act, work that could one day help treat people with movement disorders.
Some of the research is rather philosophical:
If you are thirsty and reach for a glass of water, that is a voluntary movement. A knee-jerk reaction is not. What about grabbing a falling mug, is that voluntary? Philosophers have long debated what kind of actions we freely choose versus those that are reflexive...
The page continues:
Studying how the brain prepares for movement offers a peek at a computational process that is normally hidden from direct observation but that has immediate consequences,
Clearly, interesting stuff!

In the introduction to the talk Churchland writes:
The transition from preparatory to movement-related activity can be modeled as a sudden change in the dynamics of the system. An open question is: what causes this sudden change, and in doing so triggers the movement?
One hypothesis predicts a release of inhibition just before movement onset.

But that is apparently not right, according to Churchland (APS):
No clear subset that might implement an inhibitory gate was observed. Together with previous evidence against upstream inhibitory mechanisms in premotor cortex, this provides evidence against an inhibitory ''gate'' for motor output in cortex.
Interestingly, Churchland writes:
Neural activity in monkey motor cortex (M1) and dorsal premotor cortex (PMd) can reflect a chosen movement well before that movement begins. The pattern of neural activity then changes profoundly just before movement onset
Neural signals are tricky things though.

Churchland writes:
Yet neural signals can also relate to ''internal'' factors: the thoughts and computations that link perception to action.
We characterized a neural signal that occurs during the transition from preparing a reaching movement to actually reaching. This neural signal conveys remarkably accurate information about when the reach will occur, but carries essentially no information about what that reach will be.

The identity of the reach itself is carried by other signals. Thus, the brain appears to employ distinct signals to convey what should be done and when it should be done.

5.2. Afternoon Session.

5.2.1. Surya Ganguli, Stanford University. ''A theory of neural dimensionality, dynamics and measurement''.

In real experiments one can only measure an infinitesimal fraction of the relevant neurons.
Indeed, Ganguli writes:
Namely, how can we record on the order of hundreds of neurons in regions deep within the brain, far from the sensory and motor peripheries, like mammalian hippocampus, or prefrontal, parietal, or motor cortices, and obtain scientifically interpretable results that relate neural activity to behavior and cognition?
Sure, we could simply wait until we can record more neurons...
The number of neurons we can simultaneously record has been growing for a long time, with a doubling time of about 7.4 years since the 1960s.

Good, but then we will still have to wait about 100 to 200 years to record 4 to 7 orders of magnitude more neurons.
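The 100-to-200-year figure follows directly from the doubling time. A quick back-of-the-envelope check (my arithmetic, using the 7.4-year doubling time quoted above):

```python
import math

doubling_time = 7.4  # years per doubling of simultaneously recorded neurons

def years_to_scale(factor):
    """Years needed to multiply the recorded-neuron count by `factor`."""
    return math.log2(factor) * doubling_time

print(round(years_to_scale(1e4)))  # → 98 years for 4 orders of magnitude
print(round(years_to_scale(1e7)))  # → 172 years for 7 orders of magnitude
```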

We need a theory of neural measurement that addresses a fundamental question: how and when do statistical analyses applied to an infinitesimally small subset of neurons reflect the collective dynamics of the much larger, unobserved circuit they are embedded in?
Maybe, Ganguli indicates, it is possible to relate such measurements to behaviour and cognition, because:
A natural hypothesis is that for a wide variety of tasks, neural dimensionality is much smaller than the number of recorded neurons because the task is simple and the neural population dynamics is smooth.
Moreover, according to Ganguli, their (new) theory shows that:
...through the theory of random projections, that the number of neurons we need to record to accurately recover dynamical portraits need only grow logarithmically with the neuronal task complexity.
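The claim is easiest to appreciate with a small simulation. The sketch below is my own gloss on the random-projection intuition, not the authors' code: if population activity is driven by a few smooth latent variables, then even a small random subset of neurons captures the same low-dimensional structure as the full population.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K, T = 10000, 5, 200                 # neurons, latent dimensions, time points

# Smooth low-dimensional latent dynamics (a random walk), embedded
# linearly into a large population of neurons.
latent = np.cumsum(rng.standard_normal((T, K)), axis=0)
embedding = rng.standard_normal((K, N)) / np.sqrt(K)
population = latent @ embedding          # full population activity (T x N)

# "Record" only 100 random neurons out of 10,000.
recorded = population[:, rng.choice(N, size=100, replace=False)]

# The recorded subset spans the same low-dimensional space as the
# full population: both have rank K, not rank min(T, #neurons).
rank_full = np.linalg.matrix_rank(population)
rank_rec = np.linalg.matrix_rank(recorded)
print(rank_full, rank_rec)               # → 5 5
```

Recovering the latent dimensionality from 1% of the neurons is the (linear, noiseless) cartoon of why the required number of recorded neurons can grow only logarithmically with task complexity.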


5.2.2. Philip Sabes, University of California, San Francisco. ''Unsupervised learning in parietal cortex: From theory to novel brain-machine interface''.

On Philip Sabes' homepage one reads:
The ability to flexibly and adaptively integrate information from a variety of sources is a fundamental feature of brain function, from higher cognition to sensory and motor processing. Even a simple behavior such as reaching to a target relies on the integration of multimodal sensory signals and, moreover, exhibits rapid adaptation in response to changes in these signals.
In this talk we heard about experiments, where (e.g.):
Animals were trained to perform a reaching task under the guidance of visual feedback. They were then exposed to a novel, artificial feedback signal in the form of a non-biomimetic pattern of multielectrode intracortical microstimulation (ICMS). After training with correlated visual and ICMS feedback, the animals were able to perform precise movements with the artificial signal alone.

5.2.3. Closing remarks, see you in 2016, and beyond...
After the closing remarks, it was time to say goodbye.

And wear your great new conference Santorini T-shirt:

Santorini Greece Conference T-shirt

And say goodbye to the great island of Santorini.

Thera, Santorini, Greece


A super great conference!

Nomikos conference center. Thera, Santorini, Greece

June 2014, Santorini Pics.
© September 2014 Simon Laub
Original page design - September 10th 2014. Simon Laub - Aarhus, Denmark, Europe.