...The invited talks were meant to span a broad range of perspectives on deep learning and the brain, and concentrated mostly on visual processing. Visual object recognition is the area most studied in prior deep learning work, both in machine learning and cognitive science, and hence makes a natural first focus for a workshop. First up was Tom Griffiths, Berkeley, who talked about ''Combining deep networks and Bayesian inference''.
A human brain can learn high-level abstractions if guided by the signals produced by other humans, which act as hints or indirect supervision for these high-level abstractions. Obviously, this has implications for AI research. In Yoshua Bengio's words:
AI learning:
- Collections of learning agents building on each other's discoveries to build up towards higher-level abstractions.
- Guiding computers just like we guide children.

Next up was Tomaso Poggio, McGovern Institute for Brain Research at MIT, who had some pretty clear ideas about the future of Machine Learning. According to Poggio, ''Machine Learning has for the last 20 years been about supervised learning. The next phase of ML is likely to be much more about unsupervised learning.''...
Much of humankind's remarkable mental aptitude can be attributed to analogical ability - the ability to perceive and use relational similarity... Apparently, it is possible to distinguish three stages in the process of making an analogy.
Analogy is the perception of like relational patterns across different contexts. The ability to perceive and use purely relational similarity is a major contributor - arguably the major contributor - to our species' remarkable mental agility.
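The idea of ''purely relational similarity'' can be made concrete with a toy sketch. This is my own illustration, not Gentner's actual structure-mapping engine; the domains, relation names, and the `align` function are all invented for the example. Two domains are represented as relation tuples, and objects are put in correspondence whenever they play the same role in a relation of the same name:

```python
# Toy illustration of relational alignment (NOT Gentner's SME algorithm):
# the classic solar-system / atom analogy, encoded as relation tuples.
solar = [("attracts", "sun", "planet"),
         ("revolves_around", "planet", "sun")]
atom = [("attracts", "nucleus", "electron"),
        ("revolves_around", "electron", "nucleus")]

def align(base, target):
    """Map base objects to target objects via relations with the same name.

    The objects themselves (sun vs. nucleus) share no surface features;
    only their roles in the relational structure match.
    """
    mapping = {}
    for rel_b, *args_b in base:
        for rel_t, *args_t in target:
            if rel_b == rel_t and len(args_b) == len(args_t):
                for b, t in zip(args_b, args_t):
                    mapping.setdefault(b, t)
    return mapping

print(align(solar, atom))  # {'sun': 'nucleus', 'planet': 'electron'}
```

The point of the sketch is only that the mapping is driven by shared relations, not by any attribute of the objects - which is what ''relational similarity across different contexts'' amounts to.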
With modest surprise, there may be little or no belief change, but conditions that heighten surprise - engaging foresight, or providing striking facts, episodes, or explanations - yield dramatic belief revisions as people seek coherence... Jeffrey Loewenstein, University of Illinois, continued with a talk about ''Surprise and Social Influence''.
This plot structure teaches an expectation with initial, repeated events. Then it applies a contrasting event to generate surprise... Phil and Rebecca Maguire, Kildare, Ireland, talked about surprise vs. probability.
The sports minister Svilen Neykov ordered a special review after 4, 15, 23, 24, 35, and 42 were drawn on Sept 6 and again on Sept 10 in consecutive lottery rounds. Well, but what if you get 1000 heads in a row?
The probability of this happening is 4.2 million to one, according to the Bulgarian mathematician Mihail Konstantinov, although he added that such coincidences can happen.
The most likely hypothesis is the one which describes the data most succinctly, according to Occam's Razor. In Maguire's words:
The identification of a pattern in supposedly random data suggests the existence of an underlying structure where none was anticipated, a discrepancy that results in an urgent representational updating process. Still, the Bulgarian lottery was apparently not rigged...
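To put rough numbers on these two examples, here is a back-of-the-envelope sketch. It is my own calculation, assuming a 6-of-42 lottery; the 4.2-million-to-one figure quoted above may come from a different computation. The surprisal of an event, -log2(p), measures in bits how much a probabilistic observer should be surprised:

```python
from math import comb, log2

# Number of possible tickets in a 6-of-42 lottery (an assumption).
tickets = comb(42, 6)  # 5,245,786

# Probability that a draw exactly repeats the previous one.
p_repeat = 1 / tickets

# Surprisal in bits: -log2(p). Rarer events carry more bits.
surprisal_lottery = -log2(p_repeat)  # about 22.3 bits

# 1000 fair coin flips all coming up heads: p = 2**-1000,
# i.e. exactly 1000 bits of surprisal - astronomically more
# surprising than the repeated lottery draw.
surprisal_heads = -1000 * log2(0.5)

print(tickets, round(surprisal_lottery, 1), surprisal_heads)
```

The Maguires' point, though, is that raw probability is the wrong measure: every specific lottery draw is equally improbable, but only the repeated one admits a short description (''same as last time''), and it is that compressibility - the hint of hidden structure - that triggers surprise.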
that experienced surprise reflects the level of difficulty of constructing or retrieving an explanation for why a surprising outcome may have occurred. I.e., here ''explanatory difficulties'' are taken as a measure of how big the surprise is.
Current theories of eye movements draw heavily on research on reading comprehension, which does not necessarily apply to comprehension of complex graphics and (dynamic, animated) pictures. So, who knows, we might have to develop new theoretical approaches for eye movements in learning from graphics to understand the underlying processes.
keeping the gaze on a fixation point may be effortful and require attentional resources.
... fixation control requires attention. If so, then the limited-capacity hypothesis predicts that fewer attentional resources should be available for covert attention shifts to a non-central target when people are instructed to maintain central fixation. Sadly, it is all a bit complicated, so we don't seem to end up with any very precise conclusions...
However, these two experiments do not clarify whether the attentional resources engaged in keeping the eye on the fixation dot affected only reflexive shifts in covert attention or also voluntary shifts... Again, the only thing everyone really seems to agree on is that more research is needed...
Grounded cognition theories state that conceptual knowledge is closely linked to the current situation and embodied in sensory dimensions. I.e., as we interact with the environment, ''knowledge related to our environment is continually recovered from memory ... which closely links the current situation to reactivated traces in memory''.
A cognitive developmental approach to robotics can help us understand the development of increasingly complex cognitive processes in natural and artificial systems, and how such processes emerge through physical/social interaction. They think that robotic development should be seen as a balance between an ''embedded'' robotic ''nature'' and nurture, and that this will give us (robot) learning and development.
Building a robot which reproduces such a developmental process seems effective. It will also contribute to a design principle for a robot that can communicate with human beings. If successful, such robot models should certainly be able to help us understand more about how humans learn...
We validate the proposed model by examining whether a real robot can acquire Japanese vowels through interactions with its caregiver.
Time flows from left to right when one is facing south, from right to left when one is facing north, toward the body when one is facing east, and away from the body when one is facing west. Time is indeed a complicated thing...
''What then is time? If no one asks me, I know what it is. If I wish to explain it to him who asks, I do not know.'' When you ask a 5-year-old to explain what a whole year is, they will typically make a huge gesture to indicate a whole year. Indeed, many types of temporal gesture, previously documented only in adults, are present in children (and, just as for adults, there might be links between these gestures and experience with reading and artifacts like calendars).
In other words, moral decisions are just another kind of ordinary decision. Yet, there is something unsettling about this conclusion: We often feel as if morality places an absolute constraint on our behavior, in a way unlike ordinary personal concerns. Again, somewhat unsettling: why do we have morality?
This is not surprising; moral rules can satisfy two important demands. The first is self-understanding. One of the most central themes of the last fifty years of research in social psychology is that humans continually attempt to construct consistent models of their own attitudes, beliefs and behavior. Then, what is the difference between moral cognition and non-moral cognition? Cushman writes:
The second demand is social coordination. Moral rules serve as social objects; we use them not only to guide our own behavior, but also to express and coordinate normative expectations within social groups. In order to communicate a moral value it helps to make it explicit. In order to apply it clearly and consistently it helps to treat it as inviolable.
But, we may need to think of non-moral cognition not as a complete blueprint, but instead as an underlying scaffold: A framework of common elements that supports a structure of more unique design. In ''Moral values and motivations: How special are they?'' Cushman writes:
To a large degree, we find that various aspects of moral value, including the subjective value of moral actions, outcomes, and their integration, are supported by a domain-general cognitive and neural architecture implicated in reward-related processes and economic decision-making. Again, ''morality'' and ''non-morality'' might use some of the same cognitive mechanisms:
... similarities between moral and non-moral value. Both motivate us to obtain certain goals or desirable outcomes - like the welfare of sick children or the newest technological gadget - and we experience pleasure in both cases when we succeed. And with these similarities we should perhaps be less surprised when things go terribly wrong:
Many individuals are also perfectly willing to bargain sacred values for monetary gain in practice (especially when they think no one is watching), as scandal-prone politicians often remind us. Indeed, morality is an interesting subject. And Cushman certainly gave a great introduction to the subject at the Symposium.
Which clearly makes it rather difficult for a machine to act morally.
- Moral perfection requires acting in the best interest of other moral agents and patients.
- Acting in the best interest of an agent requires knowledge of the inner life of that agent, including its potentially irrational dispositions.
- Ethically flawed and otherwise irrational human beings are moral agents.
- Moral perfection requires knowledge of the inner lives of ethically flawed and otherwise irrational human beings.
Humans seem to agree on instances of moral exemplars. At a very minimum, we can salvage moral principles in machines by designing them to pattern their own behavior (to a degree) after that of an identified moral authority. Certainly, humans are capable of some level of morality without infinite cognitive resources, so there might be a way forward for machine morality... Bello is certainly right when he states that:
Treating ethics in a vacuum under ideal conditions of unlimited cognitive resources and a static world is tantamount to performing a gigantic gedanken-experiment. One that may never have any import to real-world moral doings. Again, a brilliant and super-interesting talk!
Common sense suggests that each of us should live his own life (autonomy), give special consideration to certain others (obligation), have some significant concern for the general good (neutral values), and treat the people he deals with decently (deontology). It also suggests that these aims may produce serious inner conflict. Luckily, there is probably also an upside to all of these terrible moral dilemmas: they might give us a powerful window into how we actually make decisions...
If AIs only know what words or symbols are as references to other words or symbols, then there is never going to be any meaning in this. At least some of the symbols have to be grounded, not in other symbols, but in some kind of ''sensory-motor input''. John Searle's Chinese Room argument deals with such scenarios: if we just look up all our symbols as references to other symbols, then we end up with ''symbol manipulation'', not a system that knows or understands.
If we could ever explain ''why it feels like something'' (i.e. the Hard Problem), then we (cognitive scientists and philosophers) could all retire and go home.
In traditional AI, symbols are treated in a completely syntactical way; the relation of the symbols to the outside world is rarely discussed.
Still, humans need a connection from symbols to the outside world in order to make sense of what we are talking about. We say that the symbol is ''grounded''.
For computers, using symbols is fine as long as there is a human around to make sense of the symbols, to ''ground'' them.
The computer itself doesn't understand anything ...
A computer program, possessed only of internal symbols defined in terms of other symbols, can't ever get down to symbols that really mean something?
You will never develop a concept of an apple from reading or hearing about it.
Concepts require that you smell, feel, see, taste...
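The dictionary-go-round behind this argument can be illustrated with a toy sketch. It is my own illustration (the words, definitions, and the `ground` function are invented): every word is defined only by other words in the same dictionary, so chasing definitions just goes in a circle without ever reaching anything sensory.

```python
# Toy dictionary where every symbol is defined only by other symbols --
# lookup never "bottoms out" in anything non-symbolic.
dictionary = {
    "apple": "fruit",
    "fruit": "food",
    "food": "apple",  # circular: we never leave the symbol system
}

def ground(word, seen=None):
    """Chase definitions looking for a non-symbolic referent.

    There is none to find, so the trail of lookups simply comes
    back around to where it started.
    """
    seen = seen or []
    if word in seen:
        return seen + [word]  # went in a circle -- no grounding reached
    return ground(dictionary[word], seen + [word])

print(ground("apple"))  # ['apple', 'fruit', 'food', 'apple']
```

However long the chain of definitions, a system built only of such lookups never develops the concept of an apple - which is exactly the point about needing sensory-motor grounding.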
Still, grounding our world in our immediate experience might seem like a ''shaky'' thing to do. But grounding in the material world, like atoms and electrons, can quickly become an equally ''shaky'' thing to do, especially when you begin to think about quantum theory and things like that ...
Our motivation for a human-like appearance is that it gives more intimate contact... So, many of his latest robots are given a soft exterior, as well as the appearance of a child...
At the moment, our interaction with social robots is completely one-sided. These devices simply don't have the means to understand our words and gestures. That's something Scheutz wants to change. If we can create devices that seem more humanlike in their response to us, he reasons, they may be well suited for more complex work with people, such as tending to the basic needs of hospital patients or the elderly at home.
Thinking: the talking of the soul with itself. Jackendoff gives this Platonic version, from the Sophist:
Stranger: Are not thought and speech the same, with this exception, that what is called thought is the unuttered conversation of the soul with herself?
Theaetetus: Quite true.
Stranger: But the stream of thought which flows through the lips and is audible is called speech?

Sure, thought and speech are close, but not quite the same.
The asserted inseparability of thought and speech is an exaggeration; man does not think in sounds and through sounds, but rather with and in accompaniment of sounds. Jackendoff writes: ''Thought is independent of language, and the accompaniment of thought by conscious sounds is just that, an accompaniment''.
© September 2014 Simon Laub - www.simonlaub.dk - www.simonlaub.net - simonlaub.com
Original page design - September 10th 2014. Simon Laub - Aarhus, Denmark, Europe.