Impressions and Links
from AAMAS 2014

(13th International Conference on
Autonomous Agents and Multiagent Systems).


I had the great pleasure of taking part in AAMAS 2014, the leading scientific conference for research in artificial intelligence, autonomous agents, and multiagent systems.

Below you will find impressions from the conference, and links for further reading.

The AAMAS 2014 conference was held at the Marriott Rive Gauche Hotel in Paris, France.

I tried to follow as many talks as possible, but these notes are, of course, in no way, shape or form complete...
Rather, they were written on conference nights, as my way of keeping track of the events I attended, and as a way of storing links and references for future reference.

But enough disclaimers. Below, you'll find impressions and links from some of the conference talks and seminars.

Great stuff indeed. And much (AAMAS) to look forward to in the coming years!

1. Introduction.

1.1. Page Overview.
- Workshops, sessions and keynotes.

Understanding how agents interact with each other and the world has been studied in human psychology, psychiatry, animal learning theory, robotics and control theory, artificial intelligence and neuroscience.
Uniquely, AAMAS gives one an opportunity to hear a little (or rather, a lot) about autonomous agents and multiagent systems from all of these perspectives, at the same conference.

Below, in section 2, you will find impressions and links from the workshops I attended on Tuesday.
Sections 3-6 then follow with impressions and links from the sessions, demos and keynotes I attended on Wednesday, Thursday and Friday.

Please note: these notes don't do justice to the often brilliant presentations that initiated them! So, please read the original presentations to avoid any distortions...

2. Impressions from Tuesday, May 6th.

2.1. Workshop: Adaptive and Learning Agents (Room: Miles Davis B).

2.1.1. The day started with a talk about ''Difference Evaluation Functions'' by Logan Yliniemi, Oregon State University.

Or, in other words, the credit assignment problem: Determining how much one agent contributes, in a situation that involves many agents.
Always, a good question, indeed...

Here, in an interesting example, we heard a little about Mars rovers looking at points of interest (i.e. interesting rocks).
The problem to solve: is it possible to get the rovers to cover all the interesting points, instead of having them all look at basically the same rocks?

All clearly relevant, even if you don't find yourself on Mars along with a bunch of rovers looking for interesting rocks...
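The credit assignment idea behind difference rewards can be sketched in a few lines of Python. This is a hypothetical toy version (the rock names and set-based scoring are mine, not from the talk): an agent's credit is the global score minus what the team would have scored without that agent.

```python
def global_score(observations):
    """Team score: each point of interest counts once, no matter how many rovers see it."""
    covered = set()
    for obs in observations:
        covered.update(obs)
    return len(covered)

def difference_reward(observations, i):
    """D_i = G(z) - G(z with agent i removed): agent i's marginal contribution."""
    without_i = observations[:i] + observations[i + 1:]
    return global_score(observations) - global_score(without_i)

# Two rovers staring at the same rock get no individual credit;
# the rover covering a unique rock gets credit 1.
obs = [{"rock_A"}, {"rock_A"}, {"rock_B"}]
print([difference_reward(obs, i) for i in range(3)])  # -> [0, 0, 1]
```

An agent maximizing its difference reward is pushed toward unique coverage, which is exactly the rover dilemma above.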

2.1.2. Next up was Sam Devlin, University of York, who talked about ''Potential Based Reward Shaping''.

A talk centered on finding good reinforcement learning strategies that can shape which options to explore in a given environment (i.e. by choosing a reward strategy, through reward shaping, it is possible to guide the learning).
According to Sam Devlin, a good challenge towards 2025 would be to find ways to automate the design of such potential reward functions for a given domain.

Clearly, in the land of robots, automation is king...
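Potential-based shaping itself fits in a single line. Below is a minimal sketch, assuming a made-up one-dimensional world with a goal at x = 10; the shaping term gamma * Phi(s') - Phi(s) is the standard form that leaves the optimal policy unchanged.

```python
def shaped_reward(r, s, s_next, potential, gamma=0.9):
    """Add the potential-based shaping term F = gamma * Phi(s') - Phi(s) to reward r."""
    return r + gamma * potential(s_next) - potential(s)

# Toy potential: closeness to a goal cell at x = 10 on a line (an assumption for illustration).
potential = lambda s: -abs(10 - s)

# A step toward the goal earns positive shaping even before any real reward arrives.
print(shaped_reward(0.0, s=3, s_next=4, potential=potential) > 0)  # -> True
```

The open question from the talk is then how to derive a good Phi automatically for a given domain, instead of hand-crafting one as above.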

2.1.3. Devlin was followed by Matthew Taylor, Oregon State University, who talked about ''Exploiting structure and agent-centric rewards to promote coordination in large multiagent systems''.

He started out with a question to the audience:
How do you scale your learning algorithms when dealing with 10,000 agents or more?

Members of the audience suggested that we should:
- Use clusters of agents.
- Use a hierarchy of agents.

Still, all agreed (sort of) that there is probably no "one size fits all" method.

Still, Taylor suggested that we should take a closer look at the cleverly named FRACKED algorithm:
(F)actoring action (R)einforcement learning (A)gents in (C)ooperative tas(K) (E)xploiting (D)ifference rewards.

Based on stateless Q-learning and difference rewards, it was argued that this could give some pointers forward in these difficult multi-agent worlds.
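For a feel of the building block involved, here is a minimal stateless Q-learner of the kind such approaches build on (the two-action toy world and its rewards are invented for illustration): each agent keeps one Q-value per action and no state at all.

```python
import random

random.seed(1)  # make the toy run deterministic

def update(q, action, reward, alpha=0.1):
    """Stateless Q-learning: nudge the action's value toward the observed reward."""
    q[action] += alpha * (reward - q[action])

def select(q, epsilon=0.1):
    """Epsilon-greedy action selection."""
    if random.random() < epsilon:
        return random.randrange(len(q))
    return max(range(len(q)), key=lambda a: q[a])

q = [0.0, 0.0]
for _ in range(500):
    a = select(q)
    r = 1.0 if a == 1 else 0.2   # action 1 is the better choice in this toy world
    update(q, a, r)
print(q[1] > q[0])  # the learner has settled on the better action
```

In the large multi-agent setting, the reward r fed into such a learner would be a difference reward rather than the raw global score.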

Still, a lot remains unsolved when it comes to environments with that many agents.
E.g. could something like a ''phase change'' occur in these multi-agent worlds, and are such phenomena relevant for a detailed description of these multi-agent scenarios?

Well, well...

2.1.4. This was followed by Joost Broekens, Interactive Intelligence Group, Delft, the Netherlands, who talked about ''Grounding Emotions in Reinforcement Learning''.

Human emotions are complex things:
- Grounded in neurobiology.
- Complex feedback signals.

And very helpful when it comes to human-robot communication, as human emotions can help robots learn. I.e. robots can learn from the emotions humans display. Or, in other words, emotions can help shape a robot's reward function...

Aamas 2014. Pepper the humanoid robot

Clearly, we should keep in mind that human emotions are complex things.

Still, some emotions are straightforward:
E.g. when feedback is better than expected (a surprise), an agent becomes happy and doesn't explore much in its environment (feedback worse than expected leads to increased search).

So, in humans it might be possible (and perhaps more correct) to see emotions as linked to a change in rewards, i.e. unexpected rewards give a strong feeling of happiness.

It follows that when feelings stay the same we can experience:
- Joy habituation.
- Fear extinction.

Or, in other words, increasing unexpectedness increases the intensity of the given emotion.
So, in humans, we expect emotions and reinforcement (how to behave in the future) to be intimately linked.
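The link between reward-prediction error and emotion can be sketched in a toy function (the labels and the 0.5 learning rate are my own illustration, not Broekens' model): joy when feedback beats expectation, distress when it falls short, and habituation as the expectation catches up.

```python
def emotion_from_td_error(expected, received):
    """Map a reward-prediction error to a toy emotion label and intensity."""
    error = received - expected
    if error > 0:
        return "joy", error
    if error < 0:
        return "distress", -error
    return "neutral", 0.0

expected = 0.0
for r in [1.0, 1.0, 1.0]:             # the same positive feedback, repeated
    label, intensity = emotion_from_td_error(expected, r)
    print(label, intensity)           # joy 1.0, then joy 0.5, then joy 0.25
    expected += 0.5 * (r - expected)  # expectation catches up: joy habituation
```

The shrinking intensity on repeated identical feedback is exactly the joy habituation mentioned above; a string of worse-than-expected rewards would habituate fear the same way.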

In humans, it is also quite clear that emotions are used for communication (giving an evaluation of a situation).
So, if robots are programmed to express emotions (for humans to read) - then, obviously, these emotions should mean the same as they do among humans?

Where it should be noted that:
- We (humans) don't choose which emotions to display (Emotions reflect our internal states, and circumstances).

Still, emotional communication can clearly be very effective, and therefore also an obvious thing to look at when we consider human-robot communication.

Indeed, much to look forward to in the coming years!

2.1.5. Chad Crawford, Tandy School of Computer Science, University of Tulsa, talked about ''Evolving effective behaviours to interact with tag-based populations''.

Or: how should I appear (look) to maximize my value?
I.e. in a population where some have beards and some have glasses - what should one go for?
And how should we interact with people carrying these tags?

From commodities we know that the value of an item rises with its scarcity. So, agents will be more (or less) eager to cooperate with other agents based on how rare that cooperation is to them.
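A toy version of that scarcity idea (the tags and the simple 1/count rule are mine, for illustration only):

```python
from collections import Counter

# The value of cooperating with a tag is inversely proportional to how
# common that tag is in the population.
population = ["beard", "beard", "beard", "beard", "glasses", "glasses", "hat"]
counts = Counter(population)

def cooperation_value(tag):
    return 1.0 / counts[tag]

for tag in counts:
    print(tag, cooperation_value(tag))
# The rare "hat" tag commands the highest value; the common "beard" the lowest.
```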
Quite telling, and certainly interesting to see how this played out in these toy worlds.

It certainly gave you a feeling that even simple models can have something to say about our complex human world...

2.1.6. Other talks.

Many other great talks followed this first day at AAMAS 2014:

Saad Khan, Orlando University, talked about ''Towards learning movement in dense crowds''.

Which is interesting, as an agent tries to:
- Maintain a mission / goal.
- Maintain a social mission (where its movements should not be socially awkward).

E.g. one example could be the Big Dog robot following a soldier through human populations.

When the robot moves, it should be able to manage ''micro conflicts'' (conflicts of interest), as everyone tries to follow the route that is optimal for them. Much like the problem of coming up with a plan for moving a wheelchair across a room full of people.

Logan Yliniemi also returned with another talk, this time about ''Simulation of the introduction of new technologies in Air Traffic Management''.

His motivation was easy enough to understand: Could the human factor be taken out of Air Traffic Management?

But it quickly turned rather dramatic, as we began to take a closer look at the ''Norden Bombing Algorithm''.

Here, a number of airplanes are directed towards a single goal, in order to bomb it.
In the presentation, each agent was controlled by a neural network, and the focus was on keeping a (correct) spacing between planes, and making sure that the target was actually destroyed...

Well, another example of agent modelling...

Finally, after many other presentations, late that day, we were given a short introduction to some of the material on the FoCAS homepage (Fundamentals of Collective Adaptive Systems).

Indeed, all quite interesting!

So, all in all, a great first day of the conference!


3. Impressions from Wednesday, May 7th, 2014.

3.1. Humans and Agents.

3.1.1. Bilal Kartal, University of Minnesota, talked about ''User-driven Narrative Variation in Large Story Domains using Monte Carlo Tree Search''.

Clearly, agents should be able to come up with coherent and believable stories...

In their paper, the authors write:
Planning-based techniques are powerful tools for automated narrative generation...
Additionally, we propose a Bayesian story evaluation method to guide the planning towards believable narratives which achieve user-defined goals. Finally, we present an interactive user interface which enables users of our framework to modify the believability of different actions, resulting in greater narrative variety.
The talk started with comments about character (agent) intention in the narratives:
Riedl et al. proposed a novel approach to evaluate believability of computer generated narratives by establishing the causal relationship between actions and characters' intention and perception of the story world.
and comments about agents and plans in story telling:
the work of Theune et al. on Virtual Storyteller models autonomous agents and assigns them roles within the story by an external plot-agent.
In order to get there, we have:
- Formalized story domains.
- Means to evaluate stories.
- Means for story optimization through online planning.

Given an initial world representation, a story is then a set of consecutive activities.
These are the steps a ''story generator'' can follow in order to come up with a story (see the authors' paper for details).


That is, we have: Agents, Places, Items and Actions.
These then have to be bound together to come up with a story.

But we are clearly not finished yet. Stories must also be believable.
So, stories will also have to be controlled by believability heuristics.

An ''Arrest'':
- Is believable in stories where we have agents in the role of inspectors or police officers.
An ''Earthquake'':
- Is not believable in most contexts.
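A believability heuristic of this kind can be sketched as a filter over candidate actions. The action names and role preconditions below are hypothetical, loosely echoing the arrest/earthquake examples above:

```python
# Each action carries a precondition on the story state; an action is
# believable only if its precondition holds in the current story world.
BELIEVABILITY = {
    "arrest": lambda state: any(role in ("inspector", "police officer")
                                for role in state["roles"].values()),
    "earthquake": lambda state: False,   # not believable in most contexts
    "travel": lambda state: True,
}

def believable_actions(state):
    return [action for action, ok in BELIEVABILITY.items() if ok(state)]

state = {"roles": {"Alice": "inspector", "Bob": "thief"}}
print(believable_actions(state))  # -> ['arrest', 'travel']
```

In the actual system, such scores would weight the Monte Carlo tree search rather than act as a hard filter, but the shape of the idea is the same.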

At the end of the talk, a member of the audience wanted to know why it was called a story and not a plan, as a rollout of it looks pretty much like a plan.

The presenter seemed to agree that the difference might not be so obvious to pinpoint.

3.1.2. Joshua Jones, Georgia Institute of Technology, talked about ''Story similarity measures for Drama Management with TTD-MDPs''.

The presenter started out by informing us that ''Interactive Drama'' pretty much is another word for ''Video Games'', but it sounds better in grant applications, and the techniques might also be used in training settings...

Interactive Drama. Aamas 2014. Marriott rive Gauche, Paris


We have three goals, and we want to find an acceptable tradeoff between them.

We want to:
- Honor authorial intent.
- Allow for player autonomy
  (taking actions should impact the story).
- Allow for replayability
  (the same actions should not always create the same story).

A ''Drama manager'' must make sure that the story unfolds nicely.
The authors write:
...promising approach to this problem is the incorporation of an intelligent Drama Manager (DM) into the simulated environment. The DM can intervene in the story as it progresses in order to (more or less gently) guide the player in an appropriate direction.
DMs take actions in the environment, nudging players back on track (according to a story line specified by the author).

I.e. an author specifies desirable trajectories, out of the huge, huge set of possible trajectories.

In order to see if nudging is needed, we need to be able to calculate a ''distance between stories''. To do that, we treat stories as vectors and look at vector differences.
It turns out that the best measure (in the presenter's experience) was the ''max difference between vector values''. That gave the best values for the ''Drama Management'' to work with.
For more details, see the authors' paper.
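The ''max difference between vector values'' is just the Chebyshev (L-infinity) distance. A minimal sketch, with invented story feature vectors:

```python
def story_distance(u, v):
    """Chebyshev distance: the maximum component-wise difference."""
    return max(abs(a - b) for a, b in zip(u, v))

target = [4, 1, 3]    # trajectory the author wants (hypothetical features)
current = [3, 1, 1]   # trajectory the player is on
print(story_distance(target, current))  # -> 2
```

When this distance crosses some threshold, the drama manager would start nudging the player back toward the authored trajectory.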

3.1.3. Gale Lucas talked about ''It's only a computer: The Impact of Human-Agent Interaction in Clinical Interviews''.

In clinical and medical interviews, people are more likely to give ''self-disclosure'' if there is ''rapport''.
Rapport -> Disclosure.

A virtual interviewer makes the user feel unobserved (or less observed).
There is an illusion of privacy, which is also a good setting for disclosure.

So, an environment where the user feels unobserved is a good start for creating conditions where disclosure is likely.

The authors' hypothesis was then tested in real life.
And sure enough, when participants were not observed by humans, there was more disclosure (expressed sadness, more sadness in facial expressions, etc.).

3.1.4. Zoraida Callejas, University of Granada, Spain, talked about ''A computational model of social attitudes for a virtual recruiter''.

The presented project was part of the EU's TARDIS programme:
We are focusing our research on one important aspect: helping young people improve their performance in job interviews through interactions with virtual agents acting as recruiters.
Implemented on an Aria-like platform, the objective was to create a program that could either challenge the user, or help create a more comfortable environment where the user is more at ease.

The Platform:
Is a simulation platform for job interviews, based on the interaction of youngsters with virtual agents acting as recruiters. Those virtual agents are credible, yet tireless interlocutors. You are able to have realistic socio-emotional interactions with them as many times as you wish. You can modulate their emotional display and simulate a diverse range of possible interview situations.


Based on predefined difficulty levels and user anxiety, the system will compute objectives for the current turn (what should happen next).

Anxiety is calculated based on:
- Number of self-references.
- Use of pronouns.
- Variety of vocabulary.
- Preference for negative vs. positive content.

And (ultimately) what kind of body language the system sees:
The virtual recruiters can recognize the gestures and the tone of voice of the user and on that basis ''decide'' autonomously which action is best-suited in each situation, without following a predefined script. This allows users to train without any social risk.
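To make the anxiety features listed above concrete, here is a hypothetical scoring function; the word lists and weights are entirely made up for illustration and are not the TARDIS system's actual model:

```python
def anxiety_score(tokens):
    """Toy anxiety score from text features; word lists and weights are illustrative only."""
    words = [w.lower() for w in tokens]
    self_refs = sum(w in {"i", "me", "my", "myself"} for w in words)
    pronouns = sum(w in {"i", "me", "my", "you", "we", "they", "he", "she"} for w in words)
    variety = len(set(words)) / len(words)       # low vocabulary variety -> more anxious
    negative = sum(w in {"not", "never", "can't", "fail", "bad"} for w in words)
    return 0.3 * self_refs + 0.2 * pronouns + 0.3 * (1 - variety) + 0.2 * negative

calm = "We designed the project together and shipped it on time".split()
tense = "I I can't do this I never do anything right my my fault".split()
print(anxiety_score(tense) > anxiety_score(calm))  # -> True
```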


The program will then pose questions to the user, like:
- What do you know about us? (friendly)
- What is your knowledge of the TARDIS project so far? (hostile)

3.2. Social Networks.

3.2.1. Liat Sless talked about ''Forming coalitions and facilitating relationships for completing tasks in social networks''.

Clearly, interpersonal relationships between members affect team performance.
So, how do we organize coalitions for a task?

The authors write:
We assume that a central organizer desires to build coalition structures to carry out a given set of tasks, and that it is possible for this central organizer to create new relationships between agents, although such relationship-building is assumed to incur some cost. Within this model, we investigate the problem of computing coalition structures that maximize social welfare, and the problem of computing core-stable coalition structures
The organizer should try to work towards goals like:
- Maximising the social welfare (in the network).
- Stabilising the network.

Obviously, this is a hard problem, and especially so when there are many negative edges in a network.
The authors end up concluding that:
We provided general results, and established that the problem of finding a coalition structure maximizing the social welfare is tractable only when both k and the number of negative edges are constrained.


3.2.2. Alan Tsang et al., University of Waterloo, talked about ''Opinion Dynamics of Sceptical Agents''.

The authors write:
In many settings, agents exhibit skepticism in the presence of people whose beliefs differ radically from their own, and they are reluctant to be persuaded by such individuals. We present a model of opinion dynamics where agents are receptive toward other agents that have similar opinions, but remain skeptical of agents holding disparate opinions.
Finally, we show that even skeptical agents are able to come to an early consensus and take coordinated action to reach a final opinion in most settings; but agents in homophilic networks may fail to converge to a single opinion.
Loads of good (and relevant) comments about opinions in the human world:
- Humans can easily end up experiencing ''cognitive dissonance'' (when we hold contradictory beliefs). Often, such situations only improve when we get rid of our wishful thinking.
- Your friends are likely to be similar to you, and to hold the same opinions.
- Agents can often only gain more influence if they behave empathically towards those they want to influence.
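The flavor of such a model can be sketched with a bounded-confidence update (a simplification of my own, not the authors' exact model): agents only average with neighbors whose opinions fall within a skepticism threshold.

```python
def step(opinions, threshold=0.3):
    """One round: each agent averages only with opinions within the threshold."""
    new = []
    for x in opinions:
        close = [y for y in opinions if abs(x - y) <= threshold]
        new.append(sum(close) / len(close))
    return new

opinions = [0.0, 0.1, 0.2, 0.9, 1.0]
for _ in range(20):
    opinions = step(opinions)
print([round(x, 2) for x in opinions])  # -> [0.1, 0.1, 0.1, 0.95, 0.95]
# Two clusters survive: the skeptics never bridge the gap between the camps.
```

This mirrors the paper's observation that skeptical agents reach consensus quickly within camps, while strongly homophilic structures can fail to converge to a single opinion.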

3.3. Argumentation and Negotiation.

3.3.1. Terry Payne, University of Liverpool, talked about ''Negotiating over ontological correspondences with asymmetric and incomplete knowledge''.
The authors write:
As agents are not guaranteed to share the same vocabulary, correspondences (i.e. mappings between corresponding entities in different ontologies) should be selected that provide a (logically) coherent alignment between the agents ontologies...
We formally present an inquiry dialogue and illustrate how agents negotiate by exchanging their beliefs of the utilities of each correspondence...
(Very) informally: Alice asserts a belief, and Bob might accept or reject this belief based on prior beliefs.
The common pool of beliefs is then updated accordingly.

The trick is then how to fit agents together autonomously, based on their views (and how easily these can be changed).


3.3.2. Avi Rosenfeld et al., talked about ''A chat based negotiation Agent''.

''NegoChat: A Chat-Based Negotiation Agent'':
To date, a variety of automated negotiation agents have been created. While each of these agents has been shown to be effective in negotiating with people in specific environments, they lack the natural language processing support required to enable real-world types of interactions. In this paper we present NegoChat, the first negotiation agent that successfully addresses this limitation.
In a job application scenario, negotiation issues might be:
- Salary.
- Job title.
- Social benefits.
- Promotion possibilities.
- Number of working hours.

A ''Genius'' system (or randomizer?) sets values for the parameters above.
The recipient can then accept or reject them.

NegoChat then uses bounded rationality to come up with results in the following negotiation:
In Aspiration Adaptation Theory (AAT), issues are addressed based on people's typical urgency, or order of importance. If an agreement cannot be reached based on the value the human partner demands, the agent retreats, or downwardly lowers the value of previously agreed upon issues, so that a ''good enough'' agreement can be reached on all issues.

I.e. the plan would be something along the lines of: the start offer is a sort of anchoring. But then the system tries to incorporate human aspirations, and come up with offers that fulfill these (stated) aspirations:
The idea behind Aspiration Adaptation Theory (AAT) is that certain decisions, and particularly our most complex decisions, are not readily modeled based on standard utility theory. For example, assume you need to relocate and choose a new house to live in. There are many factors that you need to consider, such as the price of each possible house, the distance from your work, the neighborhood and neighbors, and the schools in the area. How do you decide which house to buy? Theoretically, utility based models can be used. However, many of us do not create rigid formulas involving numerical values to weigh trade-offs between each of the search parameters. AAT is one way to model this and other similar complex problems.
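The retreat mechanism can be sketched as a loop over issues in urgency order. This is a loose, hypothetical reading of AAT (the issue names, costs, budget, and the 10% retreat step are all invented for illustration):

```python
def negotiate(issues, demands, budget):
    """issues: names in urgency order; demands: the partner's cost per issue."""
    agreed = {}
    for issue in issues:
        agreed[issue] = demands[issue]
        while sum(agreed.values()) > budget:
            # Retreat: shave 10% off the least urgent issue agreed so far.
            least_urgent = [i for i in issues if i in agreed][-1]
            agreed[least_urgent] *= 0.9
    return agreed

deal = negotiate(["salary", "job title", "working hours"],
                 {"salary": 60, "job title": 25, "working hours": 30},
                 budget=100)
print(sum(deal.values()) <= 100)  # a "good enough" deal fits the budget -> True
```

The most urgent issue (salary, here) is protected, while less urgent issues absorb the retreats, which is the spirit of the "downwardly lowers" behaviour quoted above.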


3.4. Keynote speaker.

Iain D. Couzin, Department of Ecology and Evolutionary Biology, Princeton, talked about ''Sensory networks and distributed cognition in animal groups''.

See this Youtube video for one version of the talk.

Couzin started out by talking about locusts:
Locust plagues affect one in ten people on the planet, but are not properly studied, probably because the plagues occur in poor countries.

Locusts travel in huge swarms, but they actually only align to neighbors within 50 centimeters.
If one removes the locusts' ability to sense biting behaviours, they no longer align.

But the talk quickly focused on the more general problem (see the abstract):
I will address how, and why, animals coordinate behavior. In many schooling fish and flocking birds, decision-making by individuals is so integrated that it has been associated with the concept of a ''collective mind''. As each organism has relatively local sensing ability, coordinated animal groups have evolved collective strategies that allow individuals, through the dynamical properties of social transmission, to access higher-order capabilities at the group level. However, we know very little about the relationship between individual and collective cognition.
I investigate the coupling between spatial and information dynamics in groups and reveal that emergent problem solving is the predominant mechanism by which mobile groups sense, and respond to, complex environmental gradients. This distributed sensing requires rudimentary cognition and is shown to be highly robust to noise.

Fish swarms are also quite interesting.
(When attacked) fish have avoidance behaviours, and isolation is dangerous (predators go for the isolated animals). So, as it is dangerous to be isolated, this is probably what drives aggregation...

Within the fish swarm, leaders might have goal directions, but not so strongly that they end up leaving the group behind.

According to Couzin, it is important that we don't see the organisms as particles (it is tempting, but gives wrong results...). Instead, the organisms are probabilistic, decision-making agents.
Couzin also thought it could be interesting if someone looked at a swarm as having a connectome (and being controlled by that).

In swarm simulations, it is all about the ''wisdom of crowds''.
Birds only have a noisy estimate of where to go, but collectively their estimates become better.
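That ''wisdom of crowds'' effect is easy to demonstrate numerically (the heading, noise level, and group size are invented for illustration):

```python
import random

random.seed(0)
true_heading = 90.0

# Each "bird" holds a noisy estimate of the direction to fly.
estimates = [true_heading + random.gauss(0, 20) for _ in range(500)]

individual_error = sum(abs(e - true_heading) for e in estimates) / len(estimates)
collective_error = abs(sum(estimates) / len(estimates) - true_heading)
print(collective_error < individual_error)  # the group mean beats the typical bird -> True
```

With independent noise, the error of the group average shrinks roughly with the square root of the group size, which is why even quite poor individual estimators can steer a flock well.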

Swarms. Aamas 2014. Marriott rive Gauche, Paris

4. Impressions from Thursday, May 8th, 2014.

4.1. Humans and Agents II.

Outside, in Paris, a lot of celebrations of La Victoire 8 Mai 1945.
Inside, in the Marriott Conference Center, we continued with the next session about ''Humans and Agents''.

4.1.1. Dan Bohus, Microsoft Research, talked about ''Directions Robot: In the wild experiences and lessons learned''.
The project presented was part of a larger project within Microsoft called ''Situated Interaction''.

A lot of technologies went into making this robot.
On the hardware side, the project is using a Nao robot, a Point Grey camera, a Kinect microphone, etc.
And on the software side a number of techniques are used:

- Vision processing and speech recognition are used to help come up with a scene analysis.
- Then the robot system will use a Dialogue Management System to control the interaction with visitors.
- A Speech Synthesis System will talk to the visitors, giving them the directions the robot got from the ''directions backend''.

The authors write:
We introduce Directions Robot, a system we have fielded for studying open-world human-robot interaction. The system brings together models for situated spoken language interaction with directions-generation and a gesturing humanoid robot. We describe the perceptual, interaction, and output generation competencies of this system. We then discuss experiences and lessons drawn from data collected in an initial in-the-wild deployment
The robot was put out in the wild (in a Microsoft office building):

Here, 565 interactions were initiated by the robot.
The robot needed to deal with 2 or more visitors in 59% of the cases (a situation that makes engagement more difficult).

MS Directions Robot. Aamas 2014. Marriott rive Gauche, Paris

- The robot had 18% incorrect starts
   (people came too close to the robot without actually wanting help from it).
- In correct starts, the robot began conversations based on face location,
   body orientation and temporal dynamics.
   Still, in the correct starts, it got 22% incorrect terminations, as people turned to each other, which made the robot end the conversation incorrectly.


Clearly, engagement decisions are pretty difficult, when we have such kinds of interference and joint reasoning.

In the conversations, 59% of the questions were about directions in the (Microsoft Research) building.
Still, (in the wild) speech recognition remains a big challenge.
22% of interactions contained ''out of domain'' utterances, chit-chat among the humans, etc.

On the production side, synchronized speech and gestures seemed to be picked up pretty well by the humans.

Obviously, it took a lot of effort to get this system up and running.
Many components were ready and available, but debugging parallel systems was not easy.
The system has been 4 years in the making, so far.

4.1.2. Julien Saunier et al., Montpellier SupAgro, talked about ''Mixed Agent / Social dynamics for emotion computation''.

The authors write:
Affective computing is the study and development of systems and devices that can recognise, interpret, process, and simulate human affects. In this context, computational modelling of emotion is a major challenge in order to design believable virtual humans.
...Here we propose to calculate the emotional dynamics within a multi-agent architecture. This mechanism is based on three dynamics: Event, temporal and external (Events impact the emotions depending on the internal state of the agent and its perception of the event).
(As I understood it) their verification found that their results were roughly on track, and consistent with the literature on emotional contagion in groups.

But much more is to come. Interestingly, the authors said (and write) that they will move on and try to deal with things like:
A recent review of psychological studies has shown the existence of moderating factors of emotional contagion, such as social power or gender, which were simplified in this article. From the architecture viewpoint, these moderators should be included in the bodies (for individual moderators) and environment (for social moderators).
Furthermore, we plan to replicate other psychological phenomena such as the impact of emotional contagion on cooperative decision-making, where the interplay with higher cognitive functions is more complex.

4.1.3. Sanmay Das and Allan Lavoie, Washington University in St. Louis, talked about ''The Effects of Feedback on Human Behaviour in Social Media: An Inverse Reinforcement Learning Model''.

I.e. how do social interactions change us?

Looking at data from Reddit questions, the authors addressed questions like:
- Does social feedback influence which community users contribute to?
- Is it true that the more positive feedback I receive, the more likely it is that I will contribute to this group again?

- A simple model predicts user behaviour (changes in using the social media).
- Findings could be a basis for studying complex collective dynamics.

The authors write:
Users spend more time in communities where they have received social-psychological feedback, and in communities where they have previously invested more time. While behavior is stochastic, an analogy to humans playing mixed strategies in matrix games provides a simple and effective learning model in this setting.
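One simple stochastic-choice model in this spirit (my own toy construction, not the paper's inverse-RL model): the probability of posting in a community grows with the feedback accumulated there, via a softmax over past rewards, echoing the "mixed strategies in matrix games" analogy.

```python
import math

def choice_probabilities(feedback, temperature=1.0):
    """Softmax over accumulated feedback per community."""
    weights = {c: math.exp(f / temperature) for c, f in feedback.items()}
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

# Hypothetical accumulated feedback scores per community.
feedback = {"r/science": 5.0, "r/askreddit": 2.0, "r/niche": 0.0}
probs = choice_probabilities(feedback)
print(max(probs, key=probs.get))  # the user most likely posts where feedback was best
```

The temperature parameter controls how deterministic the choice is; high temperatures keep the behaviour stochastic, as observed in the data.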
4.1.4. Patrick Gebhard, Augsburg, talked about ''Exploring Interaction Strategies for Virtual Characters to Induce Stress in Simulated Job Interviews''.

The presented project was about motivating people going for job interviews (it was made under the Tardis umbrella). More precisely, how to tailor / adapt the behaviour of the virtual recruiter.

We basically have two kinds of virtual recruiters:
an ''understanding'' recruiter and a ''demanding'' recruiter.

The understanding recruiter will use:
- Narrow gestures.
- Positive facial expressions.
- A friendly gaze.
- Head tilting.
- Conveyed interest.

The demanding recruiter will use:
- Spacing gestures.
- Neutral facial expressions.
- A dominant gaze.
- A staring gaze.
- Conveyed neutrality.

The virtual scene was made with SceneMaker.

SceneMaker allows us to make autonomous and scripted behaviour models (for head movements, gaze, breathing, etc.).

Scenemaker. Aamas 2014. Marriott rive Gauche, Paris

Preliminary results: The demanding character is really perceived by humans as being very dominant. Whereas the friendly recruiter is perceived to be really friendly.

Impact on user behaviour:
Humans give longer utterances when faced with the demanding recruiter.
It is thought/speculated that this is a sort of fighting-back mechanism, to push away the demanding recruiter.

4.2. Emotions.

4.2.1. Yu Ding et al., Institut Mines-Telecom, Paris, talked about ''Laughter Animations''.

Laughter is an essential communicative signal in human-human communication.

We have 14 laughter phonemes. And for each phoneme we have actions for lip, jaw, eyebrow, and head movements. I.e. laughter involves a lot besides mere sounds; there is a lot of body and shoulder motion going on as well.
Indeed, the whole thing is very complex, as some actions (say, head and eyebrow movements) might depend on the preceding and current laughter phonemes.

Still, he is pretty pleased with the simulation his team has come up with. Indeed, it was so good that participants/users expressed disappointment when the model did not laugh when they told jokes.

Read more here.

4.2.2. Chung-Cheng Chiu et al., USC Institute for Creative Technologies, talked about ''Gesture Generation with low dimensional Embeddings''.

A great talk about gestures for virtual characters. The authors write:
Work that lays a preliminary foundation toward building a comprehensive gesture controller. The critical next step is to increase the expressiveness of the gesture controller so that the mapping learned by the speech-annotation mapping process can realize expressive gestures more tightly coupled to the uttered content.

The gestures can be captured by motion capture, or handcoded in great detail.
Both methods are very costly.

Here we want to predict (or produce) gesture types from speech.

Gestures. Aamas 2014. Marriott rive Gauche, Paris

A motion library stores a number of these gestures.
And we can produce (complex) gestures by combining them.
Indeed, it looked pretty cool.

4.3. Humans and Agents III.

4.3.1. Samhar Mahmoud et al., King's College London, talked about ''A multi-agent system for recruiting patients for clinical trials''.

Recruitment is usually done through
- Advertising.
- Human Recruiters.
- Practitioners.

Which is all pretty time-consuming and costly.
Here we try to replace a human recruiter with an alert system (loads of legal issues and privacy concerns have to be dealt with before this is possible in real-life scenarios).

The authors write that such a system should be possible:
we propose a multi-agent architecture that helps ease the process of recruiting patients for clinical trials. This paper presents results from a deployment of the architecture, showing that it succeeds in recruiting a sufficient number of patients for multiple clinical trials.
All pretty straightforward apparently, until we come to the legal issues...
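A much-simplified sketch of the alert-system idea: an agent matches (already consented, anonymised) patient records against a trial's eligibility criteria and flags candidates. The field names and criteria below are hypothetical, not from the paper:

```python
# Sketch, assuming a much-simplified version of the idea: match
# patient records against trial eligibility criteria and flag
# candidates. Field names and criteria are hypothetical.

def eligible(patient, trial):
    """Check a patient record against a trial's inclusion criteria."""
    lo, hi = trial["age_range"]
    return (lo <= patient["age"] <= hi
            and trial["condition"] in patient["conditions"])

def recruit(patients, trial, needed):
    """Return up to `needed` candidate patients for the trial."""
    candidates = [p for p in patients if eligible(p, trial)]
    return candidates[:needed]

patients = [
    {"id": 1, "age": 54, "conditions": {"diabetes"}},
    {"id": 2, "age": 71, "conditions": {"diabetes", "hypertension"}},
    {"id": 3, "age": 33, "conditions": {"asthma"}},
]
trial = {"condition": "diabetes", "age_range": (40, 75)}
found = recruit(patients, trial, needed=2)
```

In the deployed architecture the interesting part is of course everything this sketch leaves out: consent, privacy, and the legal constraints on who may see which records.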

4.4. Keynote speaker.

Michael Luck, Department of Informatics, King's College London, talked about ''From Agents to Electronic Order''.

In the talk preview one could read:
Trust, reputation, norms and organisations are all relevant to the effective operation of open and dynamic multiagent systems.
Inspired by human systems, yet not constrained by them, these concepts provide a means to establish a sense of order in computational environments (and mixed human-machine ones).

In this talk I will review previous work across a range of areas in support of the need to develop theories and systems that provide the computational analogue of common social coordination mechanisms used by humans...
Luck started out by quoting Aaron Sloman: ''Computer Science is a sub-branch of Artificial Intelligence''.
Indeed, computing today has a lot to do with interactions; it is not just about numerical computations.

Interestingly, the concepts we use today to organize human societies, and bring them into play, might also help us describe agent (human and robot) societies.

Luck thinks (agent) societies can be described along 3 dimensions:

X-axis: Describes motivations of agents.
Y-axis: Describes norms and their enforcement in the society.
Z-axis: Describes trust, i.e. willingness to cooperate with others.

Trust. Aamas 2014. Marriott rive Gauche, Paris

Societies and organisations should be concerned with:
- The extent to which individual agents are willing to comply with norms.
- The effort expended to ensure norm compliance through enforcement and the severity of sanctions.

Individual agents should be concerned with:
- The achievement of individual objectives.
- The extent to which other agents are willing to perform any task resulting from interactions.
Agents find themselves in worlds where:
X-axis represents motivations:
- Increase represents prevalence of malicious motivations, indicating that agents are more likely to defect if they see more utility in alternatives.
Y-axis represents organisations, norms and their enforcement:
- Increase indicates prevalence of stricter norms and enforcement
(Can constrain agent motivations and prevent malicious behaviour if intended).
Z-axis represents trust:
- Increase indicates increase in trust that agents place in others and increase in willingness to cooperate with others.
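Luck's three axes can be pictured as a toy decision rule: each agent sits at a point (motivation, norm strength, trust) in the unit cube, and we guess whether it cooperates. The threshold rule below is my own illustrative assumption, not from the talk:

```python
# Toy illustration of the three dimensions: an agent at a point
# (malicious motivation, norm strength, trust) in [0,1]^3.
# The cooperation threshold is an illustrative assumption, not Luck's.

def cooperates(motivation_malice, norm_strength, trust):
    """Cooperate when trust plus norm pressure outweighs malicious motivation."""
    return trust + norm_strength > motivation_malice + 0.5

# A trusting agent under strict norms cooperates even when tempted...
a = cooperates(motivation_malice=0.6, norm_strength=0.8, trust=0.7)
# ...while a malicious agent in a lax, low-trust society defects.
b = cooperates(motivation_malice=0.9, norm_strength=0.1, trust=0.2)
```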
In simulations (1 million runs) we would expect to see that ''norms fail in the long run''.

Agents can only punish neighbors. Just as agents can only learn from neighbors.
Still, we can assume that there could be such things as ''global behaviours''.

At the end of the talk, Luck called for an Electronic Order (e-order):
- No more distinction between natural and artificial systems
(We now work within a mixed system).
- So we need a science of electronic order.
But there is, of course, work ahead:
- To determine appropriate norms.
- To facilitate norm emergence.
Digital guarding:
- Detect anomalies in behaviour.


5. Demos.

Lots of great demos Wednesday and Thursday during the lunch break:

I especially liked:

''An Interactive Virtual Audience Platform for Public Speaking Training'' by Mathieu Chollet, Giota Stratou, Ari Shapiro and more.

Here one could practice public speaking (say, presenting a talk), and see the results on a virtual crowd (see paper).
A TV screen would show the (virtual) audience doze off, look at each other and look distracted, if one's talk wasn't going particularly well.

If you tried to talk to the whole audience (not just the friendly guy on the first row on the left) it helped engage more of the audience, as did speaking in an engaging way (instead of just going fast, or slow, or being monotonous).
Wonderful stuff. And probably a very helpful tool as well!

Public speaking. Aamas 2014. Marriott rive Gauche, Paris

Thursday also had some great demos.

I especially enjoyed ''Adding BDI agents to the MatSim traffic simulator'' by Qinyu Chen (Computer Science and Information Technology, RMIT, Australia) - a pretty cool traffic simulator.

MatSim is a mature and powerful traffic simulator, used for large-scale traffic simulations, primarily to assess the likely results of various infrastructure or road-network changes.
MatSim is designed for finding stable traffic patterns over many iteration days.

Here MatSim was coupled with the BDI system Gorite to provide additional functionality within MatSim.
I.e. BDI agents receive relevant information (percepts) from the environment (either MatSim info, or information introduced from elsewhere), and determine what to do.
Actions (driving to a certain location) are then fed into MatSim.
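The coupling cycle described above (percepts in, deliberation, actions back to the simulator) can be sketched as a toy loop; the class and names below are hypothetical stand-ins, not the actual Gorite/MatSim interface:

```python
# Sketch of the BDI-to-traffic-simulator coupling: a toy
# percept -> deliberate -> act cycle. Names are hypothetical,
# not the actual Gorite/MatSim API.

class BDIDriver:
    def __init__(self, goal_location):
        self.beliefs = {}             # what the agent currently knows
        self.desire = goal_location   # where it wants to be

    def perceive(self, percept):
        """Absorb information coming from the traffic simulator."""
        self.beliefs.update(percept)

    def deliberate(self):
        """Pick an action (an intention) from beliefs and desires."""
        if self.beliefs.get("at") == self.desire:
            return ("park", self.desire)
        return ("drive_to", self.desire)

def simulator_step(agent, percept):
    """One coupling cycle: percept in, action fed back to the simulator."""
    agent.perceive(percept)
    return agent.deliberate()

driver = BDIDriver(goal_location="depot")
action1 = simulator_step(driver, {"at": "home", "congestion": "low"})
action2 = simulator_step(driver, {"at": "depot"})
```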

It all looked very convincing.

Looked at many other demos. Many of them were pretty cool.
But enough about the demos at Aamas 2014 here...

Aamas 2014. Marriott rive Gauche, Paris

6. Impressions from Friday, May 9th.

6.1. Blue Sky Ideas.

(Room: Parc Montsouris B).

6.1.1. The day started with a talk about ''Challenges for Multi-Agent Coordination Theory Based on Empirical Observations'' by Victor Lesser, University of Massachusetts, Amherst.

The authors write:
The study of coordination and cooperation among agents has been at the heart of the multi-agent field since its inception. Since this early work, significant research progress and understanding about the nature of coordination has been made. Especially important has been the development of distributed constraint optimization (DCOP) and decentralized Markov decision processes (DEC-MDPs) frameworks over the last decade. These formal frameworks allow researchers to understand not only the inherent computational complexity of coordination problems, but also how to build optimal or near-optimal coordination strategies for a wide variety of multi-agent applications.
I.e. DCOP and DEC-MDP have provided significant progress in understanding coordination.
Still, there is no formal theory to explain observations:
- Implicit vs. Explicit coordination.
- Approximate vs. Detailed Coordination.
- Frequency of coordination
(You can get near optimal solutions with less coordination. But with coordination at just the right time. Communication can actually be a distraction for agents doing work locally...).
- Phase transitions
(Might lead to ''self-organization''? Could be introduced by changing the reward to the agents?).
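The frequency-of-coordination observation can be illustrated with a toy cost model (my own, not from the talk): coordinating every step is expensive, never coordinating lets local plans drift apart, and some intermediate frequency minimises total cost:

```python
# Toy illustration (my own, not from the talk) of the frequency-of-
# coordination trade-off: messages cost something, and local plans
# drift apart between coordination points.

def total_cost(steps, every_k, msg_cost=5.0, drift_cost=1.0):
    """Communication cost plus miscoordination cost over a run."""
    messages = steps // every_k
    # Drift accumulates between coordination points: 0+1+...+(k-1) per window.
    drift = messages * sum(range(every_k)) * drift_cost
    return messages * msg_cost + drift

costs = {k: total_cost(100, k) for k in (1, 5, 25)}
best = min(costs, key=costs.get)   # the intermediate frequency wins
```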

Still, existing formal frameworks lack a general understanding of coordination.
At the end of the talk it was speculated that a possible answer might involve ''the frequency of interaction''.

6.1.2. Luis Antunes et al., LabMag, Lisbon, Portugal, talked about ''The Geometry of Desire''.

The authors write:
Desire is the key connection to the agent's creator, and the ultimate source of behaviour...
Agents continuously adapt their desires by means of both their intrinsic motivations, as well as a mimetic mechanism (as described in René Girard's theory). Agents acquire new goals not through fitness or novelty but out of mechanisms such as envy, imitation and competition...
I.e. agents make choices based on utility, imitation, value sharing etc.

Interestingly, desire for a given choice is a key connection to an agent's creator.
Desires make the agent move forward. But desires also hinder adaptability and the emergence of novel social behaviours. A (simple) self cannot create desires on its own. Instead it will adopt the desires of others (?), and make these desires its own.
Only when the self becomes truly autonomous can it have its own desires...

And, obviously, a fitness function for happiness becomes more difficult to calculate when the agent sets its own desires.

In the end the authors conclude:
Still, even if desires might not be a very rational concept, we want to treat the subject in a rational way. According to Herbert Simon:
''Anything that gives us more information allows us to become more rational''.

6.1.3. Frank Dignum et al., University of Utrecht, talked about ''From autistic to social agents.''

The authors argued that we don't pay enough attention to the social settings of agents:
We don't function as rational agents with the addition of some ''sociality'' modules to make us aware of other people. Rather we are social at the base and this sociality pervades all our reasoning, motivation, and any other aspect of our behavior.
They continue:
As said before, just stating that agents are social because they have a communication language or can be programmed to work in a team does not make the agents social.
There are (at least) two issues that need to be investigated to accomplish this:
1. Allow for social motivations, i.e. motivations to reach a social rather than a practical goal.
2. Recognize that all actions have both practical and social effects that have to be modeled and accounted for.
So, what are the fundamental aspects of social individuals?
- Individual (purpose and meaning of action) (Weber, 1900s)
- Socialist (Durkheim, 1910s)
- Textualists (foundations of social theory) (Habermas, 1960s)
- Social Practice (Social reality shaped by practice) (Latour/Reckwitz, 1990s)

(Agents') motives can be considered the core ''energizers'' of subsequent action. Besides the biological (homeostatic) motives, such as hunger and need for sleep (which are, in fact, not very salient in most social situations), McClelland distinguishes four motives:
- Achievement motives.
- Affiliation motives.
- Power motives.
- Avoidance motives.
Identity - Agents want to be the same (belong), and yet be different...
And in life you want a balance between variance and consistency (same - not the same).

Still, humans are, of course, pretty advanced agents:
(Humans have) social filters (on the inputs, to decide what is taken into account), and use social rules to decide output. Clearly, these filters are not present in simpler agents.

According to the authors:
In general, motives are primary drivers that are always considered when a trigger arrives from the environment. Values are cognitive components that are considered when a cognitive choice has to be made about the course of action to follow.

6.1.4. Michael Rovatsos, Edinburgh, talked about ''Multiagent Systems for Social Computation.''

The author writes:
Massive advances in network connectivity and increased affordability of computer hardware have recently led to a flurry of web-based applications that mediate interaction within human collectives. This has, in turn, led to an increased interest in ''collective intelligence'' applications:
Ridesharing applications where algorithms support travellers in collaborative route planning while also managing congestion in the urban areas involved; healthcare systems that monitor patients and their clinical treatment plans while prioritising use of staff time and resources based on long-term data analysis; software development platforms that allow companies to outsource production to teams of freelancers...
Different from earlier systems, as these systems:
Embody a multi-perspective notion of hybrid man-machine intelligence, where the capabilities of humans and computational artefacts complement each other (rather than machines imitating human intelligence as in traditional AI).
Importantly, such systems are continually co-designed by programmers and end users through human and machine contributions.
In the future it is likely that many more sites will combine human and machine intelligence. E.g. in healthcare, it takes many stakeholders to create a plan for a patient, and it is likely that the best way to do this will be through a combination of human and machine intelligences...?

It is all about orchestration of the right kind of computation.
- Verification - Verify a system is computing the right function.
- Recruitment - Identify who can perform the computation.
- Incentivisation - Ensure user participation by incentives.

Massive online collaboration is a reality.
- But combined with artificial intelligence it isn't (yet).
- Agent frameworks might help in this area.

And what is going to be computed? Well, take TripAdvisor as a simple example of what is to come.

6.1.5. Samarth Swarup et al., BioInformatics, Virginia Tech, talked about ''Computational Epidemiology as a Challenge Domain for Multiagent Systems.''
Many problems are socially contagious. Smoking spreads through peer influence / accepted social norms, and is therefore rather ''policy resistant''.
Obesity might also be socially contagious.

Found the modelling of information spread through personal conversations especially interesting:
We model and examine the spread of information through personal conversations in a simulated socio-technical network that provides a high degree of realism and a great deal of captured detail. To our knowledge this is the first time information spread via conversation has been modeled against a statistically accurate simulation of people's daily interactions within a specific urban or rural environment.
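As a (drastically) simplified sketch of this kind of model, here is a toy SI-style process in which information spreads through conversations on a contact network, hop by hop; the network and parameters are invented, and the real simulation is statistically calibrated against actual daily interactions:

```python
# Sketch (simplified SI-style model, not the authors' detailed
# socio-technical simulation): information spreads through personal
# conversations on a contact network, day by day.
import random

def spread(contacts, seeds, days, p_tell=0.5, rng=None):
    """Simulate who has heard a piece of information after `days` days."""
    rng = rng or random.Random(0)
    informed = set(seeds)
    for _ in range(days):
        newly = set()
        for person in informed:
            for friend in contacts.get(person, []):
                if friend not in informed and rng.random() < p_tell:
                    newly.add(friend)   # told in a conversation today
        informed |= newly
    return informed

contacts = {"ann": ["bob", "eve"], "bob": ["carl"], "eve": [], "carl": []}
heard = spread(contacts, seeds={"ann"}, days=3, p_tell=1.0)
```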

6.2. ACM - SIGAI Talk.

Michael P. Wellman, Strategic Reasoning Group, University of Michigan, ''Putting the Agent in Agent-Based Modeling''.

Agent modelling is useful in many areas, from social systems and traffic systems to economics etc.

And sometimes the models don't need to be very complicated to provide great insights.
Still, (complex) modelling exists within areas like political science.
The various areas might have little in common, except perhaps that they all use models where the notion of ''agent'' is beneficial.

Obviously, it would be interesting if we could explain some multiagent phenomena through general models.
It would also be nice to have some rules for selecting (relevant) agent behaviours.
Should we be guided by:
- Plausible heuristics?
- Rationality assumptions (Game theory)?
- Historical observations of agent behaviour?
- History of system outcomes?
- Evolutionary stability concerns?
- Results of reinforcement learning?
- Others?
Still, agents are in use, and are already influencing the world.
- Realistic agent-based modelling calls for serious agent models.
- Boundaries between ABM (Agent Based Modeling) and MAS (Multi Agent Systems) are probably unnecessary.
- Agent behaviour assumptions are pivotal...
Questions from the audience initiated some final thoughts from Wellman:
In the end we only care about the plausibility of the overall simulation
(sub-parts within the model can be replaced).
Agents could run agent simulations themselves to decide what to do. So, where should we stop this complexity?
Here Michael Wellman suggested that ''we don't restrict ahead of time what is relevant''.

A good answer. And a great talk.

Aamas 2014, Marriott rive Gauche, Paris

And the end of a great conference !

Simon Laub. Aamas 2014, Marriott rive Gauche, Paris

© October 2014 Simon Laub - - -
Original page design - October 20th 2014. Simon Laub - Aarhus, Denmark, Europe.