Impressions and Links from

The 27th International Joint Conference on Artificial Intelligence
and 23rd European Conference on Artificial Intelligence
Stockholm, Sweden. July 16 - 19, 2018.

Stockholmsmässan, Stockholm, Sverige

I had the great pleasure of taking part in Ijcai-Ecai 2018 (the 27th International Joint Conference on Artificial Intelligence and the 23rd European Conference on Artificial Intelligence, Stockholm, Sweden, July 16 - 19, 2018).

Below you will find impressions from the conference, and links for further reading.

The Ijcai, Ecai 2018 conference was held at the Stockholmsmässan, in Stockholm, Sweden.
Ijcai 2018

I tried to follow as many talks as possible. But, well, these notes are, of course, in no way, shape or form complete...
Rather, these notes were written on conference nights, as my way of keeping track of the events that I attended, and as a way of storing links and references for later.

But enough disclaimers. Below, you'll find impressions and links from some of the conference talks and seminars, including links for further reading.

Great stuff indeed. And much (AI stuff) to look forward to in the coming years!


1. Introduction.

Stockholmsmässan, Stockholm, Sverige

Ijcai, Ecai 2018.
Stockholmsmässan, Stockholm, Sweden.
16 - 19 July 2018.


1.1. Page Overview.
- Sessions and Keynotes.

Below, in sections 2 - 5, you will find impressions and links from sessions, demos and keynotes that I followed Monday, Tuesday, Wednesday & Thursday.
In section 6 you will find a photo montage from this year's World Computer Chess Championship (Icga 2018, part of this year's Ijcai conference). Section 7 gives a few impressions from the exhibition hall at Ijcai 2018. Section 8 shows some of the interesting posters, upcoming events etc. that I spotted at Ijcai. Section 9 wraps up the week.

Please notice: These notes don't do justice to the often brilliant presentations that initiated them!
So, please read the original presentations to avoid any distortions ...

Xiaoi Robots, Shanghai, China

2. Impressions from Monday, July 16th.

2.1. ''Victoria'' - track.

2.1.1. Xiaojuan Ma, Hong Kong University of Science and Technology.
Talked about ''Human-Engaged AI''. And, fascinatingly, about ''what strategies AI systems can employ to engage users in an appropriate manner''.
Sure, a robot might try to get a human's attention through social signals such as gaze, body and head orientation, nodding etc. But things begin to be ''interesting'' when we go from submissive robots to robots that tell humans: ''I haven't finished yet, turn to me, and follow my instructions ...''.
Well, well....
Or what about chairs that tell users when they (the humans) don't sit correctly on the chair...?
How should such chairs address their human users?
2.1.2. Jun Zhu, Tsinghua Lab of Brain and Intelligence.
Talked about ''Probabilistic Machine Learning''.
Where noisy or ambiguous data makes it harder for intelligent systems to operate in the real world. Still, a combination of deep neural networks and probabilistic modeling, Bayesian Deep Learning (BDL), here in the form of a probabilistic programming library named ZhuSuan, might give a promising way forward.
2.1.3. Christopher Amato, Northeastern University, Boston.
Talked about ''Decision-Making Under Uncertainty in Multi-Agent and Multi-Robot Systems''.
Where things quickly become pretty complex in the real world. E.g., drones might only have one downward-facing camera that allows them to reason about where they are. Add to that concerns about battery life, the position of other drones etc., and it quickly becomes clear that even relatively simple surveillance tasks are actually pretty complex.
Sure, (off-line) deep learning with a lot of examples might solve many of these problems, but it still might not be the full answer here. Especially as efficient online learning remains a challenge.
Again: Interesting!

Ijcai 2018, Stockholm

Tencent Ai Lab

Ijcai 2018, Stockholm

2.2. Interactive, Collaborative Robots, by Danica Kragic.

Danica Kragic (Royal Institute of Technology, Stockholm, Sweden) talked about ''Interactive, Collaborative Robots: Challenges and Opportunities''.

Despite recent progress, robots are still:
Largely preprogrammed for their tasks, and most commercial robot applications still have a very limited ability to interact and physically engage with humans.
In other words, something as simple as cutting bread is still challenging for robots. And there are still no robots that can wash the dishes.
Something as simple as a good grasp (to quickly take something in your hand and hold it firmly) is not simple, if you are a robot...

Most robots today still only have two fingers (grippers) to work with. Many robots generate motion strategies without any sensory feedback (for simple manipulation tasks), which obviously only gets them so far.

Still, adding visual and haptic feedback will probably allow robots to better understand object shapes, scene properties and in-hand manipulation. But it is of course still not easy for robots to understand how pushing an object will change the scene in cluttered environments.

Learning and collaboration might come easy to humans. But robots are obviously not quite there yet, even though robots that are better in integrating motor and sensory channels might eventually give us much smarter robots... Still, there is a lot to learn...

But, certainly, a great talk about recent progress.

2.3. AI and Software Engineering.

2.3.1. Jian Li et al., Chinese University of Hong Kong.
Talked about ''Code Completion with Neural Attention and Pointer Networks''.
Today, software tools (IDEs) provide a set of helpful services to accelerate software development. Intelligent code completion is one of the most useful features in IDEs, suggesting the next probable code tokens, such as method calls or object fields, based on existing code in the context. Traditionally, code completion relies heavily on compile-time type information to predict the next tokens.
But there has also been some success for dynamically-typed languages, where researchers have treated these languages as natural languages, and trained code completion systems with the help of large codebases (e.g. GitHub):
In particular, neural language models such as Recurrent Neural Networks (RNNs) can capture sequential distributions and deep semantics.
Here the authors propose their own system, based on improvements on existing techniques. A system which turns out to be quite effective.
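As a toy illustration of the statistical idea behind such systems (my own sketch, not the authors' neural model), the next code token can be suggested from simple bigram counts over a tokenized corpus:

```python
from collections import Counter, defaultdict

# Toy statistical code completion: treat code as token sequences and
# suggest the most likely next token from bigram counts, the way
# language-model approaches over large codebases (e.g. GitHub) do
# before neural models refine the idea.
corpus = [
    "for i in range ( n ) :",
    "for j in range ( m ) :",
    "if x in cache :",
]
bigrams = defaultdict(Counter)
for line in corpus:
    toks = line.split()
    for a, b in zip(toks, toks[1:]):
        bigrams[a][b] += 1          # count token-pair occurrences

def complete(token):
    """Suggest the most probable next token after `token`, or None."""
    if token not in bigrams:
        return None
    return bigrams[token].most_common(1)[0][0]

print(complete("in"))      # 'range' (seen twice, vs 'cache' once)
```

Real systems replace the bigram table with an RNN (plus attention and pointer networks) trained on large codebases, but the underlying prediction task is the same.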
2.3.2. Hui-Hui Wei et al., Novel Software Technology, Nanjing University.
Talked about ''Positive and Unlabeled Learning for Detecting Software Functional Clones with Adversarial Training''.
Where ''clone detection'' is a vital component in today's software development, needed to avoid having copies of the same code in several places of one's program.
Detecting pieces of code with similar functionality (usually created by copying, pasting or modifying existing code) that look (slightly) different is not so easy though.
Here, the authors suggest that the clone detection task can be formalized as a Positive-Unlabeled learning problem, and indicate that they had some success with it.
Again: Interesting!
2.3.3. Krzysztof Krawiec et al., Poznan University of Technology, Poland.
Talked about ''Counterexample-Driven Genetic Programming: Stochastic Synthesis of Provably Correct Programs''.
Or..., can we find a program that matches these input-output examples?
Here it is suggested that
Genetic programming is an effective technique for inductive synthesis of programs from tests, i.e. training examples of desired input-output behavior. (But) Programs synthesized in this way are not guaranteed to generalize beyond the training set...
The authors then describe an improvement over such a (too) ''simplistic'' genetic-algorithm approach. An improvement that synthesizes correct programs fast, using few examples.
In the authors' words:
This work may pave the way for effective hybridization of heuristic search methods like GP with spec-based synthesis.
For more about the context for this work, see also the SyGuS Competition.
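The counterexample-driven loop itself can be sketched in a few lines (a brute-force, CEGIS-style search over a tiny fixed candidate pool of my own invention, not the authors' genetic programming): synthesize a program from examples, verify it against the full specification, and feed any failing input back in as a new training example.

```python
# Toy counterexample-driven synthesis: the spec, domain and candidate
# pool below are hypothetical stand-ins for illustration only.
spec = lambda x: 2 * x + 1                   # full specification
domain = range(-10, 11)                      # finite domain, so verifiable
candidates = [("x+1",   lambda x: x + 1),
              ("3*x-1", lambda x: 3 * x - 1),
              ("2*x+1", lambda x: 2 * x + 1)]

examples = [(0, 1)]                          # start from a single example
while True:
    # 'synthesis' step: first candidate consistent with current examples
    name, f = next((n, g) for n, g in candidates
                   if all(g(x) == y for x, y in examples))
    # 'verification' step: look for a counterexample in the domain
    cex = next((x for x in domain if f(x) != spec(x)), None)
    if cex is None:
        break                                # correct on the whole domain
    examples.append((cex, spec(cex)))        # the counterexample-driven step

print(name)   # '2*x+1'
```

The genetic-programming version replaces the fixed candidate pool with an evolving population, but the counterexample feedback loop is the same.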

3. Impressions from Tuesday, July 17th.

3.1. The Moral Machine Experiment, by Jean-François Bonnefon.

Jean-François Bonnefon (Toulouse School of Economics) talked about The Moral Machine (a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars).

The Moral Machine Experiment

E.g. should an ''ethical'' car sacrifice its own passengers to save (more) pedestrians in case of an emergency?
People tend to say yes.
But also indicate that they wouldn't like to buy such a car...
Of course...

Take the test here.

3.2. Sentiment Analysis and Argument Mining.

3.2.1. Yuxiang Zhang et al., College of Computer Science and Technology, Tianjin, China.
Talked about ''Text Emotion Distribution Learning via Multi-Task Convolutional Neural Network''. Where the authors use a convolutional neural network for text emotion analysis.

Often, such classifiers will try to classify texts into categories like joy, surprise, disgust, sadness, anger or fear. But it is all a little bit tricky, of course, as a single sentence can evoke multiple emotions with different intensities. Here, though, their multi-task solution seems to perform favorably compared to other proposed approaches.
3.2.2. Xin Li et al., Chinese University of Hong Kong.
Talked about ''Aspect Term Extraction with History Attention and Selective Transformation''. Where ''aspect term extraction'' is all about detecting users' opinions and locating opinion indicators.
Classification models based on e.g. SVMs have had some success, but the authors' new framework seems to be able to categorize texts more accurately.
Clearly, a pretty interesting result.

Ijcai 2018, Stockholm

Tencent gets go-ahead to test autonomous cars in Shenzhen

China wants to bring artificial intelligence to its classrooms to boost its education system

3.3. Model-free, Model-based, and General Intelligence, by Hector Geffner.

Hector Geffner (Universitat Pompeu Fabra, Barcelona) talked about Model-free, Model-based, and General Intelligence.

Talk: Model-free, Model-based, and General Intelligence by Hector Geffner

Geffner started this super interesting talk by introducing us to ''learners'' and ''solvers'' (and planners, which are a particular type of solvers), addressing the similarities and differences between learners and solvers, and the challenge of integrating them.

''Learners'' (deep learners and deep reinforcement learners) have been part of most AI success stories in recent years (image understanding, speech recognition, games etc).

Learners require a lot of training (along with an error-function, in the case of supervised learning) in order to become experts within a certain field. But once they have been trained, the learners are very fast.
Solvers, on the other hand, deal with new problems, and always have to think a little bit (longer) before they are able to solve a problem.


In both deep learning (DL) and deep reinforcement learning (DRL), training results in a function f that has a fixed structure (given by a deep neural network).
In both DL and DRL, the most common algorithm for minimizing the error function is stochastic gradient descent where the parameter vector θ is modified incrementally by taking steps in the direction of the gradient.
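As a minimal sketch of that update rule (my own toy linear-regression example, not from the talk): each step nudges the parameters against the gradient of the squared error on one randomly chosen training example.

```python
import random

# Stochastic gradient descent fitting y = w*x + b. The parameter vector
# theta = (w, b) is modified incrementally by taking small steps against
# the gradient of the error on a single sampled example.
random.seed(0)
data = [(x / 50.0, 3.0 * (x / 50.0) + 0.5) for x in range(-50, 51)]

w, b = 0.0, 0.0
lr = 0.1                                 # learning rate (step size)
for step in range(5000):
    x, y = random.choice(data)           # one stochastic sample
    err = (w * x + b) - y                # prediction error
    w -= lr * err * x                    # gradient of 0.5*err**2 w.r.t. w
    b -= lr * err                        # ... and w.r.t. b

print(round(w, 2), round(b, 2))          # close to 3.0 and 0.5
```

Deep networks do exactly this, only with millions of parameters and gradients computed by backpropagation instead of by hand.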

Reinforcement learning

Reinforcement learning. An agent takes an action in an environment, which gives a reward that is fed back to the agent.

Solvers take a convenient description of a particular model instance and automatically compute its solution. For a classical planner, the inputs are classical planning problems and the output is a plan that solves the problem.
Learners require training, which is often slow, but then they are fast; solvers can deal with new problems with no training, but only after some deliberation. Solvers are thus general, domain-independent as they are called...
... provided a suitable representation of the problems; learners need experience on related problems.
Psychologist Daniel Kahneman refers to this as ''thinking, fast and slow''.
Where one system is fast and automatic and the other is slow and general:

System 1 (fast): associative, unconscious, effortless, parallel, automatic, heuristic, specialized.
System 2 (slow): deliberative, conscious, effortful, serial, controlled, systematic, general.

Both systems have limitations, where:
A key restriction of learners relying on neural networks is that the size of their inputs x is fixed. This implies that learners cannot emulate solvers even over specific domains.
Still, the speed of the learners is obviously a good thing. Just as explanation and accountability through models (solvers) is a good thing.
So, we need both.

And, moving forward, we need to better integrate these two systems, within one system:
The solutions to some of the bottlenecks faced by each approach, that often have to do with representational issues, may lie in the integration that is required to resemble the integration of Systems 1 and 2 in the human mind.
An awesome talk!

For an ambitious version of how to build a whole brain, see my CogSci 2013 page

3.4. Evolution of the Contours of AI.

3.4.1. Wolfgang Bibel, Darmstadt University of Technology.
Talked about ''A Scientific Discipline (Once) Named AI''.
Where he argued that it is a disadvantage that we now have many independent scientific efforts striving for the same solutions (that used to be considered, simply, AI), as it results in an enormous redundancy and hinders synergy.

In order to avoid this splintering, he suggests that AI should get back to its roots, and again be an IPsI science (informational, psychological and intellectual), that covers all of the subfields:
A similarly fundamental importance as Physics or Biology. To have a name for talking about it, we used ''IPsI-science'' (or ''Intellectics'' suggested by the author more than three decades ago). While Physics is concerned with matter and Biology with life, IPsI-science deals with IPsI-stuff (ie. informational, psychological and intellectual stuff).
Of course ...!

3.5. Angry Birds - Competition.

Angry Birds Competition at Ijcai 2018

According to the dictionary: ''Bird'', modern slang meaning ''young woman'', is from 1915, and probably arose independently of ''burd'', c. 1300, meaning maiden or young girl.

But, surely, the organizers didn't have this in mind when they assigned rooms to these two events...?

Angry Birds Competition at Ijcai 2018
Angry Birds Competition at Ijcai 2018

According to the organizers of the Angry Birds AI Competition:
The long term goal is to build an intelligent Angry Birds playing agent that can play new levels better than the best human players.
This is a very difficult problem as it requires agents to predict the outcome of physical actions without having complete knowledge of the world, and then to select a good action out of infinitely many possible actions. This is an essential capability of future AI systems that interact with the physical world.
The Angry Birds AI competition provides a simplified and controlled environment for developing and testing these capabilities.
Angry Birds Competition at Ijcai 2018

4. Impressions from Wednesday, July 18th.

4.1. Markets without Money, by Nicole Immorlica.

Nicole Immorlica (Northwestern University, Microsoft Research) talked about Maximizing the Social Good: Markets without Money:

Nicole Immorlica, Markets without Money

In traditional markets, people are paid to produce valuable resources.
Resources are sold at an appropriately high price, guaranteeing that the buyers had high value for them.

However, in many settings there might be alternatives to money.
Social media might give ''fame'' as ''payment''. We might want to sell certain positions (say, entrance to a certain school) not to those who are willing to pay the most, but to those who are willing to risk the most.

Interestingly, whatever algorithm we end up using (within a certain public domain), it is probably important that the algorithm can be ''explained to a 5th grader'', if we want public trust.

Sounds logical. But it will of course be interesting to see how many domains of our lives eventually end up being money-free. Well, we will see...

Hyperbolic deep-learning. A nascent and promising field

Ijcai 2018, Stockholm

Ijcai 2018, Stockholm

4.2. Intelligible Intelligence & Beneficial Intelligence, by Max Tegmark.

Max Tegmark (MIT) talked about Intelligible Intelligence & Beneficial Intelligence.

Max Tegmark at Ijcai. For more, see Max Tegmark on Wikipedia

Talking about the future of AI, three things become interesting. For details, see the AI Summit.

Starting with the ''steering'' part: it is clearly not easy to trust AIs that we don't understand.
So, the more we understand how AIs work, the more likely it becomes that we will be able to steer them,
at least a little bit.

So, we probably need to understand what is going on in e.g. the deep neural nets (that are part of many AI programs):
Deep neural networks are now better than humans at tasks such as face recognition and object recognition. They've mastered the ancient game of Go.
Actually, a bit puzzling, as there are orders of magnitude more mathematical functions than possible networks to approximate them. And yet deep neural networks somehow get the right answer...
According to Tegmark, the answer is that the Universe is governed by a tiny subset of all possible functions/laws.
When written down mathematically, functions that are rather simple.
Meaning that deep neural networks don't have to approximate any possible mathematical function, only a tiny subset of them.
For more, see ''Link Between Deep Neural Networks and the Nature of the Universe'' and ''Why does deep and cheap learning work so well?''.

In the talk, Tegmark didn't go that far, but maybe we do indeed live in a world where algorithms rule, and where understanding comes in the form of other algorithms? Perhaps even understandable algorithms... For more, see ''Solomonoff Induction''.

Nevertheless (whether or not we live in a world of algorithms), today we still don't have understandable/explainable AI, and the public's trust in AI is still rather hesitant...

So, what does this tell us about our (humanity's) destination?
And what can we do to improve the situation?

Tegmark mentioned various ''AI safety'' initiatives that will help ensure that artificial intelligence remains safe and beneficial. E.g., the ''AI Safety Research program'', funded by Elon Musk, has given over $2 million in funding to various ''AI safety'' projects.
See more here.


Max Tegmark at Ijcai. For more, see the Universes of Max Tegmark, MIT press
Max Tegmark at Ijcai. For more, see the Universes of Max Tegmark, MIT press

4.3. Natural Language Processing and Computer Vision.

4.3.1. Rui Yan, Institute of Computer Science and Technology, Peking University.
Talked about ''Chitty-Chitty-Chat Bot: Deep Learning for Conversational AI''.

Conversations are often a very complex thing.

Come to think of it, we probably don't even fully understand all of the conversations we take part in... E.g., can we always precisely explain why we respond the way we do...? Probably not...

A good conversation is all about exchanging information.
We want to learn something when we speak to someone.

And, all of this is obviously not easy to integrate into an AI system:
To build a conversational system with moderate intelligence is challenging, and requires abundant dialogue data and interdisciplinary techniques...
Generally speaking, we have two types of conversational AI. One is focused on helping the user with a particular task: buying a ticket, reserving a seat etc. Other chatbots deal with conversations in (more) open domains, where the hope is that they will entertain us or give us emotional companionship.
Ideally, the intelligent conversational AI should be able to output grammatical, coherent responses that are diverse and interesting.
Which is a difficult task, even for humans...

Most standard chatbot systems today presume that only humans will take the initiative in conversations, meaning that the computer only needs to ''respond'' to the best of its capability. This gives the chatbot a ''passive'' role. But this is clearly not how it works in human-human conversations, where both participants can take the initiative.

So, well, there are still many problems to solve in this exciting area, before we have chatbots that can pass the Turing Test.
4.3.2. Jing Lu et al., Human Language Technology Research Institute, University of Texas at Dallas.
Talked about ''Event Coreference Resolution: A Survey of Two Decades of Research''.

Looking at a headline like:
Nelson Mandela had died aged 95 in Johannesburg.
The world has lost a great man.
It might be easy for us to see that these two sentences, and events, are related. But clearly not so easy for an AI program to see:
Event coreference resolution consists of grouping together the text expressions that refer to real-world events (also called event mentions) into a set of clusters such that all the mentions from the same cluster correspond to a unique event.
The tricky thing is that:
Event mentions are more diverse syntactic objects, including not only noun phrases, but also verb phrases, sentences and arguably whole paragraphs...
Making this a very difficult problem. But, rather amazingly, some results have been achieved.
Hopefully, with more to come in the future.

4.4. Panel: The AI Strategy for Europe.

Cécile Huet from the Robotics Sector, European Commission, talked about the AI strategy for Europe.

''Why we should fund robotics research & innovation'' was one of the headlines. An excellent talk, giving us some hope that Europe is still hanging in there!

An updated Sistine Chapel ceiling...

Indeed, already looking forward to the European Robotics Week.

5. Impressions from Thursday, July 19th.

5.1. Victoria - Early Career.

5.1.1. Cynthia Matuszek, University of Maryland, USA.
Talked about ''Grounded Language Learning: Where Robotics and NLP Meet''.

Matuszek started by telling us that words are like icebergs.
And, interestingly:
Physically embodied agents offer a way to learn to understand natural language in the context of the world to which it refers.
Human language does not exist in isolation; it is learned, understood, and applied in the physical world in which people exist.
Connecting symbols (linguistic tokens and learning) to real-world percepts and actions is at the core of giving meaning to these symbols.

Clearly, not an easy task...
Language-using robots must learn how words are grounded in the noisy, perceptual world in which a robot operates... Still, natural language systems can benefit from the rich contextual information provided by sensor data about the world.
A super interesting talk!
5.1.2. Anca Dragan, University of California, Berkeley, USA.
Talked about Optimizing Robot Action for and around people.

Indeed, Should Robots be Obedient?
Following the order that a human gives -- seems like a good property for a robot to have. But, we humans are not perfect and we may give orders that are not best aligned to our preferences. We show that when a human is not perfectly rational then a robot that tries to infer and act according to the human's underlying preferences can always perform better.
Self-driving cars that always ''play nice'' might never be able to switch lanes on a motorway...
So, sure, self-driving cars shouldn't learn to be rude, but they should be able to ''influence'' other drivers through their behaviour (i.e. ''make room for me'').
And autonomous cars also need to be able to make good compromises between ''comfort'' and ''efficiency''.

Clearly, autonomous cars still have a lot to learn...

5.2. On Machines and Humans.

5.2.1. Spyridon Samothrakis, Institute of Analytics and Data Science, Essex, U.K.
Talked about ''Viewpoint: Artificial Intelligence and Labour''.

Both the first industrial revolution, from 1760 up to 1840, and the second, from 1840 up to 1914, brought significant changes in working hours...
First, going from 180 working days per year at 11 hours per day to almost full-year working and a 69-hour week after the first revolution.
Starting from 1870, data show a consistent drop in working hours. A trend that lasted until 1980 (when we observe a rise in working hours again, especially in the New World).
This tendency of decreasing work hours from 1870 led people, like Keynes, to predict a four-hour working day (i.e. a 20-hour working week). But that, of course, never materialized.

Society is being reshaped, and it is therefore obviously hard to predict what will actually happen in the coming years. Still, people haven't stopped being interested in having ''The Good Life''.
And people certainly still want their luxuries - including fame and fortune.

Automation will, of course, give ''money''- and ''social''- power. Even if you have to work hard to be part of that new industrial system.

Given our arguments above, we think the logical outcome of improved automation is an increase in working hours, a continuation of an existing trend.

Maybe we should take a closer look at how we make (good) decisions...

5.3. Research Excellence Award 2017 - Andy Barto.

In this talk, Andrew Barto gave a brilliant introduction to some of the key ideas and algorithms of reinforcement learning.

Where reinforcement learning is:
A computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment.
Some of the key ideas can be found in Sutton and Barto's RL book.

Here, we got some interesting thoughts about Harry Klopf's ''Hedonistic Neuron''.
Sutton and Barto write:
Klopf recognized that essential aspects of adaptive behaviour were being lost as learning researchers came to focus almost exclusively on supervised learning. What was missing, according to Klopf, were the hedonic aspects of behaviour, the drive to achieve some result from the environment, to control the environment towards desired ends and away from undesired ends. This is the essential idea of trial-and-error learning.
And reinforcement learning is therefore obviously (also) relevant in disciplines like machine learning, neuroscience, behaviorist psychology etc.:

Reinforcement learning

It might be possible to trace some of the ideas back to Edward L. Thorndike (1911):
Satisfaction to the animal will, other things being equal, be more firmly connected with the situation, so that, when it recurs, they will be more likely to recur; those which are accompanied or closely followed by discomfort to the animal will, other things being equal, have their connections with that situation weakened.
Which brings us to ''learning with a critic'':
Where the ''critic'' returns the reinforcement, which evaluates the quality of the action taken by the agent (in the environment), depending on its current state.
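A minimal sketch of ''learning with a critic'' (my own toy example, not from the talk): tabular Q-learning on a five-state corridor, where the reward signal returned by the environment plays the critic's role, evaluating each action the agent takes.

```python
import random

# Tabular Q-learning on a 5-state corridor. Reaching the rightmost state
# pays reward 1; the reward acts as the critic's evaluation of each
# action taken in the agent's current state.
random.seed(0)
n_states, actions = 5, (-1, +1)              # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma = 0.5, 0.9                      # step size and discount

for episode in range(200):
    s = 0
    while s != n_states - 1:
        a = random.choice(actions)           # explore at random (off-policy)
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0   # the critic's evaluation
        target = r + gamma * max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# The greedy policy should prefer moving right in every non-terminal state.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)}
print(policy)   # {0: 1, 1: 1, 2: 1, 3: 1}
```

Delayed reward is the point here: only the final step is rewarded, yet the value propagates backwards until every state prefers moving right.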
Some reinforcement learning milestones were also covered.
A great talk.

5.4. Research Excellence Award 2018 - Jitendra Malik.

Malik has worked on many different topics in computer vision, computational modeling of human vision, computer graphics, the analysis of biological images etc. And Jitendra Malik is also a part of Facebook's AI Research team.

The brain's visual system has many tasks.

It must be able to do many things at once. So, no wonder that computer vision is (also) difficult.

The Visual Cortex

When we recognize e.g. a bird, we do not only understand what shape it has, we also have an idea about what kind of texture the bird has.

In order to understand what action a ''thing'' is going to take, we need to understand movement and goal.
Action = movement + goal.

Indeed, the human visual system is, of course, pretty smart.
A child doesn't need 1,000 examples of zebras in order to recognize a zebra.
Children already have a lot of (visual) knowledge that can be used to understand what a zebra is.

So, artificial visual systems still have a long way to go.
Still, a very inspiring talk.

5.5. McCarthy & Computers and Thought Awards.

The conference ended with the McCarthy & Computers and Thought Awards.

And two more great speeches (by Milind Tambe and Stefano Ermon).
Milind Tambe (USC Center for Artificial Intelligence in Society, conducting research in Artificial Intelligence to help solve the most difficult social problems facing our world) got the John McCarthy Award for using AI to tackle difficult societal problems in homeland security, policing, wildlife conservation etc.
While Stefano Ermon got the Computers and Thought Award.

Which was the end of a great conference!

Except for a few extra chats in the lobby about machine learning, deep learning and the next generation of AI...

Wallenberg AI, Autonomous Systems and Software Program. Linköping, Sweden

Ijcai. Stockholm, Sweden. 2018.

6. The 2018 World Computer Chess Championship,
ICGA 2018 (at Ijcai, Stockholm).

The World Computer Chess Championship 2018

The World Computer Chess Championship 2018
The World Computer Chess Championship, Stockholmsmässan, July 2018

The World Computer Chess Championship 2018

Komodo won this year's World Computer Chess Championship 2018 after a play-off with GridGinkgo.

The World Computer Chess Championship, Stockholmsmässan, July 2018
The World Computer Chess Championship, Stockholmsmässan, July 2018

7. The exhibition hall, Stockholmsmässan,
IJCAI 2018.

Alibaba Group, Hangzhou, Zhejiang, China

Alibaba Group, Hangzhou, Zhejiang, China

For more, see Agents in Traffic and Transportation (ATT 2018)


Watson, IBM

Centre for Applied Autonomous Sensor Systems (AASS)

The Centre for Applied Autonomous Sensor Systems (AASS) focuses on the perceptual and cognitive capabilities of autonomous systems.


8. Misc. posters, upcoming events, etc.

The AIME 2019 conference will be held in Poznan, Poland on June 26-29, 2019
Nanjing University AI research

AI and VR, Taichung, Taiwan 2018
Journal of Artificial Intelligence Research

9. Conclusion.

The end of a wonderful conference. With many memorable talks.
And let's not forget... (also) many memorable conversations with many great (poster) presenters.

Obviously, I'm already looking forward to my next visit to the Ijcai conference!

The Swedish Academy, Prize Awarder for the Nobel Prize in Literature

Certainly, there is a Nobel prize here, somewhere ...

Pictures from Stockholm, July 16-19, 2018.

Conference Venue.
Stockholmsmässan, Stockholm, Sverige.

Time to pack up & say goodbye.
And, perhaps, meet again in Macau in 2019...

IJCAI 2019 in Macau


Aamas 2014 | Areadne 2014
Nasslli 2012 | WCE 2013 | CogSci 2013 | CogSci 2014
About | Site Index | Post Index | NeuroSky | Connections | Future Minds | Mind Design | Contact Info
© August 2018 Simon Laub - - -
Original page design - August 10th 2018. Simon Laub - Aarhus, Denmark, Europe.