I had the great pleasure of taking part in Humanoids 2024 (The 2024 IEEE-RAS International Conference on Humanoid Robots).
I tried to follow as much as possible, but, well, these notes are, of course, in
no way, shape or form complete...
Rather, these notes were written on conference nights, as my way of
keeping track of the events of the day, and of storing links and references for later reading.
Below you will find impressions from the conference, and links for further reading.
Humanoids 2024. Exhibition, competitions and talks.
Booster supports text-to-speech,
ASR (automatic speech recognition), and YOLO.
And much more...
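Booster's own SDK isn't covered in these notes; just to illustrate the YOLO part, here is a minimal detection sketch using the open-source ultralytics package (the model choice and the image file name are my own assumptions, not anything Booster-specific):

from ultralytics import YOLO

# Load a small pretrained COCO model (downloaded automatically on first use).
model = YOLO("yolov8n.pt")

# "camera_frame.jpg" is a hypothetical image grabbed from the robot's camera.
results = model("camera_frame.jpg")

# Print class name, confidence and bounding box for each detection.
for box in results[0].boxes:
    cls_id = int(box.cls[0])
    print(results[0].names[cls_id], float(box.conf[0]), box.xyxy[0].tolist())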
IIT – Istituto Italiano di Tecnologia.
And a ''vacuum cleaner''...
See: ''Robot Dog Cleans Up Beaches With Foot-Mounted Vacuums'' [1].
''The experience of being, or having, a self—contained within our bodies and able to act in the world—comes naturally to all of us as human beings, along with a feeling of being the same self from day-to-day and of seeing others as also being selves''.
''...The authors suggest the possibility of generating in robots some of the processes which contribute to the 'sense of self' in humans''.
''...Current research studies suggest that in humans a sense of self develops as the brain's best explanation of its sensory experience, and its own role in generating those sensory signals. A robot, being a physically embodied actor, is a suitable platform to test those theories''.
''...For instance, by age 4, children have a sense of themselves as existing through time, and of other people as also having selves. These aspects of self are beginning to be investigated in robots by creating memory systems for robots that are similar to human autobiographical memory. However, this work is at an early stage; current robots do not have awareness of themselves as persisting from day to day, nor are they aware of others (humans or robots) as being selves'' [2].
Inria: ''Equipping machines with autonomous perception and action capabilities, capable of evolving and interacting in changing environments, regardless of their particularities'' [3].
Talos: ''If it has to lift its foot, Talos will calculate the optimum position for all of the joints of its 32 motors in order to identify the optimum movement'' [4].
Amphitheatre. Prouvé Convention Centre, Nancy.
Projections:
''The global market for humanoid robots is estimated to grow by 70% per year through 2035'' [5].
''Predicted: By 2027 more than 10,000 humanoid robots will be shipped per year'' [6].
''Robotic Transformer 2 (RT-2) is a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalised instructions for robotic control'' [7].
''The perception-action cycle is this continuous flow of information and action between the brain and the world around it. On and on it goes: Sense, predict, act, adjust'' [8].
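Just to make that cycle concrete for myself: a bare-bones version of the loop could look like the sketch below. This is not any particular robot's control stack; every function is a hypothetical placeholder.

import time

def sense():
    # Read sensors (cameras, IMU, joint encoders, ...); here a fake reading.
    return {"obstacle_distance": 1.2}

def predict(observation):
    # Predict the outcome and choose an action; here a trivial rule.
    return "slow_down" if observation["obstacle_distance"] < 1.0 else "keep_going"

def act(action):
    # Send the chosen command to the actuators; here just print it.
    print("executing:", action)

def adjust(observation, action):
    # Compare the prediction with the new observation and correct the model.
    pass

for _ in range(3):  # a few iterations of the sense-predict-act-adjust cycle
    obs = sense()
    action = predict(obs)
    act(action)
    adjust(obs, action)
    time.sleep(0.1)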
Wandercraft. Hands-free walking exoskeleton [9]. Paris 2024: ''Kevin Piette carried the Olympic Torch'' (with the help of the exoskeleton) [10].
Fourier Robotics [11].
''Fourier is working closely with companies and organizations to
explore the potentials of humanoid robotics in various fields''. Video: [12].
''Security Robot Dogs provide the ultimate advanced security - agility, intelligence & HD cameras for night vision & thermal imaging'' [15]. Ghost Robotics [16].
TOrque-controlled humanoid RObot (TORO) is an advanced humanoid robot used as a research platform to study bipedal walking and autonomous behaviors combining manipulation and locomotion (Creator: DLR, German Aerospace Center). See DLR internet pages for a short introduction to the walking and multi-contact balancing strategies used for TORO [21], [22].
Aie Robotics [23].
The company develops humanoid robots designed for widespread use, aiming to be a leader in the field. It focuses on robots that resemble and mimic the human form, leveraging its research and technology to enable clients to make humanoid robots more accessible and applicable to everyday use. South Korea [24].
PAL Robotics [25].
TALOS is an advanced humanoid designed to perform complex tasks in research and industrial settings...
''Assistive robots are designed to help people with disabilities or limitations in their daily activities. These robots can support people with tasks such as mobility, communication, and self-care, among others. There are different types of assistive robotic devices, each designed to address specific needs and challenges'' [26].
Musculoskeletal Humanoids.
Jouhou System Kougaku (JSK) Laboratory.
And then, a
very interesting talk by Kento Kawaharazuka [27],
JSK Robotics Lab (University of Tokyo):
''History and Future of Tendon-driven Musculoskeletal Humanoids'' [28].
We have developed various humanoids, such as those for daily life support and disaster rescue, musculoskeletal humanoids that mimic the human body, and flying humanoids. In this keynote talk, I will focus on tendon-driven musculoskeletal humanoids and introduce the series of humanoids we have developed, such as Kenta, Kotaro, Kojiro, Kenzoh, Kenshiro, Kengoro, and Musashi [29].
Kenta. A whole-body tendon-driven humanoid with flexible spine [30].
Jouhou System Kougaku Laboratory.
''Robots Can Multi-task Too: Integrating a Memory Architecture and LLMs for Enhanced Cross-Task Robot Action Generation''.
Our proposed dual-layered architecture features two LLMs, utilizing their complementary skills of reasoning and following instructions, combined with a memory model inspired by human cognition. Our results show a significant improvement... demonstrating the potential of integrating memory with LLMs for combining the robot's action and perception for adaptive task execution [40].
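I haven't seen the authors' code, but as a purely structural sketch, a two-LLM setup with a small memory buffer could be wired together roughly like this (call_llm, Memory and the prompts are placeholders of mine, not the paper's implementation):

from collections import deque

def call_llm(role: str, prompt: str) -> str:
    # Placeholder: a real system would call an actual LLM endpoint here.
    return f"[{role} output for: {prompt[:40]}...]"

class Memory:
    # Keeps the most recent task/observation entries, as a stand-in for the
    # human-cognition-inspired memory model described in the paper.
    def __init__(self, capacity: int = 8):
        self.entries = deque(maxlen=capacity)

    def add(self, entry: str):
        self.entries.append(entry)

    def as_context(self) -> str:
        return "\n".join(self.entries)

def step(task: str, observation: str, memory: Memory) -> str:
    # LLM 1 ("reasoner") plans the next step given memory and observation.
    plan = call_llm("reasoner",
                    f"Memory:\n{memory.as_context()}\n"
                    f"Task: {task}\nObservation: {observation}\nPlan the next step.")
    # LLM 2 ("executor") turns the plan into one concrete robot instruction.
    command = call_llm("executor", f"Turn this plan into one robot action: {plan}")
    memory.add(f"task={task}; obs={observation}; action={command}")
    return command

mem = Memory()
print(step("fetch the red cup", "cup visible on the table", mem))
print(step("place it in the sink", "gripper holding the cup", mem))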
Robot Software:
On the homepage of Inria (the French national institute for research in digital science and technology) [41] it says:
Today, the idea of robots being capable of completely autonomous movement in a complex, real-life environment such as a hospital seems inconceivable...
...
Our aim is to develop software that can be adapted to any robot, combining enhanced environmental perception, rapid decision-making and machine learning. In other words, we want to lay the algorithmic foundations for a genuine artificial intelligence of motion [42].
Another lookup on Inria's homepage gives:
Pinocchio, the software that brings robots to life...
...
A humanoid robot travelling from point A to point B will create intermittent contacts with its environment...
...
This is where model predictive control comes in, and more specifically Pinocchio, a calculation engine that is highly effective when it comes to describing complex interaction phenomena [43].
More about Pinocchio can be found here: [44], [45].
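To get a feel for what Pinocchio computes, here is a minimal sketch with its Python bindings, using the sample humanoid model that ships with the library; forward kinematics plus inverse dynamics (RNEA) is just one of the many algorithms the engine provides, and only meant as an illustration here:

import numpy as np
import pinocchio as pin

# Build the sample humanoid model that ships with Pinocchio (no URDF needed).
model = pin.buildSampleModelHumanoid()
data = model.createData()

q = pin.neutral(model)      # a neutral joint configuration
v = np.zeros(model.nv)      # joint velocities
a = np.zeros(model.nv)      # joint accelerations

# Forward kinematics: compute the placement of every joint for configuration q.
pin.forwardKinematics(model, data, q)

# Inverse dynamics (recursive Newton-Euler): torques needed to realise (q, v, a).
tau = pin.rnea(model, data, q, v, a)
print("joint torques:", tau)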