
Tech • AI • Crypto
The humanoid robotics and artificial intelligence sector has reached a major milestone with the simultaneous arrival of new advanced robots, innovative materials, and AI models capable of unprecedented adaptation and autonomy.
Agibbot unveils a complete robotics ecosystem in Shanghai
At its 2026 conference, Agibbot presented a unique architecture dubbed "one robotic body, three intelligences," which integrates movement, manipulation, and interaction in a unified system. The approach aims to move beyond one-off demonstrations and deploy, at scale, robots capable of real-time learning and adaptation through continuous collection of field data.
Humanoid robots and human collaboration
The A3 model, a 173 cm, 55 kg humanoid built with lightweight materials (magnesium, titanium), offers 10 hours of battery life and a 10-second battery swap. It operates in coordinated fleets of up to 100 units with centimeter-level precision thanks to ultra-wideband technology, targeting environments such as retail and education. The G2 Air stands out as a collaborative assistant: a compact mobile arm with 7 degrees of freedom, able to work alongside humans at speeds of 1.5 m/s in confined spaces.
High-precision robotic manipulation
The Omnihand 3 Ultra T provides more than 22 degrees of freedom and integrates a tendon-driven system with three-dimensional tactile sensors and a palm camera, keeping reaction latency under 0.3 seconds. More rugged versions complement this hand, notably the Omni Picker 3, capable of a 140-newton grip force and rated for more than one million cycles.
Autonomy and all-terrain mobility
The D2 Max quadruped positions itself as the first Level 3 autonomous all-terrain robot, designed for critical missions such as security, agriculture, and rescue. It operates without remote control, considerably widening its field of application in real-world conditions.
MIGO: a robot-independent data collection system
This system lets humans generate synchronized multimodal data (vision, motion, touch) without using any specific robot, cutting costs and multiplying the quantity and diversity of data available for AI training.
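The source does not describe MIGO's data format, but the core idea of synchronizing multimodal streams can be sketched in a few lines. Everything below (the `Frame` type, the stream rates, the `synchronize` helper) is an illustrative assumption, not MIGO's actual API:

```python
from bisect import bisect_left
from dataclasses import dataclass

@dataclass
class Frame:
    t: float        # capture timestamp in seconds
    payload: tuple  # sensor reading; shape depends on the modality

def nearest(stream, t):
    """Pick the frame in a time-sorted stream closest to timestamp t."""
    times = [f.t for f in stream]
    i = bisect_left(times, t)
    candidates = stream[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda f: abs(f.t - t))

def synchronize(vision, motion, tactile):
    """Align motion and tactile frames to each vision frame's timestamp."""
    return [(v, nearest(motion, v.t), nearest(tactile, v.t)) for v in vision]

# Fake streams at assumed rates: 30 Hz camera, 100 Hz pose, 50 Hz touch.
vision  = [Frame(k / 30,  ("img",   k)) for k in range(3)]
motion  = [Frame(k / 100, ("pose",  k)) for k in range(10)]
tactile = [Frame(k / 50,  ("force", k)) for k in range(5)]
triples = synchronize(vision, motion, tactile)  # one (vision, pose, touch) tuple per camera frame
```

Nearest-timestamp matching is the simplest alignment strategy; a production pipeline would more likely interpolate poses and handle clock drift between sensors.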
Multimodal AI foundations and self-learning
Eight models power this ecosystem: BFM imitates human movements from a single video, GCFM converts text, audio, and video into context-aware motion, goto plans the execution of complex tasks, GE2 generates strategic virtual environments, while SOP lets robot fleets learn continuously. The Weda Omni model fuses vision, sound, language, and action for natural communication.
New materials for self-healing artificial muscles
Korean researchers have created an innovative robotic muscle based on a dielectric elastomer and a phase-transitional ferrofluid. The muscle reconfigures itself in real time under heat or magnetic fields, can switch functions, and repairs itself when damaged, recovering 91% of its performance after multiple cycles. This material opens the way to more durable, flexible, and recyclable robotic components.
Humanoid robots surpass human records at the Beijing half marathon
More than 100 teams entered autonomous robots in the 21 km race, with spectacular results. The winner finished in 50 minutes and 26 seconds, beating the human record set a month earlier. Innovations such as long legs matched to human biomechanics and liquid-cooling systems borrowed from smartphones made the feat possible, backed by strong support from the Chinese authorities.
Toward a generalist robot brain with PI 0.7
The startup Physical Intelligence presented PI 0.7, a multimodal AI model able to carry out novel tasks through "compositional generalization." By combining text, images, and context, this robot brain adapts its skills to different robots and environments without task-specific training for each new job. Tests show feats such as operating unfamiliar appliances or folding laundry without dedicated training data.
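As a toy illustration of what compositional generalization means (not PI 0.7's actual architecture, which is not public), one can model skills as independently learned state transitions and chain them into a plan that was never trained end to end. All skill names, state keys, and the plan format below are invented:

```python
# Toy sketch of compositional generalization: three "skills" learned
# independently are chained into a task never trained as a whole.
def grasp(state, obj):
    return {**state, "holding": obj}

def move_to(state, place):
    return {**state, "robot_at": place}

def release(state, _):
    held = state.pop("holding", None)
    return {**state, held: state["robot_at"]} if held else state

SKILLS = {"grasp": grasp, "move_to": move_to, "release": release}

def execute(plan, state):
    """Fold a plan (a list of skill-name/argument pairs) over the world state."""
    for skill, arg in plan:
        state = SKILLS[skill](dict(state), arg)
    return state

# A "novel" laundry task assembled purely from familiar pieces:
plan = [("move_to", "table"), ("grasp", "shirt"),
        ("move_to", "basket"), ("release", None)]
final = execute(plan, {"robot_at": "dock"})
print(final)  # {'robot_at': 'basket', 'shirt': 'basket'}
```

The point of the sketch is that no (plan, outcome) pair was ever stored: only the individual transitions exist, and novelty comes from their composition.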
A shift toward more adaptable, autonomous, and physical robots
Taken together, these advances signal a convergence of greater physical capability, self-repairing materials, and general-purpose AI. This synergy heralds a new generation of robots designed not just for demonstrations but for broad real-world deployment in diverse, complex environments.
These developments illustrate an unprecedented acceleration in robotics, driven by technical innovation, tighter integration, and increasingly general artificial intelligence. The moment marks a turning point toward machines that can learn and adapt both physically and intellectually, opening the way to new industrial, social, and human applications.
Agibbot rolled out a full new wave of humanoid robots and AI models built for actual deployment. Researchers created an artificial muscle that can reshape itself and heal after damage. Humanoid robots in Beijing just ran a half marathon faster than human world record pace. And a new robot brain called PI 0.7 is already showing signs of handling tasks it was never specifically trained to do. This space is moving fast now, and some of these updates are honestly kind of wild. So, let's start with Agibbot. At their 2026 partner conference in Shanghai, Agibbot introduced a full stack of new robotic systems and AI models, all built around what they call a "one robotic body, three intelligences" architecture. The idea here is pretty straightforward, though the execution is not. Instead of treating movement, manipulation, and interaction as separate problems, they're building a unified system where all three are tightly integrated and continuously improved through real-world data. And that matters because robotics has been stuck for years in a phase where companies could demonstrate impressive capabilities, though actually deploying robots at scale in real environments remained a completely different challenge. Agibbot is clearly trying to close that gap. One of the most interesting platforms they showed is the A3 humanoid. It stands about 173 cm tall and weighs just 55 kg. Built using lightweight materials like magnesium, titanium, and TPU, its power-to-weight ratio sits at 0.218 kW per kg, which is pretty high for something designed to operate for up to 10 hours straight. It also supports a 10-second battery swap, which is a small detail, though it matters a lot in real deployments where downtime becomes a problem. What makes this robot different is not just the hardware. It's designed for coordinated multi-robot systems, using ultra-wideband positioning to synchronize up to 100 robots at once with centimeter-level accuracy.
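The video doesn't say how the UWB positioning works internally, but centimeter-level localization from ultra-wideband ranges is classically done by trilateration against fixed anchors. A minimal 2D sketch, with made-up anchor coordinates and assuming exact range measurements:

```python
import math

def trilaterate(anchors, dists):
    """2D position from ranges to three fixed anchors. Subtracting the
    circle equations pairwise yields a 2x2 linear system (Cramer's rule)."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# Hypothetical anchors at three corners of a 10 m workspace.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.2, 4.7)
dists = [math.dist(true_pos, a) for a in anchors]  # perfect ranges for the demo
print(trilaterate(anchors, dists))  # → (3.2, 4.7) up to float error
```

Real UWB deployments use noisy ranges from more than three anchors and solve a least-squares version of the same system, which is how centimeter-level accuracy becomes plausible.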
So, you're not just looking at a single humanoid performing tasks, you're looking at fleets that can coordinate in real time. They also packed it with interaction features like omnidirectional microphone arrays and tactile sensing in the shoulders, which suggests this thing is being positioned for environments like retail, education, and entertainment, where human interaction is part of the job. Then there's the G2 Air, which is a very different kind of system. This one is more focused on human-robot collaboration. It's a compact mobile manipulator with a single arm, 7 degrees of freedom, a payload of about 3 kg, and a reach of roughly 750 to 800 mm. It can move at speeds of at least 1.5 m/s and operate in spaces under 800 mm wide with zero-radius turning. That might sound like a spec sheet, though the key idea here is that it's designed to work alongside humans rather than replace them. It can perform structured tasks in retail, logistics, hospitality, and light industrial workflows while also collecting data during operation. That's a big deal because traditionally data collection and task execution have been separate processes. Here, they're combined into one continuous loop, so every time the robot does something, it's also learning. Then you've got the Omnihand 3 Ultra T, which is probably one of the most technically dense components in this whole lineup. This is a dexterous robotic hand with over 22 degrees of freedom, plus an additional three in the wrist. It uses a tendon-driven system, weighs around 500 g, and has a load-to-weight ratio of 10:1. It also includes full-hand three-dimensional tactile sensing and an integrated palm camera with a response time under 0.3 seconds. This is the kind of hardware that starts to close the gap between robotic manipulation and human-level dexterity.
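Two quick sanity checks on the quoted ratios, assuming the usual readings of power-to-weight (total output power divided by mass) and load-to-weight (payload divided by the hand's own mass). These are back-of-envelope implications of the specs, not vendor-stated numbers:

```python
# A3 humanoid: 55 kg at a claimed 0.218 kW/kg implies total actuator power.
a3_mass_kg = 55
power_to_weight_kw_per_kg = 0.218
a3_total_power_kw = a3_mass_kg * power_to_weight_kw_per_kg
print(f"A3 implied total power: ~{a3_total_power_kw:.1f} kW")  # ~12.0 kW

# Omnihand 3 Ultra T: ~500 g hand at a 10:1 load-to-weight ratio.
hand_mass_g = 500
load_to_weight = 10
hand_payload_kg = hand_mass_g * load_to_weight / 1000
print(f"Omnihand implied payload: ~{hand_payload_kg:.1f} kg")  # ~5.0 kg
```

So a ~5 kg payload from a half-kilogram hand is what makes the 10:1 figure notable: that ratio is well above most commercial dexterous hands.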
And alongside it, they introduced variants like the Omni Picker 3 Gripper, which can apply 140 newtons of force and is rated for 1 million cycles, and the Omnihand 3 Light for more rugged environments. Now, on the mobility side, there's the D2 Max, which they describe as the first all-terrain Level 3 autonomous quadruped robot. This thing is designed for mission-critical scenarios like inspection, security, emergency response, agriculture, and logistics. The key shift here is autonomy. Instead of being remotely controlled, it operates as an intelligent system that can navigate and perform tasks on its own.
And then there's something that might be even more important long term, which is the MIGO system. This is a body-free data collection platform. Instead of relying on robots to gather training data, it allows humans to capture multimodal data directly using tools like MIGO Gripper and MIGO View. That data includes vision, motion, and tactile inputs, all synchronized and processed through their MIGO engine. So basically, they're decoupling data generation from robotic hardware, which massively reduces cost and increases scalability. And all of this hardware is paired with eight foundational AI models that power the three intelligence layers. On the locomotion side, you've got models like BFM, which allows robots to imitate human movements from a single demonstration or short video, even in noisy environments. And then GCFM, which can take inputs like text, audio, or video, and convert them into real-time, context-aware motion. On the manipulation side, they introduced things like goto, which uses something called action chain of thought to plan and execute long tasks, and GE2, which creates interactive virtual environments for strategy testing. There's also Genie Sim 3.0, which can generate digital twins of real-world environments from natural language, and SOP, which allows robot fleets to continuously learn from real-world deployment. And then for interaction, there's Weda Omni, which is a multimodal model that combines vision, audio, language, and action to enable more natural human-robot communication. So when you step back, what Agibbot is building here is not just a robot. It's a full ecosystem, including operating systems like LinkUS, personality and memory systems like Link Soul, no-code behavior tools like Linkcraft, and a full development pipeline through Genie Studio. And they're already deploying hundreds of robots across real projects. Now, at the same time, on a completely different front, researchers at Seoul National University just developed something that could fundamentally change how robots are built at the material level. They created a new type of artificial muscle using a dielectric elastomer actuator combined with a phase-transitional ferrofluid material. At room temperature, this material behaves like a solid. Though when exposed to heat or magnetic fields, it becomes fluid-like, and that allows the internal electrode structure of the actuator to be reshaped even after it's already been built. That's a big shift because traditional actuators are fixed. Once you manufacture them, they can only perform one type of motion. This new system can actually reconfigure itself in real time. The electrodes inside can split, merge, and move in three dimensions while the device is operating. So, a single actuator can switch between different functions like bending, expanding, or even bridging electrical circuits. And on top of that, it's self-healing.
If part of the electrode is damaged, the surrounding material can liquefy and reconnect the circuit, allowing the system to keep functioning instead of failing completely. They tested the recyclability as well, and even after multiple reuse cycles, the system maintained about 91% recovery in performance. So now you're looking at robotic components that can adapt, repair themselves, and be reused instead of discarded. That has implications for everything from industrial robots to wearable devices and flexible electronics. Then you've got another signal coming out of China, and this one is more visible. At the Beijing Half Marathon, humanoid robots didn't just participate, they actually outperformed human runners. Last year, the best robot finished the race in about 2 hours and 40 minutes, which was nowhere near human performance. This year, things changed dramatically. More than 100 teams participated, up from around 20 the previous year. Nearly half of the robots navigated the 21 kilometer course autonomously without remote control. And the top performers didn't just improve slightly, they beat human world record level times. The winning robot, developed by Honor, finished in 50 minutes and 26 seconds. That's faster than the human half-marathon world record set by Jacob Kiplimo just a month earlier. Another robot from the same group reportedly crossed in 48 minutes and 19 seconds, depending on how the event scoring was calculated. These robots were designed with features like long legs, around 90 to 95 cm, to match elite human biomechanics, along with liquid cooling systems adapted from smartphone technology. And while running itself might seem like a niche application, the underlying improvements translate into things like better structural reliability, thermal management, and motion control, all of which matter in industrial settings.
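To put those finishing times in perspective, here are the average speeds over the half-marathon distance. The 56:42 used for Kiplimo's record is the widely reported mark, not a figure stated in the video:

```python
def avg_speed_kmh(dist_km, minutes, seconds):
    """Average speed in km/h for a given finishing time."""
    return dist_km / ((minutes * 60 + seconds) / 3600)

HALF = 21.0975  # half-marathon distance in km

robot_win = avg_speed_kmh(HALF, 50, 26)  # winning robot, 50:26
robot_alt = avg_speed_kmh(HALF, 48, 19)  # disputed faster crossing, 48:19
human_wr  = avg_speed_kmh(HALF, 56, 42)  # Kiplimo's reported record, 56:42

print(f"robot (50:26): {robot_win:.1f} km/h")  # ≈ 25.1 km/h
print(f"robot (48:19): {robot_alt:.1f} km/h")  # ≈ 26.2 km/h
print(f"human record:  {human_wr:.1f} km/h")   # ≈ 22.3 km/h
```

So the claimed robot pace works out to roughly 25 km/h sustained for over 21 km, about 12% faster than the elite human mark.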
China is clearly pushing hard in this space with government support, infrastructure investment, and public demonstrations like this one, including events like the Spring Festival Gala, where humanoid robots performed complex martial arts routines. Though there's still a gap between these demonstrations and real industrial deployment, especially when it comes to dexterity, perception, and handling complex environments. And that's where the last piece comes in. A startup called Physical Intelligence just introduced a model called PI 0.7, which is starting to look like an early version of a general-purpose robot brain. Instead of training robots for specific tasks one at a time, this model is trained on a mix of data from different robots, human demonstrations, and autonomous interactions. And it uses multimodal prompts, meaning it can take text instructions, visual inputs, and contextual parameters all at once. What's interesting is that it shows early signs of something called compositional generalization. That basically means it can take skills it already learned and recombine them to solve new problems it hasn't seen before. So instead of needing a new data set for every single task, it can adapt. In testing, it was able to do things like use unfamiliar kitchen appliances or perform tasks like folding laundry without having any specific training data for those exact actions. And with more structured guidance, like step-by-step instructions or visual subgoals, its performance improved even further. It also generalizes across different robots and environments better than previous systems, which suggests that we're moving toward models that can transfer knowledge instead of being locked into specific hardware setups. There are still limitations. It needs detailed guidance for complex multi-step tasks, and there's no standardized benchmarking yet, so independent validation is still an open question. Though the direction is clear.
Instead of building separate models for every task, the field is moving toward systems that can learn once and apply that knowledge across many different situations. And when you connect that with everything else happening, from Agibbot's full-stack deployment systems to self-healing artificial muscles and robots outperforming humans in physical tasks, it starts to form a pretty clear trajectory. Robots are becoming more capable physically, more adaptable at the material level, and more general in their intelligence. And those three layers are starting to align at the same time. Anyway, that's it for this one. If you've been following robotics closely, you probably see where this is heading. Let me know what you think. Thanks for watching, and I'll catch you in the next one.