RIEM News

Articles tagged with "reinforcement-learning"

  • Sweater-wearing humanoid robot gets brain upgrade to clean, cook solo

1X Technologies has introduced Redwood, an advanced AI model powering its humanoid robot NEO, designed to autonomously perform complex household tasks such as laundry, door answering, and home navigation. Redwood is a 160 million-parameter vision-language model that integrates perception, locomotion, and control into a unified system running onboard NEO Gamma’s embedded GPU. This integration enables full-body coordination, allowing NEO to simultaneously control arms, legs, pelvis, and walking commands, which enhances its ability to brace against surfaces, handle higher payloads, and manipulate objects bi-manually. Redwood’s training on diverse real-world data, including both successful and failed task demonstrations, equips NEO with strong generalization capabilities to adapt to unfamiliar objects and task variations, improving robustness and autonomy even in offline or low-connectivity environments. Complementing Redwood, 1X Technologies has developed a comprehensive Reinforcement Learning (RL) controller that expands NEO’s mobility and dexterity for navigating real home environments.

    robot, humanoid-robot, AI-model, robotics-autonomy, motion-control, mobile-manipulation, reinforcement-learning
  • Chinese firm eases humanoid, legged robot development with new suite

EngineAI Robotics, a Shenzhen-based Chinese firm, has launched EngineAI RL Workspace, an open-source, modular reinforcement learning platform tailored specifically for legged robotics development. This comprehensive suite includes dual frameworks—a training code repository and a deployment code repository—that together provide an end-to-end solution from algorithm training to real-world application. The platform is designed to enhance development efficiency through reusable logic structures, a unified single-algorithm executor for both training and inference, and decoupled algorithms and environments that enable seamless iteration without interface changes. The EngineAI RL Workspace integrates the entire development pipeline with four core components: environment modules, algorithm engines, shared toolkits, and integration layers, each independently encapsulated to facilitate multi-person collaboration and reduce communication overhead. Additional features include dynamic recording systems for capturing training and inference videos, intelligent version management to maintain experiment consistency, and detailed user guides to support rapid onboarding. At CES 2025, EngineAI showcased humanoid robots like the SE01.

    robotics, humanoid-robots, reinforcement-learning, legged-robots, robot-development, AI-in-robotics, modular-robotics-platform
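
    The decoupling the summary describes—algorithms and environments talking only through a fixed interface, so either side can be iterated without changing the other—can be sketched as follows. All class and method names here are illustrative assumptions, not the actual EngineAI RL Workspace API.

    ```python
    # Minimal sketch of a decoupled algorithm/environment pattern.
    # The algorithm depends only on the env's reset/step interface,
    # so environments and algorithms can be swapped independently.
    from dataclasses import dataclass
    import random

    @dataclass
    class Transition:
        obs: float
        action: int
        reward: float
        next_obs: float

    class LeggedEnv:
        """Toy stand-in environment exposing a fixed reset/step interface."""
        def reset(self) -> float:
            self.state = 0.0
            return self.state

        def step(self, action: int):
            # Action 1 pushes the state up, action 0 pushes it down.
            self.state += 1.0 if action == 1 else -1.0
            reward = 1.0 if abs(self.state) < 5 else -1.0
            return self.state, reward

    class RandomAlgorithm:
        """Placeholder algorithm; a real one would learn from the batch."""
        def act(self, obs: float) -> int:
            return random.randint(0, 1)

        def update(self, batch: list) -> None:
            pass  # a real algorithm would take a gradient step here

    def run_episode(env, algo, horizon=10) -> list:
        """Collect one episode through the shared interface, then update."""
        obs = env.reset()
        batch = []
        for _ in range(horizon):
            action = algo.act(obs)
            next_obs, reward = env.step(action)
            batch.append(Transition(obs, action, reward, next_obs))
            obs = next_obs
        algo.update(batch)
        return batch

    batch = run_episode(LeggedEnv(), RandomAlgorithm())
    print(len(batch))  # one transition per step of the horizon
    ```

    Because `run_episode` only touches `reset`, `step`, `act`, and `update`, replacing `LeggedEnv` with a physics simulator or `RandomAlgorithm` with a trained policy requires no interface changes—the property the platform advertises.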
  • Chinese firm achieves agile, human-like walking with AI control

Chinese robotics startup EngineAI has developed an advanced AI-driven control system that enables humanoid robots to walk with straight legs, closely mimicking natural human gait. This innovative approach integrates human gait data, adversarial learning, and real-world feedback to refine robot movement across diverse environments, aiming to achieve more energy-efficient, stable, and agile locomotion. EngineAI’s lightweight humanoid platform, the PM01, has demonstrated impressive agility, including successfully performing a frontflip and executing complex dance moves from the film Kung Fu Hustle, showcasing the system’s potential for fluid, human-like motion. The PM01 robot features a compact, lightweight aluminum alloy exoskeleton with 24 degrees of freedom and a bionic structure that supports dynamic movement at speeds up to 2 meters per second. It incorporates advanced hardware such as an Intel RealSense depth camera for visual perception and an Intel N97 processor paired with an NVIDIA Jetson Orin module for high-performance processing and neural network training. This combination allows the PM01 to interact effectively with its environment and perform intricate tasks, making it a promising platform for research into human-robot interaction and agile robotic assistants. EngineAI’s work parallels other Chinese developments like the humanoid robot Adam, which uses reinforcement learning and imitation of human gait to achieve lifelike locomotion. Unlike traditional control methods such as Model Predictive Control used by robots like Boston Dynamics’ Atlas, EngineAI’s AI-based framework emphasizes adaptability through real-world learning, addressing challenges in unpredictable environments. While still in the research phase, these advancements mark significant progress toward next-generation humanoid robots capable of natural, efficient, and versatile movement.

    robot, humanoid-robot, AI-control, gait-control, reinforcement-learning, robotics-platform, energy-efficient-robotics
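
    The combination of human gait data and adversarial learning described above is commonly realized as adversarial imitation (GAIL-style): a discriminator is trained to tell reference gait samples from policy samples, and its output becomes a shaping reward. The sketch below is a toy illustration of that idea with a hand-rolled logistic discriminator over a single scalar feature; it is an assumption about the general technique, not EngineAI's actual system, which would train neural networks on full motion-capture data.

    ```python
    # Toy GAIL-style reward shaping: train a discriminator to separate
    # "human gait" samples from early policy samples, then reward the
    # policy for states the discriminator believes look human.
    import math
    import random

    random.seed(0)

    # Reference "human gait" feature (e.g. a joint angle) near 0.2 rad.
    expert = [random.gauss(0.2, 0.05) for _ in range(100)]
    # Early policy samples: off-center and more spread out.
    policy = [random.gauss(0.6, 0.2) for _ in range(100)]

    def train_discriminator(expert, policy, steps=500, lr=0.5):
        """Logistic regression D(x) ~ P(x is expert), fit by SGD."""
        w, b = 0.0, 0.0
        data = [(x, 1.0) for x in expert] + [(x, 0.0) for x in policy]
        for _ in range(steps):
            x, label = random.choice(data)
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            grad = p - label          # gradient of the logistic loss
            w -= lr * grad * x
            b -= lr * grad
        return w, b

    w, b = train_discriminator(expert, policy)

    def imitation_reward(x):
        """GAIL-style reward: large when D(x) is close to 1 (expert-like)."""
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        return -math.log(max(1.0 - p, 1e-8))

    # Expert-like motion should earn more reward than off-distribution motion.
    print(imitation_reward(0.2) > imitation_reward(0.6))
    ```

    In a full pipeline this reward would be added to task rewards (speed, energy, stability) inside an RL loop, and the discriminator would be retrained as the policy's gait distribution shifts toward the human reference.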
  • Congratulations to the #AAMAS2025 best paper, best demo, and distinguished dissertation award winners - Robohub

    The 24th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2025), held from May 19-23 in Detroit, recognized outstanding contributions in the field with awards for best paper, best demo, and distinguished dissertation. The Best Paper Award went to the team behind "Soft Condorcet Optimization for Ranking of General Agents," led by Marc Lanctot and colleagues. Several other papers were finalists, covering topics such as commitments in BDI agents, curiosity-driven partner selection, reinforcement learning for vehicle-to-building charging, and drone delivery systems. The Best Student Paper Award was given to works on decentralized planning using probabilistic hyperproperties and large language models for virtual human gesture selection. In addition, the Blue Sky Ideas Track honored François Olivier and Zied Bouraoui for their neurosymbolic approach to embodied cognition, while the Best Demo Award recognized a project on serious games for ethical preference elicitation by Jayati Deshmukh and team. The Victor Lesser Distinguished Dissertation Award, which highlights originality, impact, and quality in autonomous agents research, was awarded to Jannik Peters for his thesis on proportionality in selecting committees, budgets, and clusters. Lily Xu was the runner-up for her dissertation on AI decision-making for planetary health under conditions of low-quality data. These awards underscore the innovative research advancing autonomous agents and multiagent systems.

    robot, autonomous-agents, multiagent-systems, drones, reinforcement-learning, energy-storage, AI
  • Tesla’s Optimus robot takes out trash, vacuums, cleans like a pro

    robot, Tesla, Optimus, AI, automation, humanoid-robot, reinforcement-learning
  • Watch humanoid robots clash in a tug of war, pull cart, open doors

    robot, humanoid, reinforcement-learning, control-system, force-aware, loco-manipulation, CMU
  • Robot Talk Episode 121 – Adaptable robots for the home, with Lerrel Pinto

    robot, machine-learning, adaptable-robots, robotics, artificial-intelligence, autonomous-machines, reinforcement-learning
  • Shlomo Zilberstein wins the 2025 ACM/SIGAI Autonomous Agents Research Award

    robot, autonomous-agents, multi-agent-systems, decision-making, reinforcement-learning, research-award, AI