Articles tagged with "artificial-intelligence"
How Much Energy Does AI Use? The People Who Know Aren’t Saying
The article discusses the opaque nature of energy consumption data related to AI, particularly large language models like ChatGPT. OpenAI CEO Sam Altman claimed that an average ChatGPT query uses about 0.34 watt-hours of energy, roughly equivalent to a high-efficiency lightbulb running for a couple of minutes. However, experts criticize this figure for lacking transparency and context, such as whether it includes energy used for training models, server cooling, or image generation. OpenAI has not provided detailed disclosures explaining how this number was calculated, leading to skepticism among researchers like Sasha Luccioni from Hugging Face, who emphasizes the need for more comprehensive environmental transparency in AI. The article highlights a broader issue: most AI models in use today do not disclose their environmental impact, with 84% of large language model traffic in May 2025 coming from models with zero environmental disclosure. This lack of transparency hampers efforts to accurately assess AI’s carbon footprint, especially as AI usage grows rapidly. Misleading…
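Altman’s lightbulb comparison is easy to sanity-check. A minimal sketch, assuming a 10 W high-efficiency LED bulb (the wattage is our assumption; OpenAI did not specify one):

```python
# Back-of-envelope check of the claimed per-query figure.
query_wh = 0.34        # Altman's claimed energy per ChatGPT query, in watt-hours
bulb_watts = 10.0      # assumed high-efficiency LED bulb (not from OpenAI)

# Energy (Wh) divided by power (W) gives hours; convert to minutes.
minutes = query_wh / bulb_watts * 60
print(round(minutes, 1))  # -> 2.0, i.e. "a couple of minutes"
```

The arithmetic holds for the headline number, but as the critics note, it says nothing about training, cooling, or image generation.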
Tags: energy, artificial-intelligence, AI-energy-consumption, carbon-emissions, environmental-impact, energy-transparency, climate-change

Nvidia’s AI empire: A look at its top startup investments
Nvidia has dramatically expanded its influence in the AI sector by significantly increasing its investments in AI startups since the rise of ChatGPT and other generative AI services. The company’s revenue, profitability, and stock price have surged, enabling it to participate in 49 AI funding rounds in 2024 alone—up from 34 in 2023 and 38 combined over the previous four years. This surge includes investments made both directly and through its corporate venture capital arm, NVentures, which also ramped up activity from 2 deals in 2022 to 24 in 2024. Nvidia’s stated goal is to grow the AI ecosystem by backing startups it views as “game changers and market makers.” Among Nvidia’s most notable investments are several high-profile AI startups raising rounds exceeding $100 million. These include OpenAI, where Nvidia participated in a massive $6.6 billion round valuing the company at $157 billion, and Elon Musk’s xAI, which raised $6…
Tags: robot, AI-startups, autonomous-driving, Nvidia-investments, high-performance-GPUs, artificial-intelligence, self-learning-systems

All3 launches AI and robotics to tackle housing construction - The Robot Report
All3, a London-based company, has emerged from stealth mode to introduce an AI- and robotics-driven building system aimed at addressing the growing housing shortage in Europe and North America amid a severe skilled labor deficit. The company’s vertically integrated approach combines AI-powered custom building design, automated manufacturing, and robotic assembly, primarily using structural timber composites. This system streamlines construction processes from initial design to final build, enabling faster development, significant cost reductions, and improved sustainability and affordability. All3’s technology is particularly suited for complex urban brownfield sites, where irregular shapes and limited access pose challenges to traditional construction methods. The construction industry has historically underinvested in innovation, spending less than 1% of revenues on R&D compared to 4.5% in sectors like automotive, resulting in reliance on outdated, labor-intensive processes. Europe alone faces a shortage of 4.2 million construction workers, a gap expected to widen as many skilled workers retire. All3’s CEO, Rodion Shish…
Tags: robotics, artificial-intelligence, construction-technology, automation, building-materials, sustainable-housing, AI-in-construction

100-lane expressway for light: China's optical chip hits record speeds
Chinese researchers at the Shanghai Institute of Optics and Fine Mechanics (SIOM) have developed an ultra-high-parallel optical computing chip capable of a theoretical 2,560 tera-operations per second (TOPS) at a 50 GHz optical clock rate. Unlike conventional optical processors that use a single wavelength of light, this chip employs a 100-wavelength architecture, effectively creating a "100-lane expressway" for data transmission. This is achieved through soliton microcomb sources that split a continuous laser into over a hundred distinct spectral channels, allowing massive parallelism without increasing clock speed or chip size. The chip offers low insertion loss, wide optical bandwidth, and fully reconfigurable routing, making it suitable for applications such as image recognition, real-time signal processing, and artificial intelligence (AI). The design's high parallelism and energy efficiency position it as a promising alternative to traditional GPUs, particularly for AI workloads that require numerous identical operations. Its low latency and power efficiency also make it attractive…
Tags: energy, optical-chip, high-speed-computing, artificial-intelligence, photonic-technology, low-latency-processing, edge-devices

PrismaX launches with $11M to scale virtual datasets for robotics foundation models - The Robot Report
PrismaX, a San Francisco-based startup founded in 2024 by Bayley Wang and Chyna Qu, has launched with $11 million in funding to address key challenges in the physical AI and robotics industry related to data quality, model development, and scalability. The company is developing a robotics teleoperations platform aimed at creating a decentralized ecosystem that incentivizes the collection and use of high-quality visual datasets. PrismaX’s approach focuses on establishing fair use standards where revenue generated from data powering AI models is shared with the communities that produce it, thereby tackling issues of data scarcity, bias, and affordability that have hindered robotics advancements. The platform is built around three foundational pillars: data, teleoperation, and models. PrismaX plans to validate and incentivize visual data to scale robotics datasets comparable to text data, define uniform teleoperation standards to streamline operator access and payments, and collaborate with AI teams to develop foundational models that enable more autonomous robots. This integrated approach aims to create a “data flywheel”…
Tags: robotics, artificial-intelligence, teleoperation, data-scalability, autonomous-robots, robotics-foundation-models, decentralized-technology

Week in Review: WWDC 2025 recap
The Week in Review covers major developments from WWDC 2025 and other tech news. At Apple’s Worldwide Developers Conference, the company showcased updates across its product lineup amid pressure to advance its AI capabilities and address ongoing legal challenges related to its App Store. Meanwhile, United Natural Foods (UNFI) suffered a cyberattack that disrupted its external systems, impacting Whole Foods’ ability to manage deliveries and product availability. In financial news, Chime successfully went public, raising $864 million in its IPO. Other highlights include Google enhancing Pixel phones with new features like group chat for RCS and AI-powered photo editing, and Elon Musk announcing the imminent launch of driverless Teslas in Austin, Texas. The Browser Company is pivoting from its Arc browser to develop an AI-first browser using a reasoning model designed for improved problem-solving in complex domains. OpenAI announced a partnership with Mattel, granting Mattel employees access to ChatGPT Enterprise to boost product development and creativity. However, concerns about privacy surfaced with…
Tags: robot, AI, autonomous-vehicles, driverless-cars, machine-learning, artificial-intelligence, automation

Hyundai Motor Group & Incheon International Airport to Deliver Next-Level Convenience with AI-Powered EV Charging Robots - CleanTechnica
Hyundai Motor Group and Incheon International Airport Corporation (IIAC) have entered a strategic partnership to deploy AI-powered electric vehicle (EV) automatic charging robots (ACRs) at Incheon International Airport. This collaboration, formalized through a Memorandum of Understanding, aims to enhance convenience, safety, and operational efficiency by integrating Hyundai’s advanced robotics and AI technologies with the airport’s infrastructure. The airport will serve as a demonstration site to verify usability and gather user feedback, supporting the airport’s transformation into an “Aviation AI Innovation Hub” amid its ‘Incheon Airport 4.0 Era’ expansion. The ACR technology has received safety certifications from Korea (KC) and the European Union (CE), underscoring its reliability and quality. Hyundai Motor Group plans to leverage its Robotics LAB experience, including prior demonstration projects like the ‘robot-friendly building’ initiative in Seoul, to expand ACR services beyond airports to other transportation hubs such as seaports and railways. The partnership also includes…
Tags: robotics, artificial-intelligence, electric-vehicles, EV-charging, smart-airport, mobility-solutions, Hyundai-Motor-Group

Meta’s new AI helps robots learn real-world logic from raw video
Meta has introduced V-JEPA 2, an advanced AI model trained solely on raw video data to help robots and AI agents better understand and predict physical interactions in the real world. Unlike traditional AI systems that rely on large labeled datasets, V-JEPA 2 operates in a simplified latent space, enabling faster and more adaptable simulations of physical reality. The model learns cause-and-effect relationships such as gravity, motion, and object permanence by analyzing how people and objects interact in videos, allowing it to generalize across diverse contexts without extensive annotations. Meta views this development as a significant step toward artificial general intelligence (AGI), aiming to create AI systems capable of thinking before acting. In practical applications, Meta has tested V-JEPA 2 on lab-based robots, which successfully performed tasks like picking up unfamiliar objects and navigating new environments, demonstrating improved adaptability in unpredictable real-world settings. The company envisions broad use cases for autonomous machines—including delivery robots and self-driving cars—that require quick interpretation of physical surroundings and real…
Tags: robotics, artificial-intelligence, machine-learning, autonomous-robots, video-based-learning, physical-world-simulation, AI-models

Meta’s V-JEPA 2 model teaches AI to understand its surroundings
Meta has introduced V-JEPA 2, a new AI "world model" designed to help artificial intelligence agents better understand and predict their surroundings. This model enables AI to make common-sense inferences about physical interactions in the environment, similar to how young children or animals learn through experience. For example, V-JEPA 2 can anticipate the next likely action in a scenario where a robot holding a plate and spatula approaches a stove with cooked eggs, predicting the robot will use the spatula to move the eggs onto the plate. Meta claims that V-JEPA 2 operates 30 times faster than comparable models like Nvidia’s, marking a significant advancement in AI efficiency. The company envisions that such world models will revolutionize robotics by enabling AI agents to assist with real-world physical tasks and chores without requiring massive amounts of robotic training data. This development points toward a future where AI can interact more intuitively and effectively with the physical world, enhancing automation and robotics capabilities.
Tags: robot, artificial-intelligence, AI-model, robotics, machine-learning, automation, AI-agents

US unleashes smart rifle scopes that shoot enemy drones on their own
The US Army has begun deploying the SMASH 2000L, an AI-enabled smart rifle scope developed by Israeli defense firm Smart Shooter, designed to counter small unmanned aerial systems (sUAS). This advanced fire control system integrates electro-optical sensors, computer vision, and proprietary target acquisition software to detect, lock on, and track small aerial targets such as quadcopters or fixed-wing drones. The system only permits the rifle to fire when a guaranteed hit is calculated, effectively eliminating human error in timing and enabling soldiers to engage drones with high precision. The SMASH 2000L was recently demonstrated during Project Flytrap, a multinational live-fire exercise in Germany, where US soldiers successfully used it mounted on M4A1 carbines. The SMASH 2000L is a lighter, more compact evolution of earlier SMASH variants already in use by NATO partners and combat forces, weighing about 2.5 pounds and fitting standard Picatinny rails. It offers real-time image processing…
Tags: robot, artificial-intelligence, smart-rifle-scopes, drone-defense, military-technology, computer-vision, autonomous-targeting

NVIDIA Isaac, Omniverse, and Halos to aid European robotics developers - The Robot Report
At the GPU Technology Conference (GTC) in Paris, NVIDIA announced new AI-driven tools and platforms aimed at advancing robotics development, particularly for European manufacturers facing labor shortages and sustainability demands. Central to this initiative is NVIDIA Isaac GR00T N1.5, an open foundation model designed to enhance humanoid robot reasoning and skills, now available on Hugging Face. Alongside this, the company released Isaac Sim 5.0 and Isaac Lab 2.2, open-source robotics simulation frameworks optimized for NVIDIA RTX PRO 6000 systems, enabling developers to better train, simulate, and deploy robots across various applications. NVIDIA’s approach for the European robotics ecosystem revolves around a “three-computer” strategy: DGX systems and GPUs for AI model training, Omniverse and Cosmos platforms on OVX systems for simulation and synthetic data generation, and the DRIVE AGX in-vehicle computer for real-time autonomous driving processing. This scalable architecture supports diverse robotic forms, from industrial robots to humanoids. Several European robotics companies are actively integrating NVIDIA’s stack—Agile Robots uses Isaac Lab to train dual-arm manipulators, idealworks extends Omniverse Blueprints for humanoid fleet simulation, Neura Robotics collaborates with SAP to refine robot behavior in complex scenarios, Vorwerk enhances home robotics models with synthetic data pipelines, and Humanoid leverages the full NVIDIA stack to significantly reduce prototyping time and improve robot cognition. Overall, NVIDIA’s new tools and collaborative ecosystem aim to accelerate the development and deployment of smarter, safer robots in Europe, addressing critical challenges such as labor gaps and the need for sustainable manufacturing and automation solutions.
Tags: robotics, artificial-intelligence, NVIDIA-Isaac, robot-simulation, autonomous-robots, industrial-robots, AI-driven-manufacturing

Sam Altman thinks AI will have ‘novel insights’ next year
In a recent essay, OpenAI CEO Sam Altman outlined his vision for AI’s transformative impact over the next 15 years, emphasizing the company’s proximity to achieving artificial general intelligence (AGI) while tempering expectations about its imminent arrival. A key highlight from Altman’s essay is his prediction that by 2026, AI systems will likely begin generating “novel insights,” marking a shift toward AI models capable of producing new and interesting ideas about the world. This aligns with OpenAI’s recent focus on developing AI that can assist scientific discovery, a goal shared by competitors like Google, Anthropic, and startups such as FutureHouse, all aiming to automate hypothesis generation and accelerate breakthroughs in fields like drug discovery and material science. Despite this optimism, the scientific community remains cautious about AI’s ability to create genuinely original insights, a challenge that involves instilling AI with creativity and a sense of what is scientifically interesting. Experts like Hugging Face’s Thomas Wolf and former OpenAI researcher Kenneth Stanley highlight the difficulty of this task, noting that current AI models struggle to generate novel hypotheses. Stanley’s new startup, Lila Sciences, is dedicated to overcoming this hurdle by building AI-powered laboratories focused on hypothesis generation. While it remains uncertain whether OpenAI will succeed in this endeavor, Altman’s essay offers a glimpse into the company’s strategic direction, signaling a potential next phase in AI development centered on creativity and scientific innovation.
Tags: AI, artificial-intelligence, scientific-discovery, material-science, energy-innovation, AI-agents, novel-insights

Artificial Intelligence Models Improve Efficiency of Battery Diagnostics - CleanTechnica
The National Renewable Energy Laboratory (NREL) has developed an innovative physics-informed neural network (PINN) model that significantly enhances the efficiency and accuracy of diagnosing lithium-ion battery health. Traditional battery diagnostic models, such as the Single-Particle Model (SPM) and the Pseudo-2D Model (P2D), provide detailed insights into battery degradation mechanisms but are computationally intensive and slow, limiting their practical use for real-time diagnostics. NREL’s PINN surrogate model integrates artificial intelligence with physics-based modeling to analyze complex battery data, enabling battery health predictions nearly 1,000 times faster than conventional methods. This breakthrough allows researchers and manufacturers to non-destructively monitor internal battery states, such as electrode and lithium-ion inventory changes, under various operating conditions. By training the PINN surrogate on data generated from established physics models, NREL has created a scalable tool that can quickly estimate battery aging and lifetime performance across different scenarios. This advancement promises to improve battery management, optimize design, and extend the operational lifespan of energy storage systems, which are critical for resilient and sustainable energy infrastructures.
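The core idea behind a physics-informed surrogate can be shown with a toy sketch. This is only an illustration of the concept, not NREL’s PINN: their model is a neural network trained on data from the SPM and P2D physics models, whereas this fits a simple linear-in-parameters ansatz so that an ODE residual is small at collocation points.

```python
import numpy as np

# Toy "physics-informed" fit: learn u(t) ~ sum_i c_i * t**i such that the
# physics residual du/dt + k*u = 0 holds at collocation points, with the
# initial condition u(0) = 1 enforced as a heavily weighted extra row.
k, deg = 2.0, 8
t = np.linspace(0.0, 1.0, 50)                    # collocation points

P = np.vander(t, deg + 1, increasing=True)       # P[j, i] = t_j**i
D = np.zeros_like(P)                             # D[j, i] = d/dt of t_j**i
for i in range(1, deg + 1):
    D[:, i] = i * t ** (i - 1)

A = np.vstack([D + k * P, 100.0 * P[:1]])        # physics rows + IC row
b = np.concatenate([np.zeros(len(t)), [100.0]])  # residual -> 0, u(0) -> 1
c, *_ = np.linalg.lstsq(A, b, rcond=None)

# Compare the learned surrogate against the exact solution exp(-k*t).
err = float(np.max(np.abs(P @ c - np.exp(-k * t))))
print(err)  # small maximum error: the surrogate tracks the true solution
```

Penalizing the governing equations alongside any measured data is the same pattern that lets a trained surrogate answer in milliseconds what a full physics solver computes slowly.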
Tags: energy, battery-diagnostics, artificial-intelligence, neural-networks, lithium-ion-batteries, battery-health, energy-storage

What Happens When AI, EVs, and Smart Homes All Plug In at Once? - CleanTechnica
The article from CleanTechnica discusses the growing challenges faced by the electric distribution grid as artificial intelligence (AI), electric vehicles (EVs), and smart homes increasingly demand more energy. It highlights that much of our energy consumption is invisible, powering everything from data centers and AI systems to e-mobility and smart home technologies. According to a 2025 study by the National Electrical Manufacturers Association (NEMA), US electricity demand is expected to rise by 50% by 2050, driven largely by a 300% increase in data center energy use and a staggering 9,000% rise in energy consumption for electric mobility and charging. The International Energy Agency warns that the rapid expansion of data centers could strain local power networks, risking more frequent blackouts if grid upgrades do not keep pace. The article emphasizes that the current grid infrastructure is ill-equipped to handle this surge in demand without significant investment and modernization. Utilities like CenterPoint Energy are proactively investing billions in grid improvements to meet future needs, anticipating substantial increases in peak electricity usage. Technological innovations, such as smart grid automation and advanced protection devices, offer promising solutions to enhance grid resilience and reliability. These technologies help manage energy fluctuations, improve efficiency, and reduce service interruptions, positioning the grid to better support the evolving energy landscape shaped by AI, EVs, and smart homes.
Tags: energy, electric-grid, electrification, data-centers, artificial-intelligence, energy-consumption, smart-homes

Autonomous cars that 'think' like humans cut traffic risk by 26%
Researchers at the Hong Kong University of Science and Technology (HKUST) have developed a novel cognitive encoding framework that enables autonomous vehicles (AVs) to make decisions with human-like moral reasoning and situational awareness. Unlike current AV systems that assess risks in a limited pairwise manner, this new approach evaluates multiple road users simultaneously, prioritizing vulnerable pedestrians and cyclists through a concept called “social sensitivity.” The system ranks risks based on vulnerability and ethical considerations, allowing AVs to yield or stop for pedestrians even when traffic rules permit movement, and anticipates the impact of its maneuvers on overall traffic flow. Tested in 2,000 simulated traffic scenarios, the framework demonstrated a 26.3% reduction in total traffic risk, with pedestrian and cyclist risk exposure dropping by 51.7%, and an 8.3% risk reduction for the AVs themselves. Notably, these safety improvements were achieved alongside a 13.9% increase in task completion speed. The system’s adaptability allows it to be tailored to different regional driving norms and legal frameworks, enhancing its potential for global implementation. This breakthrough addresses critical limitations in current autonomous driving technology, promising safer streets and more socially responsible AV behavior in complex, real-world environments.
Tags: robot, autonomous-vehicles, artificial-intelligence, traffic-safety, human-like-decision-making, social-sensitivity, risk-assessment

1X's NEO humanoid gains autonomy with new Redwood AI model
1X Technologies has unveiled Redwood, a new AI model designed to enhance the autonomy of its NEO humanoid robot for home environments. Redwood enables NEO to perform tasks such as laundry, answering doors, and navigating familiar spaces by leveraging real-world training data collected from 1X’s EVE and NEO robots. Key capabilities include generalization to handle task variations and unfamiliar objects, learned behaviors like hand selection and retrying failed grasps, and advanced whole-body, multi-contact manipulation that allows coordinated locomotion and manipulation, including bracing and leaning during tasks. Redwood supports mobile bi-manual manipulation, enabling NEO to move and manipulate objects simultaneously, and operates efficiently on NEO’s onboard embedded GPU. The system also integrates with an off-board language model for real-time voice control, interpreting user intent from speech and conversational context. At the 2025 NVIDIA GTC event, 1X showcased NEO in a nearly continuous teleoperated demo, highlighting Redwood’s potential as one of the first end-to-end mobile manipulation AI systems specifically designed for biped humanoid robots. Eric Jang, VP of AI at 1X, emphasized the model’s role in scaling robotic assistance for household chores. Additionally, CEO Bernt Børnich discussed the broader mission of addressing labor shortages with robotics, the challenges of designing safe and compliant home robots, regulatory hurdles, and societal perceptions of humanoid robots.
Tags: robot, humanoid-robot, artificial-intelligence, mobile-manipulation, robotics-AI, home-automation, embedded-GPU

How Do Robots See?
The article "How Do Robots See?" explores the mechanisms behind robotic vision beyond the simple use of cameras as eyes. It delves into how robots process visual information to understand their environment, including determining the size of objects and recognizing different items. This involves advanced technologies and algorithms that enable robots to interpret visual data in a meaningful way. Boston Dynamics is highlighted as an example, demonstrating how their robots utilize these vision systems to navigate and interact with the world. The article emphasizes that robotic vision is not just about capturing images but involves complex processing to enable perception and decision-making. However, the content provided is incomplete and lacks detailed explanations of the specific technologies or methods used.
Tags: robotics, computer-vision, Boston-Dynamics, robot-sensing, machine-perception, artificial-intelligence, robotics-technology

MIT teaches drones to survive nature’s worst, from wind to rain
MIT researchers have developed a novel machine-learning-based adaptive control algorithm to improve the resilience of autonomous drones against unpredictable weather conditions such as sudden wind gusts. Unlike traditional aircraft, drones are more vulnerable to being pushed off course due to their smaller size, which poses challenges for critical applications like emergency response and deliveries. The new algorithm uses meta-learning to quickly adapt to varying weather by automatically selecting the most suitable optimization method based on real-time environmental disturbances. This approach enables the drone to achieve up to 50% less trajectory tracking error compared to baseline methods, even under wind conditions not encountered during training. The control system leverages a family of optimization algorithms known as mirror descent, automating the choice of the best algorithm for the current problem, which enhances the drone’s ability to adjust thrust dynamically to counteract wind effects. The researchers demonstrated the effectiveness of their method through simulations and real-world tests, showing significant improvements in flight stability. Ongoing work aims to extend the system’s capabilities to handle multiple disturbance sources, such as shifting payloads, and to incorporate continual learning so the drone can adapt to new challenges without needing retraining. This advancement promises to enhance the efficiency and reliability of autonomous drones in complex, real-world environments.
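Mirror descent is a family of update rules distinguished by the choice of "mirror map"; with the negative-entropy map it becomes multiplicative (exponentiated-gradient) updates on a probability simplex. A hedged sketch of that one family member, applied to a made-up model-selection toy (the function name, losses, and step size are illustrative, not MIT's code):

```python
import numpy as np

def exponentiated_gradient(losses, eta=0.5, steps=200):
    """Mirror descent with the negative-entropy mirror map:
    minimize w . losses over the probability simplex."""
    w = np.full(len(losses), 1.0 / len(losses))  # start from uniform weights
    for _ in range(steps):
        g = losses                    # gradient of the linear loss w . losses
        w = w * np.exp(-eta * g)      # multiplicative (mirror) step
        w /= w.sum()                  # renormalize back onto the simplex
    return w

# Made-up per-option losses, e.g. tracking error of candidate controllers.
losses = np.array([0.9, 0.2, 0.7])
w = exponentiated_gradient(losses)
print(int(np.argmax(w)))  # -> 1: weight concentrates on the lowest-loss option
```

The meta-learning element described in the article goes a step further: rather than fixing one such update rule, the controller learns which member of the family suits the current disturbance.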
Tags: drones, autonomous-systems, machine-learning, adaptive-control, robotics, artificial-intelligence, meta-learning

Tiny quantum processor outshines classical AI in accuracy, energy use
Researchers led by the University of Vienna have demonstrated that a small-scale photonic quantum processor can outperform classical AI algorithms in machine learning classification tasks, marking a rare real-world example of quantum advantage with current hardware. Using a quantum photonic circuit developed at Italy’s Politecnico di Milano and a machine learning algorithm from UK-based Quantinuum, the team showed that the quantum system made fewer errors than classical counterparts. This experiment is one of the first to demonstrate practical quantum enhancement beyond simulations, highlighting specific scenarios where quantum computing provides tangible benefits. In addition to improved accuracy, the photonic quantum processor exhibited significantly lower energy consumption compared to traditional hardware, leveraging light-based information processing. This energy efficiency is particularly important as AI’s growing computational demands raise sustainability concerns. The findings suggest that even today’s limited quantum devices can enhance machine learning performance and energy efficiency, potentially guiding a future where quantum and classical AI technologies coexist symbiotically to push technological boundaries and promote greener, faster, and smarter AI solutions.
Tags: quantum-computing, photonic-quantum-processor, artificial-intelligence, energy-efficiency, machine-learning, quantum-machine-learning, supercomputing

Beewise brings in $50M to expand access to its robotic BeeHome - The Robot Report
Beewise Inc., a climate technology company specializing in AI-powered robotic beekeeping, has closed a $50 million Series D funding round, bringing its total capital raised to nearly $170 million. The company developed the BeeHome system, which uses artificial intelligence, precision robotics, and solar power to provide autonomous, real-time care to bee hives. This innovation addresses the critical decline in bee populations—over 62% of U.S. colonies died last year—threatening global food security due to bees’ essential role in pollinating about three-quarters of flowering plants and one-third of food crops. BeeHome enables continuous hive health monitoring and remote intervention by beekeepers, resulting in healthier colonies, improved crop yields, and enhanced biodiversity. Since its 2022 Series C financing, Beewise has become a leading global provider of pollination services, deploying thousands of AI-driven robotic hives that pollinate over 300,000 acres annually for major growers. The company has advanced its AI capabilities using recurrent neural networks and reinforcement learning to mitigate climate risks in agriculture. The latest BeeHome 4 model features Beewise Heat Chamber Technology, which eliminates 99% of lethal Varroa mites without harmful chemicals. The new funding round, supported by investors including Fortissimo Capital and Insight Partners, will accelerate Beewise’s technological innovation, market expansion, and research efforts to further its mission of saving bees and securing the global food supply.
Tags: robotics, artificial-intelligence, autonomous-systems, energy, agriculture-technology, machine-learning, climate-technology

Oxipital AI and Schmalz extend partnership for automated picking - The Robot Report
Oxipital AI and J. Schmalz GmbH have extended their partnership to integrate Oxipital AI’s advanced machine vision technology with Schmalz’s mGrip robotic fingers and vacuum end-of-arm tooling (EOAT). This collaboration aims to deliver next-generation robotic grasping solutions that improve operational efficiency, reduce labor dependence, and ensure consistent, safe, and profitable production, particularly in the food and beverage industry. Oxipital AI, originally founded as Soft Robotics, has shifted its focus from soft robotic grippers to AI-enabled machine vision systems, exemplified by its recent release of the VX2 Vision System designed for food-grade inspection and picking. Schmalz, a global leader in vacuum industrial automation and ergonomic material handling since 1910, benefits from this partnership by expanding the applicability of its tooling solutions to more complex manufacturing processes. The integration of Oxipital AI’s vision technology enhances Schmalz’s robotic grasping capabilities, enabling more capable and higher-performing picking solutions. Both companies emphasize their shared focus on robotic automation and digitalization, with Schmalz leveraging acquisitions and new technologies to strengthen its offerings in packaging, food, and pharmaceutical industries. The partnership was highlighted at the recent Automate event, signaling ongoing collaboration and innovation in automated picking systems.
Tags: robotics, artificial-intelligence, machine-vision, robotic-picking, automation, end-of-arm-tooling, industrial-robotics

China's AI lab unveils RoboBrain 2.0 model for next-gen humanoid robots
China’s Beijing Academy of Artificial Intelligence (BAAI) has unveiled RoboBrain 2.0, a new open-source AI model designed to serve as the “brain” for next-generation humanoid robots. This model introduces significant advancements in spatial intelligence and task planning, enabling robots to perceive distances more accurately and break down complex tasks into simpler steps. Compared to its predecessor released just three months earlier, RoboBrain 2.0 delivers a 17% increase in processing speed and a 74% improvement in accuracy. The model is part of BAAI’s broader Wujie series, which also includes RoboOS 2.0, a cloud platform for deploying robotics AI, and Emu3, a multimodal system for interpreting and generating text, images, and video. BAAI’s initiative is a key component of China’s ambition to become a global leader in robotics AI. The institute collaborates with over 20 leading companies and seeks to expand partnerships to accelerate innovation in embodied intelligence. Alongside BAAI, other Chinese institutions like the Beijing Humanoid Robot Innovation Centre are advancing the field, exemplified by their development of the Tien Kung humanoid robot and the Hui Si Kai Wu AI platform, which aspires to become the “Android of humanoid robots.” The recent BAAI Conference attracted over 100 international AI researchers and 200 industry experts, highlighting strong engagement from major Chinese tech firms such as Baidu, Huawei, and Tencent. Additionally, BAAI announced a strategic partnership with the Hong Kong Investment Corporation to foster talent development, technological progress, and investment in China’s AI ecosystem.
Tags: robotics, humanoid-robots, artificial-intelligence, RoboBrain-2.0, spatial-intelligence, task-planning, robotics-AI-models
Superpowers, sea drones, strategy: How the Indo-Pacific is re-arming
The article discusses escalating military tensions and strategic realignments in the Indo-Pacific region amid China's growing assertiveness, particularly around Taiwan. The United States, Japan, Australia, and the Philippines are deepening their military cooperation through a quadrilateral security group dubbed the "Squad," which functions as a Pacific counterpart to NATO. This bloc aims to enhance deterrence and maintain regional stability by synchronizing defense investments, expanding joint maritime patrols—especially within the Philippines’ exclusive economic zone—and condemning China’s coercive actions in the East and South China Seas. The Squad’s efforts underscore a collective response to China’s increasing military buildup and aggressive maneuvers. Taiwan is also advancing its asymmetric defense capabilities by developing home-made kamikaze sea drones to counter potential Chinese aggression. U.S. Indo-Pacific Command chief Admiral Samuel Paparo highlighted that China’s recent military exercises near Taiwan are more than routine drills, describing them as rehearsals for possible conflict. He emphasized the urgency of accelerating technological and operational advancements, including artificial intelligence and hypersonic weapons, to meet modern threats swiftly. Paparo’s warnings reflect broader U.S. concerns about a potential Chinese attempt to seize Taiwan, possibly by 2027, and the need for rapid, innovative defense responses to maintain regional security.
Tags: robot, military-drones, defense-technology, Indo-Pacific-security, autonomous-sea-drones, artificial-intelligence, hypersonic-weapons
Trump signs orders to encourage flying cars, counter drone threats
President Donald Trump signed a series of executive orders aimed at accelerating the development and deployment of advanced aviation technologies, including drones, flying taxis (electric vertical takeoff and landing vehicles or eVTOLs), and supersonic commercial jets. The orders direct the Federal Aviation Administration (FAA) to enable routine beyond-visual-line-of-sight drone operations, deploy AI tools to expedite waiver reviews, and update integration roadmaps for drones in national airspace. Additionally, the FAA is tasked with lifting the longstanding ban on supersonic flights over U.S. land, citing advancements in noise reduction and aerospace engineering that make such travel safe and commercially viable. Trump also initiated a pilot program for eVTOL projects focusing on medical response, cargo transport, and urban air mobility. To address national security concerns, the administration established a federal task force to monitor drone activity near sensitive locations like airports and large public events, aiming to enforce laws against misuse and mitigate risks posed by disruptive drone technology. The orders emphasize reducing reliance on foreign-made drones, particularly from China, by prioritizing U.S.-manufactured drones and promoting exports to allied countries. These initiatives build on prior efforts to integrate commercial drones and unmanned aircraft systems (UAS) into various sectors, with the broader goal of fostering high-skilled job growth, enhancing emergency response capabilities, and maintaining American leadership in global aviation.
Tags: drones, flying-cars, eVTOL, supersonic-jets, aerospace-engineering, artificial-intelligence, urban-air-mobility
Robot Talk Episode 124 – Robots in the performing arts, with Amy LaViers - Robohub
Tags: robot, robotics, performing-arts, artificial-intelligence, automation, machine-design, dance
Cybernetix Ventures raising $100M fund for robotics and physical AI - The Robot Report
Tags: robotics, investment, automation, artificial-intelligence, startups, technology, venture-capital
Congressional Robotics Caucus relaunches to help U.S. industry - The Robot Report
Tags: robotics, Congressional-Robotics-Caucus, U.S.-industry, automation, manufacturing, artificial-intelligence, economic-competitiveness
Top 10 robotics developments of May 2025 - The Robot Report
Tags: robot, robotics, automation, humanoid-robots, mobile-robots, artificial-intelligence, manufacturing
Robot Talk Episode 123 – Standardising robot programming, with Nick Thompson - Robohub
Tags: robot, programming, robotics, artificial-intelligence, autonomous-machines, software-development, podcast
Recapping Robotics Summit & Expo 2025
Tags: robotics, automation, humanoid-robots, robotics-innovation, robotic-systems, artificial-intelligence, ROS
Robot Navigates With The 5 Senses
Tags: robot, navigation, sensory-system, robotics, technology, artificial-intelligence
Smart facade moves like living organism to cool buildings in Germany
Tags: smart-facade, energy-efficiency, adaptive-technology, artificial-intelligence, photovoltaic-modules, building-technology, fiber-reinforced-materials
China’s marathon-winning humanoid moves from track to factory floor
Tags: robot, humanoid, automation, productivity, logistics, artificial-intelligence, electric-robot
NVIDIA accepts Ekso Bionics into its Connect program - The Robot Report
Tags: robot, exoskeleton, mobility, artificial-intelligence, rehabilitation, human-enhancement, medical-technology
The world's largest autonomous mining fleet
Tags: robot, IoT, energy, automation, electric-vehicles, mining-technology, artificial-intelligence
Robot Talk Episode 121 – Adaptable robots for the home, with Lerrel Pinto
Tags: robot, machine-learning, adaptable-robots, robotics, artificial-intelligence, autonomous-machines, reinforcement-learning
Amsterdam Begins Deftpower Smart Charging Trial
Tags: smart-charging, electric-vehicles, energy-management, IoT, artificial-intelligence, demand-response, Amsterdam
Robot see, robot do: System learns after watching how-tos
Tags: robot, artificial-intelligence, machine-learning, imitation-learning, robotics, task-automation, video-training
Robot Talk Episode 120 – Evolving robots to explore other planets, with Emma Hart
Tags: robot, robotics, artificial-intelligence, evolutionary-computation, autonomous-machines, robot-design, control-systems
Robot Talk Episode 119 – Robotics for small manufacturers, with Will Kinghorn
Tags: robot, automation, manufacturing, robotics, artificial-intelligence, technology-adoption, digital-transformation
Your guide to Day 2 of the 2025 Robotics Summit & Expo
Tags: robot, robotics, robotaxi, artificial-intelligence, automation, technology, expo
DeepSeek upgrades its AI model for math problem solving
Tags: AI, math-problem-solving, DeepSeek, technology-upgrades, machine-learning, artificial-intelligence, education-technology
OpenAI explains why ChatGPT became too sycophantic
Tags: OpenAI, ChatGPT, AI-behavior, sycophancy, artificial-intelligence, technology-ethics, user-experience
Meta says its Llama AI models have been downloaded 1.2B times
Tags: Meta, Llama-AI, artificial-intelligence, downloads, technology-news, machine-learning, AI-models
Meta previews an API for its Llama AI models
Tags: Meta, Llama-AI, API, artificial-intelligence, technology, machine-learning, software-development
Meta launches a standalone AI app to compete with ChatGPT
Tags: Meta, AI-app, ChatGPT, artificial-intelligence, LlamaCon, Meta-AI, social-media
Meta needs to win over AI developers at its first LlamaCon
Tags: Meta, LlamaCon, AI-developers, generative-AI, open-models, technology-conference, artificial-intelligence
Anthropic co-founder Jared Kaplan is coming to TechCrunch Sessions: AI
Tags: Anthropic, Jared-Kaplan, TechCrunch-Sessions, AI, technology-conference, artificial-intelligence, UC-Berkeley
OpenAI is fixing a ‘bug’ that allowed minors to generate erotic conversations
Tags: OpenAI, ChatGPT, minors, content-moderation, user-safety, artificial-intelligence, erotic-content