RIEM News

Articles tagged with "machine-learning"

  • US scientists use machine learning for real-time crop disease alerts

    Purdue University researchers are leveraging advanced AI and machine learning to transform agriculture and environmental management. Their innovations include real-time crop disease detection using semi-supervised models that identify rare diseases from limited labeled data (a minimal pseudo-labeling sketch appears after the tags below), enabling faster outbreak responses and reduced chemical use. These AI tools are designed to run efficiently on low-power devices such as drones and autonomous tractors, enabling on-the-ground, real-time monitoring without relying on constant connectivity. Additionally, Purdue scientists are using AI to analyze urban ecosystems through remote sensing data and LiDAR imagery, uncovering patterns invisible to the naked eye to improve urban living conditions. In agriculture, AI is also being applied to enhance crop yield predictions and climate resilience; for example, machine learning ensembles simulate rice yields under future climate scenarios with significantly improved accuracy. Tools like the “Netflix for crops” platform recommend optimal crops based on soil and water data, helping farmers and policymakers make informed, data-driven decisions. Furthermore, Purdue developed an AI-powered medical robot capable of swimming inside a cow’s stomach to…

    robot, AI, agriculture-technology, machine-learning, medical-robots, crop-disease-detection, environmental-monitoring
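    The semi-supervised idea mentioned above can be illustrated with a generic pseudo-labeling loop: train on the few labeled disease images, adopt only the model’s most confident predictions on unlabeled images as new labels, and repeat. The sketch below assumes scikit-learn-style feature vectors; the model choice, threshold, and feature pipeline are illustrative assumptions, not Purdue’s published system.

      # Generic pseudo-labeling loop for scarce disease labels (illustrative only).
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def pseudo_label(X_lab, y_lab, X_unlab, threshold=0.95, rounds=3):
          model = RandomForestClassifier(n_estimators=200, random_state=0)
          for _ in range(rounds):
              model.fit(X_lab, y_lab)
              proba = model.predict_proba(X_unlab)
              confident = proba.max(axis=1) >= threshold   # keep only confident guesses
              if not confident.any():
                  break
              new_labels = model.classes_[proba[confident].argmax(axis=1)]
              X_lab = np.vstack([X_lab, X_unlab[confident]])
              y_lab = np.concatenate([y_lab, new_labels])
              X_unlab = X_unlab[~confident]
          return model

    In practice, rare disease classes would need class-balanced thresholds, but the loop above captures the core mechanism of learning from limited labels.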
  • Watch: Figure 02 robot achieves near-human package sorting skills

    Figure AI’s humanoid robot, Figure 02, has demonstrated significant advancements in package sorting, achieving near-human speed and dexterity by processing parcels in about 4.05 seconds each with a 95% barcode scanning success rate. This marks a 20% speed improvement over earlier demonstrations despite handling more complex tasks involving a mix of rigid boxes, deformable poly bags, and flat padded envelopes. Key to this progress is the upgraded Helix visuomotor system, which benefits from a six-fold increase in training data and new modules for short-term visual memory and force feedback. These enhancements enable the robot to remember partial barcode views, adjust grips delicately, and manipulate flexible parcels by flicking or patting them for optimal scanning. The improvements highlight the potential of end-to-end learning systems in dynamic warehouse environments, where the robot can adapt its sorting strategy on the fly and even generalize its skills to new tasks, such as recognizing a human hand as a signal for handing over parcels, without additional programming.

    robotics, humanoid-robot, package-sorting, machine-learning, force-feedback, visual-memory, automation
  • Week in Review: WWDC 2025 recap

    The Week in Review covers major developments from WWDC 2025 and other tech news. At Apple’s Worldwide Developers Conference, the company showcased updates across its product lineup amid pressure to advance its AI capabilities and address ongoing legal challenges related to its App Store. Meanwhile, United Natural Foods (UNFI) suffered a cyberattack that disrupted its external systems, impacting Whole Foods’ ability to manage deliveries and product availability. In financial news, Chime went public, raising $864 million in its IPO. Other highlights include Google enhancing Pixel phones with new features such as group chat for RCS and AI-powered photo editing, and Elon Musk announcing the imminent launch of driverless Teslas in Austin, Texas. The Browser Company is pivoting from its Arc browser to develop an AI-first browser built on a reasoning model designed for improved problem-solving in complex domains. OpenAI announced a partnership with Mattel, granting Mattel employees access to ChatGPT Enterprise to boost product development and creativity. However, concerns about privacy surfaced with…

    robot, AI, autonomous-vehicles, driverless-cars, machine-learning, artificial-intelligence, automation
  • Motional names Major president, CEO of self-driving car business

    Laura Major was appointed president and CEO of Motional, a leading autonomous vehicle company, in June 2025 after serving as interim CEO since September 2024. She succeeded Karl Iagnemma, who left to lead Vecna Robotics. Major has been with Motional since its founding in 2020, initially as CTO, where she spearheaded development of the IONIQ 5 robotaxi, one of the first autonomous vehicles certified under the Federal Motor Vehicle Safety Standards, and created a machine learning-first autonomous driving software stack. Her leadership emphasizes leveraging AI breakthroughs and the partnership with Hyundai to advance safe, fully driverless transportation as a practical part of everyday life. Before Motional, Major built expertise in autonomy and AI at Draper Laboratory and Aria Insights, focusing on astronaut, national security, and drone applications. She began her career as a cognitive engineer designing decision-support systems for astronauts and soldiers and later led Draper’s Information and Cognition Division. Recognized as an emerging leader by…

    robot, autonomous-vehicles, AI, machine-learning, robotics, self-driving-cars, automation
  • Meta V-JEPA 2 world model uses raw video to train robots

    Meta has introduced V-JEPA 2, a 1.2-billion-parameter world model designed to enhance robotic understanding, prediction, and planning by training primarily on raw video data. Built on the Joint Embedding Predictive Architecture (JEPA), V-JEPA 2 undergoes a two-stage training process: first, self-supervised learning from over one million hours of video and a million images to capture physical interaction patterns; second, action-conditioned learning using about 62 hours of robot control data to incorporate agent actions for outcome prediction. This approach enables the model to support planning and closed-loop control in robots without requiring extensive domain-specific training or human annotations. In practical tests within Meta’s labs, V-JEPA 2 demonstrated strong performance on common robotic tasks such as pick-and-place, achieving success rates between 65% and 80% in previously unseen environments. The model uses vision-based goal representations, generating candidate actions for simpler tasks and employing sequences of visual subgoals for more complex tasks (a toy latent-planning sketch appears after the tags below).

    robotics, AI, world-models, machine-learning, vision-based-control, robotic-manipulation, self-supervised-learning
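    The planning behavior described above can be pictured as search in a learned latent space: encode the current frame and the goal image, roll candidate actions through an action-conditioned predictor, and execute whichever action lands closest to the goal embedding. The sketch below is a toy stand-in with random, untrained networks and a made-up 4-dimensional action space; it illustrates the shape of the control loop only, not Meta’s released code.

      # Toy action-conditioned latent planning loop (shape of the idea only).
      import numpy as np

      rng = np.random.default_rng(0)
      W_enc = rng.normal(size=(64, 3 * 32 * 32))   # stand-in "encoder" weights
      W_dyn = rng.normal(size=(64, 64 + 4))        # stand-in action-conditioned predictor

      def encode(image):                           # image -> latent embedding
          return np.tanh(W_enc @ image.ravel())

      def predict(z, action):                      # (latent, action) -> predicted next latent
          return np.tanh(W_dyn @ np.concatenate([z, action]))

      def plan_step(current_img, goal_img, n_candidates=256):
          z, z_goal = encode(current_img), encode(goal_img)
          actions = rng.uniform(-1, 1, size=(n_candidates, 4))
          scores = [np.linalg.norm(predict(z, a) - z_goal) for a in actions]
          return actions[int(np.argmin(scores))]   # action whose prediction is nearest the goal

      best_action = plan_step(rng.normal(size=(3, 32, 32)), rng.normal(size=(3, 32, 32)))

    For longer-horizon tasks the same loop would score short action sequences against a chain of visual subgoals rather than a single goal image.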
  • Meta’s new AI helps robots learn real-world logic from raw video

    Meta has introduced V-JEPA 2, an advanced AI model trained solely on raw video data to help robots and AI agents better understand and predict physical interactions in the real world. Unlike traditional AI systems that rely on large labeled datasets, V-JEPA 2 operates in a simplified latent space, enabling faster and more adaptable simulations of physical reality. The model learns cause-and-effect relationships such as gravity, motion, and object permanence by analyzing how people and objects interact in videos, allowing it to generalize across diverse contexts without extensive annotations. Meta views this development as a significant step toward artificial general intelligence (AGI), aiming to create AI systems capable of thinking before acting. In practical applications, Meta has tested V-JEPA 2 on lab-based robots, which successfully performed tasks like picking up unfamiliar objects and navigating new environments, demonstrating improved adaptability in unpredictable real-world settings. The company envisions broad use cases for autonomous machines, including delivery robots and self-driving cars, that require quick interpretation of physical surroundings and real-time…

    robotics, artificial-intelligence, machine-learning, autonomous-robots, video-based-learning, physical-world-simulation, AI-models
  • Meta’s V-JEPA 2 model teaches AI to understand its surroundings

    Meta has introduced V-JEPA 2, a new AI "world model" designed to help artificial intelligence agents better understand and predict their surroundings. This model enables AI to make common-sense inferences about physical interactions in the environment, similar to how young children or animals learn through experience. For example, V-JEPA 2 can anticipate the next likely action in a scenario where a robot holding a plate and spatula approaches a stove with cooked eggs, predicting the robot will use the spatula to move the eggs onto the plate. Meta claims that V-JEPA 2 operates 30 times faster than comparable models like Nvidia’s, marking a significant advancement in AI efficiency. The company envisions that such world models will revolutionize robotics by enabling AI agents to assist with real-world physical tasks and chores without requiring massive amounts of robotic training data. This development points toward a future where AI can interact more intuitively and effectively with the physical world, enhancing automation and robotics capabilities.

    robot, artificial-intelligence, AI-model, robotics, machine-learning, automation, AI-agents
  • MIT teaches drones to survive nature’s worst, from wind to rain

    MIT researchers have developed a machine-learning-based adaptive control algorithm to improve the resilience of autonomous drones against unpredictable weather, such as sudden wind gusts. Unlike larger traditional aircraft, drones are easily pushed off course because of their small size, which poses challenges for critical applications like emergency response and deliveries. The new algorithm uses meta-learning to adapt quickly to varying weather by automatically selecting the most suitable optimization method for the disturbances observed in real time. This approach achieves up to 50% less trajectory-tracking error than baseline methods, even under wind conditions not encountered during training. The control system draws on a family of optimization algorithms known as mirror descent and automates the choice of the best algorithm for the current problem (a toy selection sketch appears after the tags below), improving the drone’s ability to adjust thrust dynamically to counteract wind. The researchers demonstrated the method in simulations and real-world tests, showing significant improvements in flight stability. Ongoing work aims to extend the system to handle multiple disturbance sources, such as shifting payloads, and to incorporate continual learning so the drone can adapt to new challenges without retraining. This advance promises to improve the efficiency and reliability of autonomous drones in complex, real-world environments.

    drones, autonomous-systems, machine-learning, adaptive-control, robotics, artificial-intelligence, meta-learning
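    A stripped-down version of the adapt-and-select idea: run several disturbance estimators with different update rules in parallel, track each one’s recent prediction error, and let the controller use whichever is currently tracking the wind best. The sketch below uses plain gradient-style updates with different step sizes as stand-ins for the mirror-descent family, so it shows only the selection mechanism, not MIT’s actual algorithm.

      # Parallel disturbance estimators with online selection (toy example).
      import numpy as np

      rng = np.random.default_rng(1)
      step_sizes = np.array([0.02, 0.2, 0.8])      # stand-ins for different update rules
      estimates = np.zeros(3)                      # each estimator's current wind estimate
      running_err = np.zeros(3)                    # exponentially averaged squared error

      for t in range(500):
          wind = 2.0 * np.sin(0.05 * t) + rng.normal(scale=0.1)   # unknown gust (simulated)
          errors = wind - estimates
          running_err = 0.95 * running_err + 0.05 * errors**2
          estimates = estimates + step_sizes * errors             # per-estimator update
          best = int(np.argmin(running_err))                      # "meta" choice at each step
          thrust_correction = -estimates[best]                    # fed to the low-level controller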
  • Flexible soft robot arm moves with light — no wires or chips inside

    Engineers at Rice University have developed a flexible, octopus-inspired soft robotic arm that is driven entirely by light, eliminating the need for wires or internal electronics. The arm is made of a light-responsive polymer, an azobenzene liquid crystal elastomer, which contracts under blue laser light and relaxes in the dark, enabling precise bending motions. Its movement mimics natural behaviors, such as a flower stem bending toward sunlight, allowing it to perform tasks like navigating obstacles and hitting a ball accurately. The control system uses a spatial light modulator to split a laser into multiple adjustable beamlets, each targeting a different part of the arm to flex or contract as needed. Machine learning, specifically a convolutional neural network trained on light patterns and the corresponding arm movements, enables real-time, automated control of the arm’s fluid motions (a stand-in CNN sketch appears after the tags below). Although the current prototype operates in two dimensions, the researchers aim to develop three-dimensional versions with additional sensors, potentially benefiting applications ranging from implantable surgical devices to industrial robots handling soft materials. This approach promises robots with far greater flexibility and degrees of freedom than traditional rigid-jointed machines.

    soft-robotics, light-responsive-materials, azobenzene-liquid-crystal-elastomer, machine-learning, flexible-robot-arm, remote-control-robotics, bio-inspired-robotics
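    The learned controller can be thought of as a network that looks at a camera frame of the arm and outputs an intensity for each beamlet of the modulated laser. The PyTorch sketch below is a hypothetical stand-in: the 16-beamlet output size, grayscale input, and architecture are assumptions for illustration, not the Rice group’s trained model.

      # Hypothetical camera-frame -> beamlet-intensity controller (illustrative only).
      import torch
      import torch.nn as nn

      N_BEAMLETS = 16   # assumed number of controllable beamlets

      class BeamletController(nn.Module):
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(4),
              )
              self.head = nn.Sequential(
                  nn.Flatten(),
                  nn.Linear(32 * 4 * 4, N_BEAMLETS),
                  nn.Sigmoid(),                      # intensities scaled to [0, 1]
              )

          def forward(self, frame):                  # frame: (B, 1, H, W) grayscale image
              return self.head(self.features(frame))

      controller = BeamletController()
      intensities = controller(torch.randn(1, 1, 64, 64))   # one 64x64 camera frame

    Training such a network would require pairs of observed arm shapes and the light patterns that produced them, as described in the summary above.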
  • Tiny quantum processor outshines classical AI in accuracy, energy use

    Researchers led by the University of Vienna have demonstrated that a small-scale photonic quantum processor can outperform classical AI algorithms in machine learning classification tasks, marking a rare real-world example of quantum advantage with current hardware. Using a quantum photonic circuit developed at Italy’s Politecnico di Milano and a machine learning algorithm from UK-based Quantinuum, the team showed that the quantum system made fewer errors than classical counterparts. This experiment is one of the first to demonstrate practical quantum enhancement beyond simulations, highlighting specific scenarios where quantum computing provides tangible benefits. In addition to improved accuracy, the photonic quantum processor exhibited significantly lower energy consumption compared to traditional hardware, leveraging light-based information processing. This energy efficiency is particularly important as AI’s growing computational demands raise sustainability concerns. The findings suggest that even today’s limited quantum devices can enhance machine learning performance and energy efficiency, potentially guiding a future where quantum and classical AI technologies coexist symbiotically to push technological boundaries and promote greener, faster, and smarter AI solutions.

    quantum-computing, photonic-quantum-processor, artificial-intelligence, energy-efficiency, machine-learning, quantum-machine-learning, supercomputing
  • Beewise brings in $50M to expand access to its robotic BeeHome - The Robot Report

    Beewise Inc., a climate technology company specializing in AI-powered robotic beekeeping, has closed a $50 million Series D funding round, bringing its total capital raised to nearly $170 million. The company developed the BeeHome system, which uses artificial intelligence, precision robotics, and solar power to provide autonomous, real-time care to bee hives. This innovation addresses the critical decline in bee populations—over 62% of U.S. colonies died last year—threatening global food security due to bees’ essential role in pollinating about three-quarters of flowering plants and one-third of food crops. BeeHome enables continuous hive health monitoring and remote intervention by beekeepers, resulting in healthier colonies, improved crop yields, and enhanced biodiversity. Since its 2022 Series C financing, Beewise has become a leading global provider of pollination services, deploying thousands of AI-driven robotic hives that pollinate over 300,000 acres annually for major growers. The company has advanced its AI capabilities using recurrent neural networks and reinforcement learning to mitigate climate risks in agriculture. The latest BeeHome 4 model features Beewise Heat Chamber Technology, which eliminates 99% of lethal Varroa mites without harmful chemicals. The new funding round, supported by investors including Fortissimo Capital and Insight Partners, will accelerate Beewise’s technological innovation, market expansion, and research efforts to further its mission of saving bees and securing the global food supply.

    robotics, artificial-intelligence, autonomous-systems, energy, agriculture-technology, machine-learning, climate-technology
  • XRobotics’ countertop robots are cooking up 25,000 pizzas a month

    XRobotics, a San Francisco-based startup, has developed the xPizza Cube, a compact countertop robot designed to automate key pizza-making tasks such as applying sauce, cheese, and pepperoni. The machine, roughly the size of a stackable washing machine, can produce up to 100 pizzas per hour and is adaptable to various pizza styles, including Detroit and Chicago deep dish. Leasing at $1,300 per month over three years, the robot aims to save pizza makers 70-80% of the labor time involved in repetitive tasks, helping both small pizzerias and large chains improve efficiency without requiring a full overhaul of their kitchen processes. Unlike previous ventures like Zume, which attempted to fully automate pizza production and ultimately failed, XRobotics focuses on assistive technology that integrates into existing kitchens. After initial challenges with a larger, more complex robot, the company pivoted to a smaller, more affordable model launched in 2023, which has since produced 25,000 pizzas monthly. The startup recently raised $2.5 million in seed funding to scale production and expand its customer base. With plans to enter the Mexican and Canadian markets, XRobotics remains committed to the pizza industry, leveraging the large market size and the founders’ personal passion for pizza.

    robotics, automation, food-technology, machine-learning, restaurant-technology, pizza-making, kitchen-robotics
  • Solid-state battery breakthrough promises 50% more range in one charge

    Researchers from the Skolkovo Institute of Science and Technology (Skoltech) and the AIRI Institute have achieved a significant breakthrough in solid-state battery technology by using machine learning to accelerate the discovery of high-performance battery materials. Their work could enable electric vehicles (EVs) to travel up to 50% farther on a single charge while improving safety and battery lifespan. The team employed graph neural networks to rapidly identify optimal materials for solid electrolytes and protective coatings (a toy graph-scoring sketch appears after the tags below), overcoming a major hurdle in solid-state battery development. This approach is orders of magnitude faster than traditional quantum chemistry methods, enabling quicker advances in battery design. A key aspect of the research is the identification of protective coatings that shield the solid electrolyte from reactive lithium anodes and cathodes, which otherwise degrade battery performance and increase short-circuit risks. Using AI, the team discovered promising coating compounds such as Li3AlF6 and Li2ZnCl4 for the solid electrolyte Li10GeP2S12, a leading candidate material. The work not only enhances the durability and efficiency of solid-state batteries but also paves the way for safer, more durable, and higher-performing EVs and portable electronics, potentially reshaping the future of energy storage.

    energy, solid-state-battery, battery-materials, electric-vehicles, machine-learning, neural-networks, energy-storage
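    Conceptually, the screening step scores each candidate crystal represented as a graph (atoms as nodes, bonds or contacts as edges) and ranks candidates by the predicted property. The sketch below is a toy, untrained message-passing readout in NumPy with a hypothetical per-graph "stability" score; the actual Skoltech/AIRI models are trained graph neural networks with far richer inputs.

      # Toy message-passing readout used to rank candidate coatings (illustrative only).
      import numpy as np

      rng = np.random.default_rng(2)
      W1 = rng.normal(size=(8, 8))     # message weights (untrained, for shape only)
      w_out = rng.normal(size=8)       # readout weights

      def gnn_score(adjacency, node_feats, steps=2):
          h = node_feats
          for _ in range(steps):
              h = np.tanh(adjacency @ h @ W1)   # aggregate neighbor features, then transform
          return float(w_out @ h.mean(axis=0))  # graph-level "stability" score

      # Rank hypothetical candidate structures (random stand-ins for real crystals).
      candidates = [(rng.integers(0, 2, size=(6, 6)), rng.normal(size=(6, 8)))
                    for _ in range(5)]
      ranking = sorted(range(5), key=lambda i: gnn_score(*candidates[i]), reverse=True)

    The speedup over quantum chemistry comes from evaluating such a learned scorer in microseconds instead of running a full electronic-structure calculation per candidate.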
  • US scientists develop real-time defect detection for 3D metal printing

    Scientists from Argonne National Laboratory and the University of Virginia have developed a method to detect defects, specifically keyhole pores, in metal parts produced by laser powder bed fusion 3D printing. Keyhole pores are tiny internal cavities formed when excessive laser energy creates deep, narrow holes that trap gas, compromising the structural integrity and performance of critical components such as aerospace parts and medical implants. The new approach combines thermal imaging, X-ray imaging, and machine learning to predict pore formation in real time by correlating surface heat patterns with internal defects captured by powerful X-rays. The method leverages thermal cameras already installed on many 3D printers, enabling detection of internal flaws without continuous, expensive X-ray imaging. The AI model, trained on synchronized thermal and X-ray data, can identify pore formation within milliseconds, allowing immediate intervention (a minimal thermal-feature classifier sketch appears after the tags below). The researchers envision integrating the technology with automatic correction systems that adjust printing parameters or reprint layers on the fly, improving reliability, reducing waste, and enhancing safety for mission-critical metal parts. Future work aims to expand defect-detection capabilities and develop repair mechanisms during the additive manufacturing process.

    3D-printing, metal-additive-manufacturing, defect-detection, machine-learning, thermal-imaging, X-ray-imaging, materials-science
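    The real-time monitoring idea reduces to a fast classifier over features of each thermal frame, trained offline against X-ray ground truth. The sketch below uses made-up melt-pool features, random stand-in data, and scikit-learn logistic regression purely to show the shape of such a pipeline; it is not Argonne’s trained system.

      # Sketch of flagging keyhole-pore risk from thermal-camera frames (illustrative only).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def thermal_features(frame):
          # Summarize one thermal frame of the melt pool with a few hand-picked statistics.
          return np.array([frame.max(),                          # peak temperature
                           frame.mean(),                         # average temperature
                           (frame > 0.9 * frame.max()).sum()])   # size of the hot spot

      rng = np.random.default_rng(3)
      frames = rng.random((200, 32, 32))                # stand-in thermal frames
      labels = rng.integers(0, 2, 200)                  # 1 = pore seen in X-ray, 0 = none
      X = np.stack([thermal_features(f) for f in frames])

      clf = LogisticRegression().fit(X, labels)         # trained offline against X-ray ground truth
      risk = clf.predict_proba(thermal_features(frames[0]).reshape(1, -1))[0, 1]

    At print time only the thermal camera and the trained classifier are needed, which is what makes millisecond-scale flagging feasible without continuous X-ray imaging.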
  • Autonomous trucking developer Plus goes public via SPAC - The Robot Report

    Plus Automation Inc., a developer of autonomous driving software for commercial trucks, is going public through a merger with Churchill Capital Corp IX, a special purpose acquisition company (SPAC). The combined company will operate as PlusAI, with a mission to address the trucking industry’s driver shortage by delivering advanced autonomous vehicle technology. Founded in 2016 and based in Santa Clara, California, Plus has deployed its technology across the U.S., Europe, and Asia, accumulating over 5 million miles of autonomous driving. Its core product, SuperDrive, enables SAE Level 4 autonomous driving with a three-layer redundancy system designed specifically for heavy commercial trucks. Plus achieved a significant driver-out safety validation milestone in April 2025 and is conducting public road testing in Texas and Sweden, targeting a commercial launch of factory-built autonomous trucks in 2027. Plus emphasizes an OEM-led commercialization strategy, partnering with major vehicle manufacturers such as TRATON GROUP, Hyundai, and IVECO to integrate its virtual driver software directly into factory-built trucks. This approach leverages trusted manufacturing and service networks to scale deployment and provide fleet operators with a clear path to autonomy. Strategic collaborations with companies like DSV, Bosch, and NVIDIA support this effort. Notably, Plus and IVECO launched an automated trucking pilot in Germany in partnership with logistics provider DSV and retailer dm-drogerie markt, demonstrating real-world applications of their technology. The SPAC transaction values Plus at a pre-money equity valuation of $1.2 billion and is expected to raise $300 million in gross proceeds, which will fund the company through its planned commercial launch in 2027. The deal has been unanimously approved by both companies’ boards and is anticipated to close in Q4 2025, pending shareholder approval and customary closing conditions. This public listing marks a significant step for Plus as it scales its autonomous trucking technology to address industry challenges and expand globally.

    robot, autonomous-trucks, AI, machine-learning, commercial-vehicles, Level-4-autonomy, transportation-technology
  • Hugging Face says its new robotics model is so efficient it can run on a MacBook

    robotics, AI, Hugging-Face, SmolVLA, machine-learning, robotics-model, generalist-agents
  • Google places another fusion power bet on TAE Technologies

    energy, fusion-power, TAE-Technologies, machine-learning, plasma-technology, investment-in-energy, renewable-energy
  • AI sorts 1 million rock samples to find cement substitutes in waste

    materials, AI, cement-substitutes, eco-friendly-materials, concrete-sustainability, machine-learning, alternative-materials
  • Why Intempus thinks robots should have a human physiological state

    robot, robotics, AI, emotional-intelligence, human-robot-interaction, Intempus, machine-learning
  • Agibot’s humanoid readies for robot face-off with Kung Fu flair

    robot, AI, humanoid, robotics, automation, machine-learning, interaction
  • Robot Talk Episode 121 – Adaptable robots for the home, with Lerrel Pinto

    robot, machine-learning, adaptable-robots, robotics, artificial-intelligence, autonomous-machines, reinforcement-learning
  • Robot see, robot do: System learns after watching how-tos

    robot, artificial-intelligence, machine-learning, imitation-learning, robotics, task-automation, video-training
  • SS Innovations to submit SSi Mantra 3 to FDA in July

    robot, surgical-robotics, telesurgery, FDA-approval, healthcare-technology, machine-learning, modular-design
  • AI model enables controlling robots with spoken commands

    robot, AI, MotionGlot, machine-learning, robotics, human-robot-interaction, automation
  • EPS ensures repair and maintenance work at power plants in early 2025

    energy, maintenance, power-plants, reliability, remote-monitoring, operational-efficiency, machine-learning
  • Interview with Amina Mević: Machine learning applied to semiconductor manufacturing

    robot, IoT, energy, materials, machine-learning, semiconductor-manufacturing, virtual-metrology
  • DeepSeek upgrades its AI model for math problem solving

    AI, math-problem-solving, DeepSeek, technology-upgrades, machine-learning, artificial-intelligence, education-technology
  • Meta says its Llama AI models have been downloaded 1.2B times

    Meta, Llama-AI, artificial-intelligence, downloads, technology-news, machine-learning, AI-models
  • Meta previews an API for its Llama AI models

    Meta, Llama-AI, API, artificial-intelligence, technology, machine-learning, software-development
  • Alibaba unveils Qwen 3, a family of ‘hybrid’ AI reasoning models

    Alibaba, Qwen-3, AI-models, hybrid-AI, machine-learning, tech-news, open-source-AI