Interview with Kate Candon: Leveraging explicit and implicit feedback in human-robot interactions - Robohub

Source: robohub
Published: 7/25/2025

To read the full content, please visit the original article.

In this interview, Kate Candon, a PhD student at Yale University, discusses her research on improving human-robot interaction by leveraging both explicit and implicit feedback. Traditional robot learning often relies on explicit feedback, such as simple "good job" or "bad job" signals from a human teacher who is not actively engaged in the task. Candon points out, however, that humans naturally provide a range of implicit cues, such as facial expressions, gestures, or subtle actions like moving an object away, that convey valuable information without requiring any extra effort from the person.

Her current research aims to develop a framework that combines these implicit signals with explicit feedback so that robots can learn more effectively from humans in natural, interactive settings. Interpreting implicit feedback is challenging, Candon explains, because the cues vary across individuals and cultures. Her initial approach therefore focuses on analyzing the human's actions within a shared task to infer appropriate robot responses, with plans to incorporate visual cues such as facial expressions and gestures in future work. The research is tested in a pizza-making scenario, chosen for…
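As a rough illustration of the kind of signal fusion such a framework might perform, the sketch below combines an explicit rating with an implicit cue inferred from the person's actions into a single scalar reward for an interactive learner. The signal names, weights, and fusion rule here are assumptions made for the example, not details of Candon's actual framework.

from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackEvent:
    """One piece of human feedback observed during the shared task."""
    explicit: Optional[float] = None   # e.g. +1.0 for "good job", -1.0 for "bad job"
    implicit: Optional[float] = None   # e.g. -0.5 when the human undoes the robot's action

def fuse_feedback(event: FeedbackEvent,
                  explicit_weight: float = 1.0,
                  implicit_weight: float = 0.4) -> float:
    """Combine explicit and implicit signals into one scalar reward.

    Implicit cues are down-weighted here because they are noisier and
    vary across people; explicit statements are treated as more reliable.
    """
    reward = 0.0
    if event.explicit is not None:
        reward += explicit_weight * event.explicit
    if event.implicit is not None:
        reward += implicit_weight * event.implicit
    return reward

# Example: the human says nothing but moves a robot-placed topping off the
# pizza, which a perception layer has mapped to a mildly negative implicit cue.
corrective_move = FeedbackEvent(implicit=-0.5)
praise = FeedbackEvent(explicit=+1.0)

print(fuse_feedback(corrective_move))  # -0.2
print(fuse_feedback(praise))           #  1.0

A learner could feed this fused reward into whatever update rule it already uses for explicit feedback alone, which is one simple way to make "free" implicit cues count without demanding more effort from the human.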

Tags

robot, human-robot-interaction, implicit-feedback, explicit-feedback, interactive-agents, robot-learning, AI