Modeling robot trust based on emergent emotion in an interactive task

Name Modeling robot trust based on emergent emotion in an interactive task
Author Kırtay, M., Öztop, Erhan, Asada, M., Hafner, V. V.
Publication Date 2021
Publisher IEEE
Subjects Emotions, HRI, Internal reward, Trust, Visual recalling
Type Document
Language English
Digital Yes
Manuscript No
Library Özyeğin Üniversitesi
Accession Number 978-172816242-3
Record Number fffe9f6a-e105-40f0-b2a7-51df24c62fca
Location Computer Science
Date 2021
Notes Deutsche Forschungsgemeinschaft
Abstract Trust is an essential component of human-human and human-robot interactions. The factors that play potent roles in these interactions have been an attractive topic in robotics. However, studies that aim at developing a computational model of robot trust in interaction partners remain relatively limited. In this study, we extend our emergent emotion model to propose that the robot's trust in an interaction partner (i.e., the trustee) can be established by the effect of the interactions on the computational energy budget of the robot (i.e., the trustor). Concretely, we show how high-level emotions (e.g., wellbeing) of an agent can be modeled by the computational cost of perceptual processing (e.g., visual stimulus processing for visual recall) in a decision-making framework. To realize this approach, we endow the Pepper humanoid robot with two modules: an auto-associative memory that extracts the computational energy required to perform a visual recall, and an internal reward mechanism that guides model-free reinforcement learning to yield computational-energy-cost-aware behaviors. With this setup, the robot interacts with online instructors that follow different guiding strategies, namely reliable, less reliable, and random. Through interaction with the instructors, the robot accumulates reward values based on the cost of perceptual processing to evaluate the instructors and determine which one should be trusted. Overall, the results indicate that the robot can differentiate the guiding strategies of the instructors. Additionally, when given a free choice, the robot trusts the reliable instructor, who increases the total reward, and therefore reduces the computational energy (cognitive load) required to perform the next task.
DOI 10.1109/ICDL49984.2021.9515645
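The learning scheme described in the abstract can be illustrated with a minimal sketch, assuming a simplified setup: the instructor cost ranges, hyperparameters, and epsilon-greedy choice rule below are illustrative assumptions, not the authors' implementation. Each instructor is treated as a choice whose internal reward is the negative perceptual-processing cost of the recall it induces, so a model-free value update naturally favors the reliable instructor.

```python
import random

# Illustrative sketch only: instructor cost distributions and learning
# parameters are assumptions, not taken from the paper.
random.seed(0)

# Simulated perceptual-processing cost of a visual recall after following
# each instructor's guidance (lower cost -> higher internal reward).
INSTRUCTORS = {
    "reliable":      lambda: random.uniform(0.1, 0.3),
    "less_reliable": lambda: random.uniform(0.3, 0.7),
    "random":        lambda: random.uniform(0.1, 0.9),
}

def learn_trust(episodes=2000, alpha=0.1, epsilon=0.1):
    """Model-free value learning over instructors; reward = -recall cost."""
    q = {name: 0.0 for name in INSTRUCTORS}
    for _ in range(episodes):
        # Epsilon-greedy "free choice" among instructors.
        if random.random() < epsilon:
            choice = random.choice(list(INSTRUCTORS))
        else:
            choice = max(q, key=q.get)
        reward = -INSTRUCTORS[choice]()            # internal reward from energy cost
        q[choice] += alpha * (reward - q[choice])  # incremental value update
    return q

q = learn_trust()
print(q, "-> trusted:", max(q, key=q.get))
```

Under these assumptions, the learned values converge toward the negative mean recall cost of each instructor, so the reliable instructor (lowest expected cost) ends up with the highest value and is selected on free-choice trials.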
