
Trustworthiness assessment in multimodal human-robot interaction based on cognitive load

Title: Trustworthiness assessment in multimodal human-robot interaction based on cognitive load
Authors: Kırtay, M., Öztop, Erhan, Kuhlen, A. K., Asada, M., Hafner, V. V.
Publication Date: 2022
Publisher: IEEE
Type: Document
Language: English
Digital: Yes
Manuscript: No
Library: Özyeğin Üniversitesi
Inventory Number: 978-172818859-1
Record Number: 536447f1-1f34-4ea2-9377-3053430db36d
Location: Computer Science
Date: 2022
Notes: Deutsche Forschungsgemeinschaft
Abstract: In this study, we extend our robot trust model into a multimodal setting in which the Nao robot leverages audio-visual data to perform a sequential multimodal pattern recalling task while interacting with a human partner who follows one of three guiding strategies: reliable, unreliable, and random. Here, the humanoid robot is equipped with a multimodal auto-associative memory module that processes audio-visual patterns to extract cognitive load (i.e., computational cost) and an internal reward module that performs cost-guided reinforcement learning. Through the interactive experiments, the robot comes to associate the low cognitive load (i.e., high cumulative reward) yielded during an interaction with high trustworthiness of the partner's guiding strategy. At the end of the experiment, the robot is given a free choice to select a trustworthy instructor. We show that the robot forms trust in the reliable partner. In a second setting of the same experiment, we endow the robot with an additional simple theory-of-mind module to assess the efficacy of the instructor in helping the robot perform the task. Our results show that the robot's performance improves when it factors the instructor assessment into its action decisions.
DOI 10.1109/RO-MAN53752.2022.9900730
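The trust-formation mechanism the abstract describes, where each interaction's cognitive load is converted into an internal reward whose accumulation becomes a per-strategy trust value, can be sketched as follows. This is a minimal illustrative simulation, not the paper's implementation: the strategy names match the abstract, but the load ranges, function names, and random sampling are assumptions standing in for the recall cost of the actual auto-associative memory module.

```python
import random

def cognitive_load(strategy, rng):
    """Simulated recall cost per interaction for each guiding strategy.

    The ranges are illustrative assumptions: helpful guidance is taken to
    yield a low recall cost, misleading guidance a high one.
    """
    ranges = {
        "reliable": (0.1, 0.3),    # consistent guidance -> low cost
        "unreliable": (0.7, 0.9),  # misleading guidance -> high cost
        "random": (0.1, 0.9),      # mixed guidance -> variable cost
    }
    lo, hi = ranges[strategy]
    return rng.uniform(lo, hi)

def form_trust(strategies, n_interactions=100, seed=0):
    """Accumulate internal reward (negated cognitive load) per strategy."""
    rng = random.Random(seed)
    trust = {s: 0.0 for s in strategies}
    for _ in range(n_interactions):
        for s in strategies:
            trust[s] += -cognitive_load(s, rng)  # reward = -cost
    return trust

trust = form_trust(["reliable", "unreliable", "random"])

# Free-choice phase: the robot selects the partner whose strategy earned
# the highest cumulative reward, i.e. the lowest accrued cognitive load.
chosen = max(trust, key=trust.get)
print(chosen)
```

Under these assumed cost ranges, the reliable strategy accrues the least cognitive load over repeated interactions, so the free-choice step picks the reliable partner, mirroring the result reported in the abstract.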
