Inferring cost functions using reward parameter search and policy gradient reinforcement learning | Kütüphane.osmanlica.com

Inferring cost functions using reward parameter search and policy gradient reinforcement learning

Title: Inferring cost functions using reward parameter search and policy gradient reinforcement learning
Author(s): Arditi, Emir; Kunavar, T.; Ugur, E.; Babic, J.; Öztop, Erhan
Publication Date: 2021
Publisher: IEEE
Type: Document
Language: English
Digital: Yes
Manuscript: No
Library: Özyeğin Üniversitesi
Accession Number: 978-1-6654-3554-3
Record Number: 23b429e6-82c9-4cfe-be04-500036388cd4
Location: Computer Science
Date: 2021
Notes: Slovenia/ARRS - Turkey/TUBITAK bilateral collaboration; Bogazici Research Fund (BAP) IMAGINE-COG++ Project
Abstract: This study focuses on inferring cost functions from obtained movement data using reward parameter search and policy gradient based Reinforcement Learning (RL). The behavior data for this task is obtained through a series of squat-to-stand movements of human participants under dynamic perturbations. The key parameter searched in the cost function is the weight of the total torque used in performing the squat-to-stand action. An approximate model is used to learn squat-to-stand movements via a policy gradient method, namely Proximal Policy Optimization (PPO). A behavioral similarity metric based on Center of Mass (COM) is used to find the most likely weight parameter. The stochasticity in the training result of PPO is dealt with by multiple runs, and as a result, a reasonable and stable Inverse Reinforcement Learning (IRL) algorithm is obtained in terms of performance. The results indicate that for some participants, the reward function parameters of the experts were inferred successfully.
DOI 10.1109/IECON48115.2021.9589967
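The abstract describes a reward-parameter search: for each candidate torque weight, a policy is trained with PPO (multiple runs to average out training stochasticity), and the weight whose policies best match the human Center-of-Mass (COM) trajectory is taken as the inferred cost-function parameter. The sketch below illustrates only that outer search loop; `train_policy` and `com_similarity` are hypothetical stand-ins, not the authors' PPO training or similarity metric.

```python
import random

def train_policy(torque_weight, seed):
    """Stand-in for one PPO training run under a reward of the form
    task_term - torque_weight * total_torque. Returns a toy 'policy'."""
    random.seed(seed)
    return {"torque_weight": torque_weight, "noise": random.random()}

def com_similarity(policy, true_weight=0.3):
    """Stand-in COM-trajectory similarity: higher when the candidate
    weight is closer to the (unknown) weight behind the human data."""
    err = abs(policy["torque_weight"] - true_weight)
    return -err - 0.01 * policy["noise"]  # small stochastic training term

def infer_torque_weight(candidates, runs=5):
    """Grid search over candidate torque weights; each candidate's score
    is averaged over several training runs to handle PPO stochasticity."""
    scores = {}
    for w in candidates:
        scores[w] = sum(
            com_similarity(train_policy(w, seed)) for seed in range(runs)
        ) / runs
    return max(scores, key=scores.get)

best = infer_torque_weight([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
print(best)  # prints 0.3, the weight whose policies best match the data
```

In the paper's setting the inner call would launch a full PPO training of the squat-to-stand task on the approximate model, and the similarity would compare simulated and recorded COM trajectories; the surrounding search-and-average structure is the part sketched here.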