Modeling the development of infant imitation using inverse reinforcement learning

Title Modeling the development of infant imitation using inverse reinforcement learning
Author Tekden, A. E., Ugur, E., Nagai, Y., Öztop, Erhan
Publication Date 2018-09
Place of Publication IEEE
Subject Observers, Task analysis, Reinforcement learning, Trajectory, Robot sensing systems, Entropy
Type Document
Language English
Digital Yes
Manuscript No
Library Özyeğin Üniversitesi
Accession Number 978-1-5386-6110-9
Record Number 58ce4895-8832-4696-a6a2-f0b50713761d
Location Computer Science
Date 2018-09
Notes Bogazici Research Fund (BAP) Startup project ; Slovenia/ARRS - Turkey/TUBITAK bilateral collaboration grant (ARRS Project) ; TÜBİTAK ; JST CREST Cognitive Mirroring, Japan
Sample Text Little is known about the computational mechanisms by which imitation skills develop alongside infant sensorimotor learning. In robotics, there are several well-developed frameworks for imitation learning, also called learning from demonstration. Two paradigms dominate: Direct Learning (DL) and Inverse Reinforcement Learning (IRL). The former is a simple mechanism in which observed state-action pairs are associated to construct a copy of the demonstrator's action policy. In the latter, a reward structure or optimality principle is sought under which the observed behavior is the optimal solution. In this study, we explore whether some form of IRL mechanism in infants could plausibly facilitate imitation learning and the understanding of others' behaviours. We propose that infants project the events taking place in the environment into their internal representations through a set of features that evolve during development. We implement this idea in a grid-world environment, which can be considered a simple model of reaching with obstacle avoidance. The observing infant has to imitate the demonstrator's reaching behavior through IRL, using various sets of features that correspond to different stages of development. Our simulation results indicate that the U-shaped performance change observed during imitation development in infants can be reproduced with the proposed model.
DOI 10.1109/DEVLRN.2018.8761045
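For orientation, the mechanism the abstract describes (inferring a linear, feature-based reward that explains an observed grid-world reaching trajectory) can be sketched in a few lines. The sketch below is a hypothetical illustration only: the grid layout, the two features, and the feature-matching weight update are simplifying assumptions for this note, not the paper's actual model or code.

```python
# Hypothetical sketch of feature-based IRL on a grid world, in the
# spirit of the abstract; not the paper's implementation.

N = 5                       # 5x5 grid
GOAL, OBST = (4, 4), (2, 2)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def phi(s):
    # Two crude features standing in for a developing feature set:
    # "at the goal" and "at the obstacle".
    return [1.0 if s == GOAL else 0.0, 1.0 if s == OBST else 0.0]

def step(s, a):
    # Deterministic move, clamped to the grid.
    x, y = s[0] + a[0], s[1] + a[1]
    return (min(max(x, 0), N - 1), min(max(y, 0), N - 1))

def value_iteration(w, gamma=0.9, iters=80):
    # State values under the linear reward r(s) = w . phi(s).
    V = {(x, y): 0.0 for x in range(N) for y in range(N)}
    for _ in range(iters):
        for s in V:
            r = sum(wi * fi for wi, fi in zip(w, phi(s)))
            V[s] = r + gamma * max(V[step(s, a)] for a in ACTIONS)
    return V

def greedy_rollout(V, s=(0, 0), horizon=20):
    # The learner's policy: greedily climb the value function.
    traj = [s]
    for _ in range(horizon):
        s = max((step(s, a) for a in ACTIONS), key=lambda t: V[t])
        traj.append(s)
        if s == GOAL:
            break
    return traj

def feature_counts(traj):
    # Accumulated feature visits along a trajectory.
    f = [0.0, 0.0]
    for s in traj:
        for i, v in enumerate(phi(s)):
            f[i] += v
    return f

# "Demonstration": a reach to the goal that avoids the obstacle.
demo = [(0, 0), (0, 1), (0, 2), (1, 2), (1, 3), (2, 3),
        (3, 3), (3, 4), (4, 4)]

w = [0.0, 0.0]              # reward weights to be inferred
for _ in range(30):         # feature-matching update on the weights
    mu_demo = feature_counts(demo)
    mu_pi = feature_counts(greedy_rollout(value_iteration(w)))
    w = [wi + 0.1 * (d - p) for wi, d, p in zip(w, mu_demo, mu_pi)]

print(w)   # goal weight ends up positive; obstacle weight never rises
```

The update nudges the reward weights until the learner's greedy policy matches the demonstration's feature counts; this is a crude stand-in for principled IRL formulations such as maximum-entropy IRL, and the choice of features plays the role of the developmental stages discussed in the abstract.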