This research project was partially funded by ERC Starting Grant EXPLORERS 240007.