Nadine Social Robot

Year of creation: 2013

Nadine is a female humanoid social robot modelled on Professor Nadia Magnenat Thalmann. The robot is strongly human-like in appearance, with natural-looking skin and hair[1][2][3] and realistic hands.[4][5][6][7][8][9][10][11] Nadine is a socially intelligent robot that returns greetings, makes eye contact, and remembers the conversations it has had. It can answer questions autonomously in several languages and simulate emotions both through gestures and facial expressions, depending on the content of its interaction with the user.[12][13][14] Nadine can recognise people it has previously seen and engage in flowing conversation.[15][16][17][18] It has been programmed with a "personality", in that its demeanour can change according to what is said to it.[19] Nadine has a total of 27 degrees of freedom for facial expressions and upper-body movements, and for each person it has previously encountered it remembers related facts and events.[20]

Nadine can assist people with special needs by reading stories, showing images, setting up Skype sessions, sending emails, and communicating with other members of the family.[21][22][23][24] It can play the role of a receptionist in an office or serve as a personal coach.[25][26] Nadine interacted with more than 100,000 visitors at the ArtScience Museum in Singapore during the exhibition "HUMAN+: The Future of our Species", held from May to October 2017.[27][28][29]
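The per-person memory described above (facts and events tied to a recognized individual) can be pictured as an episodic store keyed by identity. The following is a minimal illustrative sketch in Python; the class and method names are hypothetical and do not come from Nadine's actual software:

    # Hypothetical sketch of per-person episodic memory;
    # not Nadine's actual implementation.
    from collections import defaultdict
    from datetime import datetime

    class PersonMemory:
        def __init__(self):
            # Maps a recognized identity to a time-stamped list of episodes.
            self._episodes = defaultdict(list)

        def remember(self, person_id: str, fact: str) -> None:
            self._episodes[person_id].append((datetime.now(), fact))

        def recall(self, person_id: str) -> list:
            # Return everything remembered about a previously seen person.
            return [fact for _, fact in self._episodes[person_id]]

    memory = PersonMemory()
    memory.remember("visitor_42", "prefers to speak French")
    print(memory.recall("visitor_42"))  # ['prefers to speak French']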

Platform

Nadine’s platform is implemented as a classic perception-decision-action architecture. The perception layer is built around a Microsoft Kinect V2 and a microphone, and covers face recognition, gesture recognition[30] and some understanding of social situations. The decision layer includes emotion and memory models as well as social attention. Finally, the action layer consists of a dedicated robot controller that handles emotional expressions, lip synchronization and online gaze generation. A minimal sketch of how such a pipeline can be organized is shown below.
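The following Python sketch illustrates a generic perception-decision-action loop of this kind. It is a toy under stated assumptions, not Nadine's actual controller: every class, function and label name here is invented for illustration, and the real perception layer fuses streaming Kinect V2 and microphone data rather than returning a fixed observation:

    # Toy perception-decision-action loop; all names are hypothetical
    # and do not come from Nadine's actual software.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Percept:
        face_id: Optional[str]      # identity from face recognition, if any
        gesture: Optional[str]      # recognized gesture label, if any
        utterance: Optional[str]    # transcribed speech from the microphone

    @dataclass
    class Action:
        speech: str                 # text to speak, with lip synchronization
        expression: str             # facial expression from the emotion model
        gaze_target: Optional[str]  # where the gaze generator should look

    def perceive() -> Percept:
        # Stand-in for the perception layer (Kinect V2 + microphone).
        return Percept(face_id="alice", gesture="wave", utterance="hello")

    def decide(p: Percept) -> Action:
        # Stand-in for the decision layer (emotion, memory, social
        # attention): a toy rule that mirrors a greeting back to the user.
        if p.gesture == "wave" or p.utterance == "hello":
            return Action("Hello! Nice to meet you.", "happy", p.face_id)
        return Action("", "neutral", p.face_id)

    def act(a: Action) -> None:
        # Stand-in for the robot controller: printing replaces actuation.
        print(f"[{a.expression}] gaze at {a.gaze_target}: {a.speech}")

    act(decide(perceive()))

In the real system each stage runs continuously on streaming sensor data rather than once per call.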

Specifications

Weight: 35 kg
Sitting height: 131.5 cm
Degrees of freedom: 27
Rated input voltage/frequency: 100–240 V AC
Power consumption: approx. 500 W

References

  1. S. Guo, H. Xu, N. Magnenat Thalmann and J. Yao, Customization and Fabrication of the Appearance for Humanoid Robot, The Visual Computer, Springer, Vol. 33, No. 1, pp. 63–74, 2017
  2. Y. Xiao et al., Body Movement Analysis and Recognition, Context Aware Human-Robot and Human-Agent Interaction, Springer International Publishing, pp. 31–53, 2015
  3. Z. Zhang, A. Beck and N. Magnenat Thalmann, Human-Like Behavior Generation Based on Head-Arms Model for Robot Tracking External Targets and Body Parts, IEEE Transactions on Cybernetics, Vol. 45, No. 8, pp. 1390–1400, 2015
  4. L. Tian, N. Magnenat Thalmann, D. Thalmann and J. Zheng, The Making of a 3D-Printed, Cable-Driven, Single-Model, Lightweight Humanoid Robotic Hand, Frontiers in Robotics and AI, Article 65, DOI: 10.3389/frobt.2017.00065, December 2017
  5. N. Magnenat Thalmann, L. Tian and F. Yao, Nadine: A Social Robot that Can Localize Objects and Grasp Them in a Human Way, Frontiers in Electronic Technologies, Springer, pp. 1–23, 2017
  6. H. Liang, J. Yuan, D. Thalmann and N. Magnenat Thalmann, AR in Hand: Egocentric Palm Pose Tracking and Gesture Recognition for Augmented Reality Applications, ACM Multimedia Conference 2015 (ACMMM 2015), Brisbane, Australia, 2015
  7. H. Liang, J. Yuan and D. Thalmann, Egocentric Hand Pose Estimation and Distance Recovery in a Single RGB Image, IEEE International Conference on Multimedia and Expo (ICME 2015), Italy, 2015
  8. H. Liang, J. Yuan and D. Thalmann, Resolving Ambiguous Hand Pose Predictions by Exploiting Part Correlations, IEEE Transactions on Circuits and Systems for Video Technology, Vol. PP, Issue 99, 2014
  9. H. Liang and J. Yuan, Hand Parsing and Gesture Recognition with a Commodity Depth Camera, Computer Vision and Machine Learning with RGB-D Sensors, Springer, pp. 239–265, 2014
  10. H. Liang, J. Yuan and D. Thalmann, Model-based Hand Pose Estimation via Spatial-temporal Hand Parsing and 3D Fingertip Localization, The Visual Computer: International Journal of Computer Graphics, Vol. 29, Issue 6–8, pp. 837–848, 2013
  11. H. Liang, J. Yuan and D. Thalmann, Hand Pose Estimation by Combining Fingertip Tracking and Articulated ICP, 11th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI 2012), Singapore, 2012
  12. Media coverage on Nadine exhibition
  13. BBC News coverage on Nadine
  14. Reuters News Media coverage on Nadine
  15. J. Ren, X. Jiang and J. Yuan, Quantized Fuzzy LBP for Face Recognition, 40th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2015), Brisbane, Australia, 2015
  16. J. Ren, X. Jiang and J. Yuan, Learning LBP Structure by Maximizing the Conditional Mutual Information, Pattern Recognition, Vol. 48, Issue 10, pp. 3180–3190, 2015
  17. J. Ren, X. Jiang and J. Yuan, A Chi-Squared-Transformed Subspace of LBP Histogram for Visual Recognition, IEEE Transactions on Image Processing, Vol. 24, Issue 6, pp. 1893–1904, 2015
  18. J. Ren, X. Jiang, J. Yuan and G. Wang, Optimizing LBP Structure For Visual Recognition Using Binary Quadratic Programming, IEEE Signal Processing Letters, pp. 1346–1350, 2014
  19. J. Kochanowicz, A. H. Tan and D. Thalmann, Modeling Human-Like Non-Rationality for Social Agents, Proceedings of the ACM 29th International Conference on Computer Animation and Social Agents (CASA 2016), pp. 11–20, Geneva, Switzerland, May 23–25, 2016
  20. J. Zhang, N. Magnenat Thalmann and J. Zheng, Combining Memory and Emotion With Dialog on Social Companion: A Review, Proceedings of the ACM 29th International Conference on Computer Animation and Social Agents (CASA 2016), pp. 1–9, Geneva, Switzerland, May 23–25, 2016
  21. A. Beck, Z. Zhang and N. Magnenat Thalmann, Motion Control for Social Behaviors, Context Aware Human-Robot and Human-Agent Interaction, Springer International Publishing, pp. 237–256, 2015
  22. Z.P. Bian, J. Hou, L.P. Chau and N. Magnenat Thalmann, Fall Detection Based on Body Part Tracking Using a Depth Camera, IEEE Journal of Biomedical and Health Informatics, Vol. 19, No. 2, pp. 430–439, 2015
  23. J. Zhang, J. Zheng and N. Magnenat Thalmann, PCMD: Personality-Characterized Mood Dynamics Model Towards Personalized Virtual Characters, Computer Animation and Virtual Worlds, Vol. 26, Issue 3–4, pp. 237–245, 2015
  24. J. Zhang, J. Zheng and N. Magnenat Thalmann, Modeling Personality, Mood, and Emotions, Context Aware Human-Robot and Human-Agent Interaction, Springer International Publishing, pp. 211–236, 2015
  25. Y. Xiao, Z. Zhang, A. Beck, J. Yuan and D. Thalmann, Human-Robot Interaction by Understanding Upper Body Gestures, Presence: Teleoperators and Virtual Environments, MIT Press, Vol. 23, No. 2, pp. 133–154, 2014
  26. Z. Yumak, J. Ren, N. Magnenat Thalmann and J. Yuan, Modelling Multi-Party Interactions among Virtual Characters, Robots, and Humans, Presence: Teleoperators and Virtual Environments, MIT Press, Vol. 23, No. 2, pp. 172–190, 2014
  27. Singapore’s receptionist robot makes her public debut at ArtScience Museum’s futuristic show
  28. Conversation with a humanoid robot
  29. Chat With A Female Robot (Who's Made To Look Just Like Her Creator)
  30. L. Ge, H. Liang, J. Yuan and D. Thalmann, Robust 3D Hand Pose Estimation in Single Depth Images: from Single-View CNN to Multi-View CNNs, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, Nevada, USA, 24 June 2016