Articulated body pose estimation

Articulated body pose estimation in computer vision is the study of algorithms and systems that recover the pose of an articulated body, which consists of joints and rigid parts, using image-based observations. It is one of the longest-standing problems in computer vision because of the complexity of the models that relate observations to pose, and because of the variety of situations in which it would be useful.[1][2]

Description

Perceiving human beings in their surrounding environment is an important capability that robots must possess. If a person uses gestures to point to a particular object, the interacting machine should be able to understand the situation in a real-world context. Pose estimation is therefore an important and challenging problem in computer vision, and many algorithms have been proposed to solve it over the last two decades. Many solutions involve training complex models on large data sets.

Pose estimation is a difficult problem and an active subject of research because the human body has 244 degrees of freedom with 230 joints. Although not all movements between joints are evident, the human body can be treated as composed of 10 large parts with 20 degrees of freedom. Algorithms must account for the large variability introduced by differences in appearance due to clothing, body shape, size, and hairstyle. The results may also be ambiguous due to partial occlusions from self-articulation, such as a person's hand covering their face, or occlusions from external objects. Finally, most algorithms estimate pose from monocular (two-dimensional) images taken with an ordinary camera; these images lack the three-dimensional information of the actual body pose, leading to further ambiguities. Other issues include varying lighting and camera configurations, and the difficulties are compounded if there are additional performance requirements. There is recent work in this area in which images from RGB-D cameras provide information about both color and depth.[3]

There is a need to develop accurate, tether-less, vision-based articulated body pose estimation systems to recover the pose of bodies such as the human body, a hand, or non-human creatures. Such a system has several foreseeable applications, which are discussed in the Applications section below.

The typical articulated body pose estimation system involves a model-based approach, in which the pose estimation is achieved by maximizing/minimizing a similarity/dissimilarity between an observation (input) and a template model. Different kinds of sensors have been explored for use in making the observation, including visible-wavelength imagery, long-wave thermal infrared imagery,[4] time-of-flight imagery, and laser range scanner imagery. A minimal sketch of this matching loop is given after the list of representations below.

These sensors produce intermediate representations that are directly used by the model. The representations include the following:

  • Image appearance,
  • Voxel (volume element) reconstruction,
  • 3D point clouds and sums of Gaussian kernels,[5] and
  • 3D surface meshes.
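
To make the model-based formulation above concrete, the following sketch (in Python with NumPy; the two-angle "limb" template, the renderer, and all helper names are invented for illustration) shows the generic fit-by-minimizing-dissimilarity loop: render the template at a candidate pose, compare it with the observed representation, and keep the pose with the lowest dissimilarity. Real systems use far richer models, representations, and optimizers, but the structure of the loop is the same.

    import numpy as np

    def render_silhouette(pose, shape=(64, 64)):
        # Hypothetical template renderer: draws a two-segment "limb" as a
        # binary mask, given pose = (shoulder_angle, elbow_angle) in radians.
        img = np.zeros(shape, dtype=float)
        origin, length = np.array([32.0, 10.0]), 18.0
        a1, a2 = pose
        joint = origin + length * np.array([np.sin(a1), np.cos(a1)])
        tip = joint + length * np.array([np.sin(a1 + a2), np.cos(a1 + a2)])
        for p, q in [(origin, joint), (joint, tip)]:
            for t in np.linspace(0.0, 1.0, 50):
                r, c = (1 - t) * p + t * q
                if 0 <= int(r) < shape[0] and 0 <= int(c) < shape[1]:
                    img[int(r), int(c)] = 1.0
        return img

    def dissimilarity(observed, template):
        # Toy dissimilarity: fraction of pixels on which the two masks disagree.
        return np.mean(np.abs(observed - template))

    def fit_pose(observed, n_steps=25):
        # Brute-force search over a coarse grid of the two pose parameters.
        angles = np.linspace(-np.pi / 2, np.pi / 2, n_steps)
        best_pose, best_cost = None, np.inf
        for a1 in angles:
            for a2 in angles:
                cost = dissimilarity(observed, render_silhouette((a1, a2)))
                if cost < best_cost:
                    best_pose, best_cost = (a1, a2), cost
        return best_pose, best_cost

    # Example: recover a known pose from its own rendering.
    observation = render_silhouette((0.4, -0.7))
    print(fit_pose(observation))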

Part models

The basic idea of the part-based model can be attributed to the human skeleton. Any object having the property of articulation can be broken down into smaller parts, each of which can take a different orientation, resulting in different articulations of the same object. Different scales and orientations of the main object correspond to scales and orientations of its parts. To formulate the model in mathematical terms, the parts are connected to each other by springs; the model is therefore also known as a spring model. The degree of closeness between parts is accounted for by the compression and expansion of the springs. There are also geometric constraints on the orientation of the springs: for example, the limbs of the legs cannot rotate a full 360 degrees, so parts cannot take such extreme orientations. This reduces the number of possible configurations.[6]

The spring model forms a graph G(V, E), where the nodes V correspond to the parts and the edges E represent the springs connecting neighboring parts. Each location in the image can be identified by the x and y coordinates of a pixel. Let p_i(x, y) be the point at the i-th location. Then the cost associated with the spring joining the i-th and the j-th points can be written s_{ij} = s(p_i, p_j). Hence the total cost associated with placing l components at locations P_l is given by

    S(P_l) = \sum_{i=1}^{l} \sum_{j=1}^{l} s_{ij}.

The above equation simply represents the spring model used to describe body pose. To estimate pose from images, a cost or energy function must be minimized. The energy function consists of two terms: the first measures how well each component matches the image data, and the second measures how well the oriented (deformed) parts agree with one another, thus accounting for articulation as well as object detection.[7]
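
As a sketch of how such a two-term energy can be minimized over a tree of parts, the following Python code (a toy, hand-built example; the parts, candidate locations, costs, and function names are invented) combines an appearance cost per part with a quadratic spring cost per edge and minimizes the sum exactly by min-sum dynamic programming from the leaves to the root, in the spirit of pictorial structures.[7] It illustrates the idea only and is not an implementation of that paper's algorithm.

    import numpy as np

    def minimize_tree_energy(appearance, edges, locations, spring):
        # appearance: dict part -> (K,) array of appearance costs m_i.
        # edges: list of (parent, child) pairs forming a tree rooted at part 0.
        # locations: dict part -> (K, 2) array of candidate (x, y) positions.
        # spring: weight of the quadratic deformation (spring) cost.
        children = {p: [] for p in appearance}
        for parent, child in edges:
            children[parent].append(child)

        best_cost = {}       # part -> (K,) best cost of the subtree rooted there
        best_child_loc = {}  # (parent, child) -> (K,) argmin child location

        def upward(part):
            cost = appearance[part].astype(float)
            for child in children[part]:
                upward(child)
                # Pairwise spring cost between every parent/child candidate pair.
                diff = locations[part][:, None, :] - locations[child][None, :, :]
                pair = spring * np.sum(diff ** 2, axis=2) + best_cost[child][None, :]
                best_child_loc[(part, child)] = np.argmin(pair, axis=1)
                cost = cost + np.min(pair, axis=1)
            best_cost[part] = cost

        def downward(part, loc, assignment):
            assignment[part] = loc
            for child in children[part]:
                downward(child, int(best_child_loc[(part, child)][loc]), assignment)

        root = 0
        upward(root)
        assignment = {}
        downward(root, int(np.argmin(best_cost[root])), assignment)
        return assignment, float(np.min(best_cost[root]))

    # Example: three parts (0 = torso, 1 = head, 2 = arm), 4 candidates each.
    rng = np.random.default_rng(0)
    appearance = {i: rng.random(4) for i in range(3)}
    locations = {i: rng.random((4, 2)) * 10 for i in range(3)}
    edges = [(0, 1), (0, 2)]  # head and arm hang off the torso
    print(minimize_tree_energy(appearance, edges, locations, spring=0.1))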

Part models, also known as pictorial structures, are one of the basic models on which other, more efficient models are built by slight modification. One such example is the flexible mixture model, which reduces the database of hundreds or thousands of deformed parts by exploiting the notion of local rigidity.[8]

Articulated model with quaternion

The kinematic skeleton is constructed as a tree-structured chain.[9] Each rigid body segment has its own local coordinate system, which can be transformed to the world coordinate system via a 4×4 transformation matrix T_l,

    T_l = T_{\mathrm{par}(l)} R_l,

where R_l denotes the local transformation from body segment S_l to its parent \mathrm{par}(S_l). Each joint in the body has 3 degrees of freedom (DoF) of rotation. Given a transformation matrix T_l, the joint position at the T-pose can be transferred to its corresponding position in the world coordinates. In many works, the 3D joint rotation is expressed as a normalized quaternion due to its continuity, which facilitates gradient-based optimization in the parameter estimation.
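
As an illustration of this kinematic chain, the sketch below (Python with NumPy; the three-joint skeleton, bone offsets, and quaternion values are invented for the example) converts each joint's normalized quaternion into a rotation matrix, builds the segment's 4×4 local transform, and composes it with the parent's transform to obtain world-space transforms and joint positions, following the tree-structured composition described above.

    import numpy as np

    def quat_to_matrix(q):
        # Convert a normalized quaternion (w, x, y, z) to a 3x3 rotation matrix.
        w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

    def local_transform(quat, offset):
        # 4x4 transform of a segment relative to its parent: rotate by the joint
        # quaternion and translate by the bone offset (given in the parent frame).
        T = np.eye(4)
        T[:3, :3] = quat_to_matrix(quat)
        T[:3, 3] = offset
        return T

    def forward_kinematics(skeleton, quats, offsets):
        # skeleton: dict joint -> parent joint (the root maps to None).
        # Returns dict joint -> 4x4 world transform, composed as parent @ local.
        world = {}
        def solve(joint):
            if joint not in world:
                local = local_transform(quats[joint], offsets[joint])
                parent = skeleton[joint]
                world[joint] = local if parent is None else solve(parent) @ local
            return world[joint]
        for joint in skeleton:
            solve(joint)
        return world

    # Example: a three-joint arm (all values are illustrative only).
    skeleton = {"shoulder": None, "elbow": "shoulder", "wrist": "elbow"}
    offsets = {"shoulder": [0, 0, 0], "elbow": [0.3, 0, 0], "wrist": [0.25, 0, 0]}
    quats = {"shoulder": (0.924, 0.0, 0.0, 0.383),   # about 45 deg about z
             "elbow":    (0.924, 0.0, 0.0, -0.383),  # about -45 deg about z
             "wrist":    (1.0, 0.0, 0.0, 0.0)}       # identity rotation
    for joint, T in forward_kinematics(skeleton, quats, offsets).items():
        print(joint, np.round(T[:3, 3], 3))  # world-space joint positions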

Applications

Assisted living

Personal care robots may be deployed in future assisted-living homes. For these robots, high-accuracy human detection and pose estimation are necessary to perform a variety of tasks, such as fall detection. Additionally, this application imposes a number of performance constraints.

Character animation

Traditionally, character animation has been a manual process. However, character poses can now be driven directly by a real-life actor through specialized pose estimation systems. Older systems relied on markers or specialized suits; recent advances in pose estimation and motion capture have enabled markerless applications, sometimes in real time.[10]

Intelligent driver assisting system

Car accidents account for about two percent of deaths globally each year, so an intelligent system that tracks driver pose may be useful for emergency alerts. Along the same lines, pedestrian detection algorithms have been used successfully in autonomous cars, enabling the car to make smarter decisions.

Video games

Commercially, pose estimation has been used in the context of video games, popularized with the Microsoft Kinect sensor (a depth camera). These systems track the user to render their avatar in-game, in addition to performing tasks like gesture recognition to enable the user to interact with the game. As such, this application has a strict real-time requirement.[11]

Medical applications

Pose estimation has been used to detect postural issues such as scoliosis by analyzing abnormalities in a patient's posture,[12] in physical therapy, and in the study of the cognitive brain development of young children by monitoring motor functionality.[13]

Other applications

Other applications include video surveillance, animal tracking and behavior understanding, sign language detection, advanced human–computer interaction, and markerless motion capture.

A commercially successful but specialized computer-vision-based articulated body pose estimation technique is optical motion capture. This approach involves placing markers on the individual at strategic locations to capture the six degrees of freedom of each body part.

Research groups

A number of groups and companies are researching pose estimation, including groups at Brown University, Carnegie Mellon University, MPI Saarbruecken, Stanford University, the University of California, San Diego, the University of Toronto, the École Centrale Paris, ETH Zurich, National University of Sciences and Technology (NUST),[14] and the University of California, Irvine.

Companies

At present, several companies are working on articulated body pose estimation.

  • Bodylabs: a Manhattan-based software provider of human-aware artificial intelligence.

References

  1. Survey of Computer Vision-Based Human Motion Capture (2001)
  2. "Survey of Advances in Computer Vision-based Human Motion Capture (2006)". Archived from the original on 2008-03-02. Retrieved 2007-09-15.
  3. Droeschel, David, and Sven Behnke. "3D body pose estimation using an adaptive person model for articulated ICP." Intelligent Robotics and Applications. Springer Berlin Heidelberg, 2011. 157–167.
  4. Han, J.; Gaszczak, A.; Maciol, R.; Barnes, S.E.; Breckon, T.P. (September 2013). "Human Pose Classification within the Context of Near-IR Imagery Tracking" (PDF). In Zamboni, Roberto; Kajzar, Francois; Szep, Attila A; Burgess, Douglas; Owen, Gari (eds.). Proc. SPIE Optics and Photonics for Counterterrorism, Crime Fighting and Defence. Optics and Photonics for Counterterrorism, Crime Fighting and Defence IX; and Optical Materials and Biomaterials in Security and Defence Systems Technology X. 8901. SPIE. pp. 89010E. CiteSeerX 10.1.1.391.380. doi:10.1117/12.2028375. Retrieved 5 November 2013.
  5. M. Ding and G. Fan, "Generalized Sum of Gaussians for Real-Time Human Pose Tracking from a Single Depth Sensor" 2015 IEEE Winter Conference on Applications of Computer Vision (WACV), Jan 2015
  6. Fischler, Martin A., and Robert A. Elschlager. "The representation and matching of pictorial structures." IEEE Transactions on Computers C-22.1 (1973): 67–92.
  7. Felzenszwalb, Pedro F., and Daniel P. Huttenlocher. "Pictorial structures for object recognition." International Journal of Computer Vision 61.1 (2005): 55–79.
  8. Yang, Yi, and Deva Ramanan. "Articulated pose estimation with flexible mixtures-of-parts." Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE, 2011.
  9. M. Ding and G. Fan, "Articulated and Generalized Gaussian Kernel Correlation for Human Pose Estimation" IEEE Transactions on Image Processing, Vol. 25, No. 2, Feb 2016
  10. Dent, Steven. "What you need to know about 3D motion capture". Engadget. AOL Inc. Retrieved 31 May 2017.
  11. Kohli, Pushmeet; Shotton, Jamie. "Key Developments in Human Pose Estimation for Kinect" (PDF). Microsoft. Retrieved 31 May 2017.
  12. Aroeira, Rozilene Maria C., Estevam B. de Las Casas, Antônio Eustáquio M. Pertence, Marcelo Greco, and João Manuel R.S. Tavares. “Non-Invasive Methods of Computer Vision in the Posture Evaluation of Adolescent Idiopathic Scoliosis.” Journal of Bodywork and Movement Therapies 20, no. 4 (October 2016): 832–43. https://doi.org/10.1016/j.jbmt.2016.02.004.
  13. Khan, Muhammad Hassan, Julien Helsper, Muhammad Shahid Farid, and Marcin Grzegorzek. “A Computer Vision-Based System for Monitoring Vojta Therapy.” International Journal of Medical Informatics 113 (May 2018): 85–95. https://doi.org/10.1016/j.ijmedinf.2018.02.010.
  14. "NUST-SMME RISE Research Center".