Estimating 3D human pose from single images using iterative refinement of the prior

Ben Daubney, Xianghua Xie

    Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

    Abstract

    This paper proposes a generative method to extract 3D human pose from just a single image. Unlike many existing approaches, we assume that accurate foreground/background segmentation is not possible and therefore do not use binary silhouettes. A stochastic method is used to search the pose space, and the posterior distribution is maximised using Expectation Maximisation (EM). We assume that some knowledge of the position, scale and orientation of the person is available a priori, and we develop an approach that specifically exploits this. The result is that we can learn a more constrained prior without sacrificing its generality to a specific action type. A single prior is learnt using all actions in the HumanEva dataset [9], and we provide quantitative results for images selected across all action categories and subjects, captured from differing viewpoints.
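
    The abstract outlines the core loop: candidate poses are drawn stochastically from a prior, scored against the image, and the prior is then re-estimated, with EM used to maximise the posterior. Below is a minimal illustrative sketch of such an iteratively refined Gaussian prior (Python). The pose parameterisation, the toy likelihood and the function names are assumptions made for illustration only, not the paper's actual model.

        import numpy as np

        def iterative_prior_refinement(likelihood, mu0, cov0,
                                       n_samples=500, n_iters=10, rng=None):
            """Toy EM-style refinement of a Gaussian prior over pose parameters.

            likelihood: function mapping a pose vector to a non-negative image likelihood.
            mu0, cov0:  initial prior mean and covariance (hypothetical parameterisation).
            """
            rng = np.random.default_rng() if rng is None else rng
            mu, cov = np.asarray(mu0, float), np.asarray(cov0, float)
            for _ in range(n_iters):
                # Stochastic search: draw candidate poses from the current prior.
                samples = rng.multivariate_normal(mu, cov, size=n_samples)
                # E-step: weight each candidate by its image likelihood
                # (proportional to the posterior under the current prior).
                w = np.array([likelihood(s) for s in samples])
                if w.sum() <= 0:      # no informative samples; keep the current prior
                    continue
                w /= w.sum()
                # M-step: re-estimate the prior from the weighted samples.
                mu = w @ samples
                diff = samples - mu
                cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(len(mu))
            return mu, cov

        # Toy usage: a 3-parameter "pose" whose true value is [0.5, -0.2, 1.0].
        true_pose = np.array([0.5, -0.2, 1.0])
        lik = lambda p: np.exp(-10.0 * np.sum((p - true_pose) ** 2))
        mean, _ = iterative_prior_refinement(lik, mu0=np.zeros(3), cov0=np.eye(3))
        print(mean)  # the refined prior mean moves toward the high-likelihood pose

    In this toy setting the refined prior simply concentrates around the most likely pose; in the paper, the prior is additionally constrained using the a priori known position, scale and orientation of the person.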
    Original language: English
    Title of host publication: Proceedings - International Conference on Pattern Recognition
    Pages: 3440-3443
    Number of pages: 4
    DOIs
    Publication status: Published - 2010

    Publication series

    Name: Proceedings - International Conference on Pattern Recognition

    Keywords

    • Joints
    • Three dimensional displays
    • Image edge detection
    • Wrist
    • Image color analysis
    • humans
    • Approximation methods
    • prior refinement
    • Pose estimation
    • single image
    • human eva data set
    • stochastic processes
    • expectation-maximisation algorithm
