Estimating pose of articulated objects using low-level motion

Ben Daubney, David Gibson, Neill Campbell

    Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

    Abstract

    In this work a method is presented to track and estimate the pose of articulated objects using the motion of a sparse set of moving features. This is achieved using a bottom-up generative approach based on the Pictorial Structures representation [1]. However, unlike previous approaches that rely on appearance, our method depends entirely on motion: initial low-level part detection is based on how a region moves rather than on how it appears. This work is best described as Pictorial Structures using motion. A standard feature tracker is used to automatically extract a sparse set of features. These features typically contain many tracking errors; however, the presented approach is able to overcome both these errors and the sparsity of the features. The proposed method is applied to two problems: 2D pose estimation of articulated objects walking side-on to the camera, and 3D pose estimation of humans walking and jogging at arbitrary orientations to the camera. In each domain, quantitative results are reported that improve on the state of the art. The motivation of this work is to illustrate the information present in low-level motion that can be exploited for the task of pose estimation. © 2011 Elsevier Inc. All rights reserved.
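
    As a rough illustration of the low-level input the abstract refers to, the sketch below shows sparse feature tracking with an OpenCV-style KLT pipeline (cv2.goodFeaturesToTrack followed by cv2.calcOpticalFlowPyrLK), yielding per-feature displacements of the kind a motion-based part detector could use. The video path and parameter values are placeholders, and the paper's actual tracker, part detectors, and Pictorial Structures inference are not reproduced here.

    ```python
    # Minimal sketch of sparse KLT feature tracking (assumed OpenCV pipeline).
    # "walking.mp4" and the detector/tracker parameters are placeholders; the
    # paper's motion-based part detection and Pictorial Structures model are
    # not implemented here.
    import cv2

    cap = cv2.VideoCapture("walking.mp4")
    ok, prev_frame = cap.read()
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

    # Detect an initial sparse set of corner features to track.
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=5)

    tracks = []  # per-frame (positions, displacements) of surviving features
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Pyramidal Lucas-Kanade optical flow gives the new feature positions;
        # tracking errors are expected and left for the model to absorb.
        next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                       prev_pts, None)
        good_new = next_pts[status.ravel() == 1]
        good_old = prev_pts[status.ravel() == 1]
        if len(good_new) == 0:
            break

        # The per-feature displacement is the low-level motion cue on which
        # part detection would be based (as opposed to appearance).
        displacement = good_new - good_old
        tracks.append((good_new.reshape(-1, 2), displacement.reshape(-1, 2)))

        prev_gray, prev_pts = gray, good_new.reshape(-1, 1, 2)

    cap.release()
    ```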
    Original language: English
    Title of host publication: Computer Vision and Image Understanding
    Pages: 330-346
    Number of pages: 17
    DOIs
    Publication status: Published - Mar 2012

    Publication series

    Name: Computer Vision and Image Understanding
    Volume: 116

    Keywords

    • Low-level motion
    • Pictorial Structures
    • Pose estimation
    • Sparse features
