Human Action Recognition
We developed a novel framework for view-invariant recognition of human actions and human body poses. Unlike previous works that regard an action as a whole object, or as a sequence of individual poses, we represent an action as a set of pose transitions defined by all possible triplets of body points; that is, we further break down each pose into a set of point triplets and find invariants for the motion of these triplets across frames. We proposed two distinct methods of recognizing pose transitions independently of the camera calibration matrix and viewpoint, and applied them to the problems of action recognition and pose recognition (see the sketch below). The UCF-CIL Action Dataset was used in this work.
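The triplet decomposition itself can be illustrated with a short sketch. The code below only enumerates the triplet transition units between consecutive frames; the input format, array shapes, and function name are assumptions made for illustration, and the actual view-invariant measures computed from each unit are developed in the publications listed below, not implemented here.

```python
# A minimal sketch of the point-triplet decomposition described above,
# NOT the authors' invariant computation.  The (T, N, 2) input format and
# the function name are illustrative assumptions.
from itertools import combinations
import numpy as np

def triplet_transitions(pose_sequence):
    """Break an action into pose transitions of point triplets.

    pose_sequence: array of shape (T, N, 2) -- N tracked 2D body points
                   over T frames (an assumed input format).
    Returns a list of (frame_index, triplet_indices, points_t, points_t1)
    tuples, one per triplet per consecutive frame pair.  A view-invariant
    descriptor would then be computed from each such unit; that step is
    described in the papers below and omitted here.
    """
    T, N, _ = pose_sequence.shape
    units = []
    for t in range(T - 1):
        for idx in combinations(range(N), 3):         # all point triplets
            pts_t = pose_sequence[t, list(idx)]       # triplet in frame t
            pts_t1 = pose_sequence[t + 1, list(idx)]  # same triplet in frame t+1
            units.append((t, idx, pts_t, pts_t1))
    return units

# Example: 13 tracked body points over 40 frames yield
# 39 * C(13, 3) = 39 * 286 = 11154 triplet transitions.
demo = np.random.rand(40, 13, 2)
print(len(triplet_transitions(demo)))  # 11154
```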
Publications
- Yuping Shen and Hassan Foroosh, View Invariant Action Recognition from Point Triplets, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), to appear, 2009. (PDF)
- Yuping Shen and Hassan Foroosh, View Invariant Action Recognition Using Fundamental Ratios, Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008. (PDF)
- Yuping Shen and Hassan Foroosh, View Invariant Recognition of Body Pose from Space-Time Templates, Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008. (PDF)