
Motion-let clustering for skeleton-based action recognition

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

The representation of the 3D skeleton is critical for human action recognition. In this paper, we propose a new approach to skeleton-based action recognition based on motion-let clustering. To discover discriminative action features, action sequences are represented by the 3D movements of joints across neighboring frames, called motion-lets. We divide the joints into four groups according to the root-to-end structure and project the motion-lets onto 2D planes. Within each 2D projection, the motion-lets of each group are clustered to form a primitive histogram representation of actions. A temporal pyramid is employed to capture the temporal ordering of motion-lets, followed by the Random Forest algorithm for classification. The proposed method is validated on four benchmark 3D human action datasets, and the experimental results show that it outperforms state-of-the-art deep-learning-based methods.
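The pipeline described in the abstract — frame-to-frame joint displacements, 2D projection, clustering, and histogram pooling — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic data, the number of clusters `K`, the fixed centroids (standing in for the paper's clustering step), and the choice of coordinate planes are all assumptions, and the joint grouping and temporal pyramid are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic skeleton sequence: T frames, J joints, 3D coordinates.
# (A real sequence would come from a dataset such as NTU RGB+D.)
T, J = 20, 16
seq = rng.normal(size=(T, J, 3))

# Motion-lets: 3D joint displacements between neighboring frames.
motion_lets = seq[1:] - seq[:-1]                  # shape (T-1, J, 3)

# Project each 3D motion-let onto three coordinate planes (xy, yz, xz).
projections = [motion_lets[..., [0, 1]],
               motion_lets[..., [1, 2]],
               motion_lets[..., [0, 2]]]

# Illustrative stand-in for the clustering step: K fixed 2D centroids.
K = 8
centroids = rng.normal(size=(K, 2))

def histogram(points_2d, centroids):
    """Nearest-centroid assignment followed by a normalized count histogram."""
    d = np.linalg.norm(points_2d[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centroids)).astype(float)
    return hist / hist.sum()

# One histogram per 2D projection; concatenate into an action descriptor,
# which would then feed a classifier such as a Random Forest.
descriptor = np.concatenate(
    [histogram(p.reshape(-1, 2), centroids) for p in projections])
print(descriptor.shape)   # (24,) = 3 projections x 8 clusters
```

In the paper the histograms are additionally computed per joint group and per temporal-pyramid level before classification, so the real descriptor is correspondingly longer.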

Original language: English
Title of host publication: Proceedings - 2019 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 150-155
Number of pages: 6
ISBN (Electronic): 9781538692141
DOIs
State: Published - Jul 2019
Event: 2019 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2019 - Shanghai, China
Duration: Jul 8 2019 - Jul 12 2019

Publication series

Name: Proceedings - 2019 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2019

Conference

Conference: 2019 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2019
Country/Territory: China
City: Shanghai
Period: 07/8/19 - 07/12/19

Keywords

  • Action Recognition
  • Motion-let
  • Primitive Descriptor
  • Skeleton
