CV
AAAI

Count What You Want: Exemplar Identification and Few-shot Counting of Human Actions in the Wild

January 8, 2024

This paper addresses the task of counting human actions of interest using sensor data from wearable devices. We propose a novel exemplar-based framework that allows users to provide exemplars of the actions they want to count by vocalizing the predefined sounds “one”, “two”, and “three”. Our method first localizes the temporal positions of these utterances in the audio sequence. These positions serve as the basis for identifying exemplars representing the action class of interest. A similarity map is then computed between the exemplars and the entire sensor data sequence and fed into a density estimation module, which produces a sequence of estimated density values. Summing these density values yields the final count. To develop and evaluate our approach, we introduce a diverse and realistic dataset consisting of real-world data from 37 subjects and 50 action categories, encompassing both sensor and audio data. Experiments on this dataset demonstrate the viability of the proposed method in counting instances of actions from new classes and subjects that were not part of the training data. On average, the discrepancy between the predicted count and the ground-truth value is 7.47, significantly lower than the errors of frequency-based and transformer-based methods. Our code and dataset can be found at https://github.com/cvlab-stonybrook/ExRAC.
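To make the pipeline above concrete, the sketch below shows how a similarity map between exemplar features and the full sensor sequence can be turned into a count by a small density estimation module. This is a minimal illustration under assumed choices, not the authors' implementation (see the linked ExRAC repository for that): the feature dimensions, the convolutional density estimator, and the names `DensityEstimator` and `count_actions` are all invented for this example.

```python
# Illustrative sketch of exemplar-based action counting:
# similarity map between exemplars and sequence -> density -> count.
# All shapes, names, and the estimator architecture are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DensityEstimator(nn.Module):
    """Maps a per-timestep similarity map to non-negative density values."""

    def __init__(self, num_exemplars: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(num_exemplars, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=5, padding=2),
            nn.Softplus(),  # densities must be non-negative
        )

    def forward(self, sim_map: torch.Tensor) -> torch.Tensor:
        # sim_map: (batch, num_exemplars, seq_len) -> (batch, seq_len)
        return self.net(sim_map).squeeze(1)


def count_actions(seq_feats: torch.Tensor,
                  exemplar_feats: torch.Tensor,
                  estimator: DensityEstimator) -> torch.Tensor:
    """seq_feats: (T, D) features of the full sensor sequence.
    exemplar_feats: (K, D) pooled features of the K exemplar segments
    (K = 3 here, one per vocalized "one"/"two"/"three" utterance).
    Returns the predicted count: the sum of the estimated density sequence.
    """
    # Cosine-similarity map between each exemplar and every timestep.
    seq = F.normalize(seq_feats, dim=-1)      # (T, D)
    ex = F.normalize(exemplar_feats, dim=-1)  # (K, D)
    sim_map = (ex @ seq.T).unsqueeze(0)       # (1, K, T)

    density = estimator(sim_map)              # (1, T)
    return density.sum(dim=-1)                # one count per sequence


if __name__ == "__main__":
    T, D, K = 1000, 64, 3  # toy sequence length, feature dim, #exemplars
    estimator = DensityEstimator(num_exemplars=K)
    pred = count_actions(torch.randn(T, D), torch.randn(K, D), estimator)
    print(f"Predicted count: {pred.item():.2f}")
```

Summing a density sequence rather than detecting discrete events is what lets the count degrade gracefully when action boundaries are ambiguous; with random features as above, the output is of course meaningless and serves only to show the tensor flow.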

Yifeng Huang*, Duc Nguyen Duy*, Lam Nguyen Thanh, Cuong Pham, Minh Hoai

