CapMatch: Semi-Supervised Contrastive Transformer Capsule With Feature-Based Knowledge Distillation for Human Activity Recognition

Xiao, Zhiwen; Tong, Huagang; Qu, Rong; Xing, Huanlai; Luo, Shouxi; Zhu, Zonghai; Song, Fuhong; Feng, Li


Authors

Zhiwen Xiao

Huagang Tong

Rong Qu rong.qu@nottingham.ac.uk
Professor of Computer Science

Huanlai Xing

Shouxi Luo

Zonghai Zhu

Fuhong Song

Li Feng



Abstract

This article proposes a semi-supervised contrastive capsule transformer method with feature-based knowledge distillation (KD), called CapMatch, that simplifies existing semi-supervised learning (SSL) techniques for wearable human activity recognition (HAR). CapMatch gracefully hybridizes supervised and unsupervised learning to extract rich representations from input data. In unsupervised learning, CapMatch leverages pseudolabeling, contrastive learning (CL), and feature-based KD to construct similarity learning on lower- and higher-level semantic information extracted from two augmented versions of the data, “weak” and “timecut”, to recognize the relationships among the obtained features of classes in the unlabeled data. CapMatch combines the outputs of the weak- and timecut-augmented models to form pseudolabels and thus CL. Meanwhile, CapMatch uses feature-based KD to transfer knowledge from the intermediate layers of the weak-augmented model to those of the timecut-augmented model. To effectively capture both local and global patterns of HAR data, we design a capsule transformer network consisting of four capsule-based transformer blocks and one routing layer. Experimental results show that, compared with a number of state-of-the-art semi-supervised and supervised algorithms, the proposed CapMatch achieves decent performance on three commonly used HAR datasets, namely, HAPT, WISDM, and UCI_HAR. With only 10% of the data labeled, CapMatch achieves F1 values higher than 85.00% on these datasets, outperforming 14 semi-supervised algorithms. When the proportion of labeled data reaches 30%, CapMatch obtains F1 values no lower than 88.00% on the datasets above, which is better than several classical supervised algorithms, e.g., decision tree and k-nearest neighbor (KNN).
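
The abstract describes three unsupervised ingredients: pseudolabeling over two augmented views, a contrastive term between them, and feature-based KD from the weak branch to the timecut branch. The following PyTorch sketch is a rough illustration only, not the authors' code: the exact augmentation forms, the confidence threshold tau, the temperature temp, and the assumption that the model returns its intermediate features are all illustrative choices rather than details taken from the paper.

import torch
import torch.nn.functional as F

def weak_augment(x, noise_std=0.01):
    # Weak view: small Gaussian jitter on the sensor signal (assumed form).
    return x + noise_std * torch.randn_like(x)

def timecut_augment(x, cut_ratio=0.2):
    # "Timecut" view (assumed form): zero out a random span along the time axis.
    x = x.clone()
    T = x.size(-1)
    cut = int(cut_ratio * T)
    start = torch.randint(0, T - cut + 1, (1,)).item()
    x[..., start:start + cut] = 0.0
    return x

def unsupervised_losses(model, x_unlabeled, tau=0.95, temp=0.5):
    # model(x) is assumed to return (class logits, list of intermediate features).
    x_w, x_t = weak_augment(x_unlabeled), timecut_augment(x_unlabeled)
    logits_w, feats_w = model(x_w)
    logits_t, feats_t = model(x_t)

    # 1) Pseudolabeling: average the two views' predictions, keep confident ones,
    #    and supervise the timecut view with the resulting hard labels.
    probs = (logits_w.softmax(-1) + logits_t.softmax(-1)) / 2
    conf, pseudo = probs.max(-1)
    mask = (conf >= tau).float()
    loss_pl = (F.cross_entropy(logits_t, pseudo, reduction="none") * mask).mean()

    # 2) Contrastive term (NT-Xent style): the two views of the same sample are
    #    positives; all other samples in the batch act as negatives.
    z_w = F.normalize(feats_w[-1].flatten(1), dim=1)
    z_t = F.normalize(feats_t[-1].flatten(1), dim=1)
    sim = (z_w @ z_t.t()) / temp                       # (B, B) similarities
    targets = torch.arange(sim.size(0), device=sim.device)
    loss_cl = F.cross_entropy(sim, targets)

    # 3) Feature-based KD: match intermediate layers, weak view as the teacher.
    loss_kd = sum(F.mse_loss(f_t, f_w.detach())
                  for f_w, f_t in zip(feats_w, feats_t))

    return loss_pl, loss_cl, loss_kd

In training, these three terms would be combined with a standard supervised cross-entropy on the labeled minibatch, with whatever weighting the paper specifies; the sketch only illustrates the data flow the abstract describes.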

Citation

Xiao, Z., Tong, H., Qu, R., Xing, H., Luo, S., Zhu, Z., …Feng, L. (2023). CapMatch: Semi-Supervised Contrastive Transformer Capsule With Feature-Based Knowledge Distillation for Human Activity Recognition. IEEE Transactions on Neural Networks and Learning Systems, 1-15. https://doi.org/10.1109/TNNLS.2023.3344294

Journal Article Type Article
Acceptance Date Dec 10, 2023
Online Publication Date Dec 27, 2023
Publication Date Dec 27, 2023
Deposit Date Dec 30, 2023
Publicly Available Date Jan 2, 2024
Journal IEEE Transactions on Neural Networks and Learning Systems
Electronic ISSN 2162-237X
Publisher Institute of Electrical and Electronics Engineers
Peer Reviewed Peer Reviewed
Pages 1-15
DOI https://doi.org/10.1109/TNNLS.2023.3344294
Keywords Human activity recognition, Feature extraction, Semantics, Transformers, Data mining, Classification algorithms, Unsupervised learning, Capsule network (CapNet), contrastive learning (CL), human activity recognition (HAR), knowledge distillation
Public URL https://nottingham-repository.worktribe.com/output/29001505
Publisher URL https://ieeexplore.ieee.org/document/10375112
