
Research Repository


A fusion spatial attention approach for few-shot learning

Song, Heda; Deng, Bowen; Pound, Michael; Özcan, Ender; Triguero, Isaac



Authors

Heda Song

Bowen Deng


Ender Özcan ender.ozcan@nottingham.ac.uk
Professor of Computer Science and Operational Research



Abstract

Few-shot learning is a challenging problem in computer vision that aims to learn a new visual concept from very limited data. A core issue is the large amount of uncertainty introduced by the small training set. For example, the few images may include cluttered backgrounds or objects at different scales. Existing approaches mostly address this problem from either the original image space or the embedding space by using meta-learning. To the best of our knowledge, none of them tackle the problem from both spaces jointly. To this end, we propose a fusion spatial attention approach that performs spatial attention in both the image and embedding spaces. In the image space, we employ a Salient Object Detection (SOD) module to extract the saliency map of an image and provide it to the network as an additional channel. In the embedding space, we propose an Adaptive Pooling (Ada-P) module tailored to few-shot learning, which introduces a meta-learner that adaptively fuses the local features of the feature maps for each individual embedding. The fusion process assigns different pooling weights to the features at different spatial locations, so that weighted pooling over an embedding fuses local information while taking the spatial importance of the features into account, avoiding the loss of useful information. The SOD and Ada-P modules can be used as plug-and-play components and incorporated into various existing few-shot learning approaches. We empirically demonstrate that designing spatial attention methods for few-shot learning is a nontrivial task and that our method is effective for it. We evaluate our method using both shallow and deeper networks on three widely used few-shot learning benchmarks, miniImageNet, tieredImageNet and CUB, and demonstrate very competitive performance.
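The spatially weighted pooling idea behind the Ada-P module can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hypothetical illustration, not the authors' code: it assumes a small meta-learner (here a 1x1 convolution, an assumption for brevity) that predicts one pooling weight per spatial location of a feature map and then fuses the local features with those weights instead of plain global average pooling. The comment at the end hints at the SOD side, where a saliency map is concatenated to the input as an extra channel.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveWeightedPooling(nn.Module):
    """Hypothetical sketch of Ada-P-style spatially weighted pooling.

    A small meta-learner scores each spatial location of an embedding's
    feature map; the output embedding is the weighted sum of local
    features rather than a uniform global average.
    """
    def __init__(self, in_channels: int):
        super().__init__()
        # Illustrative meta-learner: a 1x1 conv producing one score per location.
        self.weight_predictor = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        # feature_map: (batch, channels, height, width)
        b, c, h, w = feature_map.shape
        scores = self.weight_predictor(feature_map)             # (b, 1, h, w)
        weights = F.softmax(scores.view(b, 1, h * w), dim=-1)   # normalise over locations
        feats = feature_map.view(b, c, h * w)
        # Weighted fusion of local features -> one embedding vector per image.
        return (feats * weights).sum(dim=-1)                    # (b, c)

if __name__ == "__main__":
    # Usage: pool a 5x5 feature map from a shallow backbone into an embedding.
    pool = AdaptiveWeightedPooling(in_channels=64)
    x = torch.randn(4, 64, 5, 5)
    print(pool(x).shape)  # torch.Size([4, 64])

    # SOD idea (sketch): append a (b, 1, H, W) saliency map to the RGB input
    # as a fourth channel before the backbone, e.g.
    # img4 = torch.cat([rgb_image, saliency_map], dim=1)
```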

Citation

Song, H., Deng, B., Pound, M., Özcan, E., & Triguero, I. (2022). A fusion spatial attention approach for few-shot learning. Information Fusion, 81, 187-202. https://doi.org/10.1016/j.inffus.2021.11.019

Journal Article Type Article
Acceptance Date Nov 22, 2021
Online Publication Date Dec 22, 2021
Publication Date 2022-05
Deposit Date Jan 6, 2022
Publicly Available Date Mar 28, 2024
Journal Information Fusion
Print ISSN 1566-2535
Electronic ISSN 1872-6305
Publisher Elsevier BV
Peer Reviewed Peer Reviewed
Volume 81
Pages 187-202
DOI https://doi.org/10.1016/j.inffus.2021.11.019
Keywords Hardware and Architecture; Information Systems; Signal Processing; Software
Public URL https://nottingham-repository.worktribe.com/output/7169207
Publisher URL https://www.sciencedirect.com/science/article/pii/S156625352100244X?via%3Dihub
