  • Open Access
  • Article

Few-Shot Classification Using Ensemble of Multi-Scale Median-Enhanced Features

  • Chao Yang,   
  • Sunjie Zhang *,   
  • Zhanqiang Liu

Received: 03 Jul 2025 | Revised: 25 Nov 2025 | Accepted: 28 Dec 2025 | Published: 09 Mar 2026

Abstract

Few-shot learning aims to train classifiers that recognize novel objects from limited samples, a task whose key challenges are robust feature extraction and discriminative representation. To address these issues, we propose a Median-Enhanced Multi-Scale Adaptive Network. First, an adaptive fusion convolution module with deformable kernels is designed to capture spatially transformed features, improving cross-domain adaptability. Second, a median-enhanced attention mechanism integrates median filtering with channel attention, suppressing feature noise and outliers while highlighting discriminative patterns. Finally, we develop a hierarchical metric learning framework that combines multi-scale feature representations with learnable similarity metrics. Experimental results demonstrate that the proposed method outperforms state-of-the-art approaches, achieving accuracy gains over the SetFeat model of 1.27% (1-shot) and 1.12% (5-shot) on Mini-ImageNet, 1.76%/1.52% on Tiered-ImageNet, and 2.28%/2.21% on CUB.
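The core idea of the median-enhanced attention mechanism (a spatial median filter to suppress outliers, followed by channel-wise attention weighting) can be illustrated with a minimal numpy sketch. All names, the 3×3 window size, and the simple global-average-pool sigmoid gate are illustrative assumptions, not the paper's actual module, which operates inside a deep network on learned feature maps.

```python
import numpy as np

def median_enhanced_attention(feats, k=3):
    """Illustrative sketch (not the paper's implementation):
    1) per-channel k x k spatial median filtering to suppress
       isolated outliers in the feature map;
    2) channel attention via global average pooling and a
       sigmoid gate, reweighting each channel.
    feats: array of shape (C, H, W)."""
    C, H, W = feats.shape
    pad = k // 2
    padded = np.pad(feats, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    filtered = np.empty_like(feats)
    for i in range(H):
        for j in range(W):
            patch = padded[:, i:i + k, j:j + k].reshape(C, -1)
            filtered[:, i, j] = np.median(patch, axis=1)
    pooled = filtered.mean(axis=(1, 2))          # (C,) per-channel statistic
    weights = 1.0 / (1.0 + np.exp(-pooled))      # sigmoid gate, (C,)
    return filtered * weights[:, None, None]     # channel-reweighted features

# A lone spike in an otherwise flat channel is removed by the median filter.
feats = np.zeros((2, 5, 5))
feats[0, 2, 2] = 100.0
out = median_enhanced_attention(feats)
print(out.shape)        # (2, 5, 5)
print(out[0, 2, 2])     # ~0.0: the outlier has been suppressed
```

The usage example shows the noise-suppression property the abstract claims: an isolated outlier contributes nothing after median filtering, while smooth discriminative structure would pass through and be reweighted by the channel gate.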

References 

  • 1.

    Zhang, X.Y.; Zhou, X.Y.; Lin, M.X.; et al. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 6848–6856. https://doi.org/10.1109/CVPR.2018.00716.

  • 2.

    Wang, Y.X.; Hebert, M. Learning from Small Sample Sets by Combining Unsupervised Meta-Training with CNNs. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; Volume 29.

  • 3.

    Lee, K.; Maji, S.; Ravichandran, A.; et al. Meta-Learning with Differentiable Convex Optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 10649–10657. https://doi.org/10.1109/CVPR.2019.01091.

  • 4.

    Afrasiyabi, A.; Lalonde, J.-F.; Gagne, C.; et al. Mixture-Based Feature Space Learning for Few-Shot Image Classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 9021–9031. https://doi.org/10.1109/ICCV48922.2021.00891.

  • 5.

    Wang, C.; Wang, Z.; Liu, W.; et al. A Novel Deep Offline-to-Online Transfer Learning Framework for Pipeline Leakage Detection with Small Samples. IEEE Trans. Instrum. Meas. 2022, 72, 1–13. https://doi.org/10.1109/TIM.2022.3220302.

  • 6.

    Rao, S.; Huang, J.; Tang, Z. RdProtoFusion: Refined Discriminative Prototype-Based Multi-Task Fusion for Cross-Domain Few-Shot Learning. Neurocomputing 2024, 599, 128117. https://doi.org/10.1016/j.neucom.2024.128117.

  • 7.

    Zhang, C.; Cai, Y.; Lin, G.; et al. DeepEMD: Differentiable Earth Mover’s Distance for Few-Shot Learning. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 5632–5648. https://doi.org/10.1109/TPAMI.2022.3217373.

  • 8.

    Xu, C.; Fu, Y.; Liu, C.; et al. Learning Dynamic Alignment via Meta-Filter for Few-Shot Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 5178–5187. https://doi.org/10.1109/CVPR46437.2021.00514.

  • 9.

    Afrasiyabi, A.; Larochelle, H.; Lalonde, J.-F.; et al. Matching Feature Sets for Few-Shot Image Classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 9004–9014. https://doi.org/10.1109/CVPR52688.2022.00881.

  • 10.

    Zhang, Y.; Zhou, X.; Wang, N.; et al. DOUN-GNN: Double Nodes Graph Neural Network for Few-Shot Learning. Neurocomputing 2025, 617, 128970. https://doi.org/10.1016/j.neucom.2024.128970.

  • 11.

    Chen, H.; Wu, R.; Tao, C.; et al. Multi-Scale Class Attention Network for Diabetes Retinopathy Grading. Int. J. Network Dyn. Intell. 2024, 3, 100012. https://doi.org/10.53941/ijndi.2024.100012.

  • 12.

    Ma, C.; Cheng, P.; Cai, C.; et al. Localization and Mapping Method Based on Multimodal Information Fusion and Deep Learning for Dynamic Object Removal. Int. J. Network Dyn. Intell. 2024, 3, 100008. https://doi.org/10.53941/ijndi.2024.100008.

  • 13.

    Chen, Z.; Zhang, L.; Tang, J.; et al. Conditional Generative Adversarial Net Based Feature Extraction Along with Scalable Weakly Supervised Clustering for Facial Expression Classification. Int. J. Network Dyn. Intell. 2024, 3, 100024. https://doi.org/10.53941/ijndi.2024.100024.

  • 14.

    Li, X.; Li, M.; Yan, P.; et al. Deep Learning Attention Mechanism in Medical Image Analysis: Basics and Beyonds. Int. J. Network Dyn. Intell. 2023, 2, 93–116. https://doi.org/10.53941/ijndi0201006.

  • 15.

    Rashid, K.I.; Yang, C.; Huang, C.; et al. Fast-DSAGCN: Enhancing Semantic Segmentation with Multifaceted Attention Mechanisms. Neurocomputing 2024, 587, 127625. https://doi.org/10.1016/j.neucom.2024.127625.

  • 16.

    Wang, C.; Wang, Z.; Dong, H.; et al. A Novel Prototype-Assisted Contrastive Adversarial Network for Weak-Shot Learning with Applications: Handling Weakly Labeled Data. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 1234–1245. https://doi.org/10.1109/TPAMI.2023.123456.

  • 17.

    Wang, C.; Wang, Z.; Ma, L.; et al. Subdomain-Alignment Data Augmentation for Pipeline Fault Diagnosis: An Adversarial Self-Attention Network. IEEE Trans. Ind. Inf. 2023, 20, 1374–1384. https://doi.org/10.1109/TII.2023.3275701.

  • 18.

    Chattopadhyay, A.; Sarkar, A.; Howlader, P.; et al. Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; pp. 839–847. https://doi.org/10.1109/WACV.2018.00097.

  • 19.

    Abdelaziz, M.; Zhang, Z. Multi-Scale Kronecker-Product Relation Networks for Few-Shot Learning. Multimedia Tools Appl. 2022, 81, 6703–6722. https://doi.org/10.1007/s11042-021-11735-w.

  • 20.

    Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; et al. An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929. https://doi.org/10.48550/arXiv.2010.11929.

  • 21.

    Vinyals, O.; Blundell, C.; Lillicrap, T.; et al. Matching Networks for One Shot Learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; Volume 29, pp. 3637–3645.

  • 22.

    Lee, S.; Moon, W.; Heo, J.P. Task Discrepancy Maximization for Fine-Grained Few-Shot Classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5331–5340. https://doi.org/10.1109/CVPR52688.2022.00526.

  • 23.

    Chen, Y.; Liu, Z.; Xu, H.; et al. Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 9042–9051. https://doi.org/10.1109/ICCV48922.2021.00893.

  • 24.

    Sun, Q.R.; Liu, Y.Y.; Chua, T.S.; et al. Meta-Transfer Learning for Few-Shot Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019.

  • 25.

    Simon, C.; Koniusz, P.; Nock, R.; et al. Adaptive Subspaces for Few-Shot Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 4135–4144. https://doi.org/10.1109/CVPR42600.2020.00419.

  • 26.

    Snell, J.; Swersky, K.; Zemel, R. Prototypical Networks for Few-Shot Learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Volume 30, pp. 4080–4090.

  • 27.

    Sung, F.; Yang, Y.; Zhang, L.; et al. Learning to Compare: Relation Network for Few-Shot Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 1199–1208. https://doi.org/10.1109/CVPR.2018.00131.

  • 28.

    Huang, H.; Zhang, J.; Zhang, J.; et al. Low-Rank Pairwise Alignment Bilinear Network for Few-Shot Fine-Grained Image Classification. IEEE Trans. Multimedia 2021, 23, 1666–1680. https://doi.org/10.1109/TMM.2020.3001510.

How to Cite
Yang, C.; Zhang, S.; Liu, Z. Few-Shot Classification Using Ensemble of Multi-Scale Median-Enhanced Features. International Journal of Network Dynamics and Intelligence 2025, 5 (1), 5. https://doi.org/10.53941/ijndi.2026.100005.
Copyright & License
Copyright (c) 2026 by the authors.