  • Open Access
  • Article

LightweightPhys: A Lightweight and Robust Network for Remote Photoplethysmography Signal Extraction

  • Yu Liu 1,   
  • Yinqiao Li 1,   
  • Yan He 1,   
  • Tao Wang 1, *,   
  • Zhigao Zheng 2

Received: 25 Apr 2025 | Revised: 18 Aug 2025 | Accepted: 29 Aug 2025 | Published: 08 Sep 2025

Abstract

Remote Photoplethysmography (rPPG) has emerged as a promising technology for non-contact heart rate monitoring by analyzing subtle color variations in facial videos caused by changes in blood volume. While traditional rPPG methods often struggle with environmental noise and motion artifacts, deep learning-based approaches have shown improved accuracy but typically come with high computational costs, limiting their applicability in real-time, resource-constrained scenarios. To address these challenges, this paper introduces LightweightPhys, a novel 3D-CNN-based network that combines Depthwise Separable Convolution (DSC) and a newly proposed Simulated Temporal Noise Suppression (Sim-TN) module. The DSC module significantly reduces the computational complexity of the model by decoupling spatial and channel-wise convolutions, making it more efficient for deployment on low-power devices. Meanwhile, the Sim-TN module enhances the robustness of feature extraction by effectively suppressing temporal noise, a common issue in rPPG signal processing due to environmental factors and subject movement. Extensive evaluations on two widely used datasets, PURE and UBFC-rPPG, demonstrate that LightweightPhys achieves state-of-the-art performance in heart rate estimation while maintaining significantly lower computational overhead than existing deep learning models. This makes LightweightPhys particularly suitable for real-time health monitoring applications on resource-constrained devices, such as wearable gadgets and mobile health platforms.
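The parameter saving from depthwise separable convolution described above can be verified with simple arithmetic: a standard 3D convolution with kernel size k needs C_in × C_out × k³ weights, while the separable version needs only C_in × k³ (depthwise) plus C_in × C_out (pointwise). The sketch below illustrates this; the channel counts and kernel size are hypothetical and are not taken from the paper's actual layer configuration.

```python
def conv3d_params(c_in: int, c_out: int, k: int) -> int:
    """Weight count of a standard 3D convolution (bias ignored)."""
    return c_in * c_out * k ** 3

def dsc3d_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise separable 3D convolution: one k*k*k filter per input
    channel (depthwise), then a 1x1x1 pointwise conv to mix channels."""
    depthwise = c_in * k ** 3   # spatial/temporal filtering, per channel
    pointwise = c_in * c_out    # 1x1x1 channel mixing
    return depthwise + pointwise

if __name__ == "__main__":
    c_in, c_out, k = 32, 64, 3          # illustrative layer sizes
    std = conv3d_params(c_in, c_out, k)  # 32 * 64 * 27 = 55296
    dsc = dsc3d_params(c_in, c_out, k)   # 32 * 27 + 32 * 64 = 2912
    print(std, dsc, round(std / dsc, 1))
```

For these example sizes the separable factorization uses roughly 19× fewer parameters, which is the source of the efficiency gain the abstract attributes to the DSC module.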

How to Cite
Liu, Y.; Li, Y.; He, Y.; Wang, T.; Zheng, Z. LightweightPhys: A Lightweight and Robust Network for Remote Photoplethysmography Signal Extraction. Journal of Advanced Digital Communications 2025, 2 (1), 2. https://doi.org/10.53941/jadc.2025.100002.
Copyright & License
Copyright (c) 2025 by the authors.