  • Open Access
  • Review

A Survey on Neural Dynamics for Computing and Control: Theories, Models, and Applications

  • Long Jin

Received: 16 Nov 2025 | Revised: 25 Dec 2025 | Accepted: 06 Jan 2026 | Published: 13 Jan 2026

Abstract

Neural dynamics offers a powerful and unifying framework for understanding systems that learn, adapt, and interact. This survey provides a comprehensive overview of the theories, models, and applications of neural dynamics at the intersection of computing and control. We first articulate the core concept of neural dynamics and explain its close connection to dynamical systems theory. We then demonstrate the broad applicability of neural dynamics by reviewing a wide range of models across key domains. In control, we survey neural dynamics approaches to the classical problems of stability and optimality, particularly in control systems and multi-agent systems (MASs). In computing, we focus on deep learning, analyzing both model architectures and optimizers as dynamical systems. The principal contribution of this work is to bridge these domains, revealing how neural dynamics theories govern topics in both computation and control. This integrated viewpoint illuminates numerous applications and points toward future research directions for advanced models in computation and control.
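To make the dynamical-systems reading of deep learning concrete, the minimal sketch below (an illustrative example of ours, not code from the survey) treats a stack of residual blocks as forward-Euler steps of an ODE dx/dt = f(x, w), and plain gradient descent on the weight as an Euler discretization of the gradient flow dw/dt = -dL/dw. The toy functions `f`, `residual_forward`, and `train` are hypothetical names chosen for the example.

```python
import numpy as np

def f(x, w):
    """Toy layer / vector field: a tanh unit with scalar weight w (hypothetical example)."""
    return np.tanh(w * x)

def residual_forward(x0, w, h=0.1, depth=50):
    """Stack of residual blocks x_{k+1} = x_k + h * f(x_k, w),
    i.e., forward-Euler integration of dx/dt = f(x, w) over depth steps."""
    x = x0
    for _ in range(depth):
        x = x + h * f(x, w)  # one residual block == one Euler step
    return x

def train(x0, target, w=0.5, eta=0.01, steps=200, eps=1e-4):
    """Gradient descent w_{k+1} = w_k - eta * dL/dw, the Euler discretization of the
    gradient flow dw/dt = -dL/dw; the gradient is taken by finite differences so the
    sketch stays self-contained."""
    loss = (residual_forward(x0, w) - target) ** 2
    for _ in range(steps):
        loss_plus = (residual_forward(x0, w + eps) - target) ** 2
        grad = (loss_plus - loss) / eps
        w = w - eta * grad  # one optimizer step == one Euler step of the flow
        loss = (residual_forward(x0, w) - target) ** 2
    return w, loss

if __name__ == "__main__":
    w_star, final_loss = train(x0=1.0, target=2.0)
    print(f"learned weight {w_star:.3f}, final loss {final_loss:.3e}")
```

Shrinking the step size h while increasing the depth recovers the continuous-time limit studied in neural-ODE formulations; the same Euler viewpoint applies to the optimizer, which is why architectures and training algorithms can be analyzed with the same dynamical-systems tools.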

How to Cite
Jin, L. A Survey on Neural Dynamics for Computing and Control: Theories, Models, and Applications. Journal of Artificial Intelligence for Automation 2026, 1 (1), 1.
Copyright & License
Copyright (c) 2026 by the authors.