  • Open Access
  • Editorial

Perspective on Artificial Intelligence for Security

  • Cong Wang ¹
  • Wei Bao ²,*

Received: 04 Aug 2025 | Accepted: 29 Aug 2025 | Published: 29 Aug 2025

Abstract

The unprecedented rise of Artificial Intelligence (AI) in recent years has not only revolutionized automation, reasoning, and language understanding but has also introduced new vectors of risk and vulnerability. As AI systems, especially large language models (LLMs), become increasingly embedded in safety-critical domains such as healthcare, finance, and infrastructure, their security becomes not just a technical concern but also a societal imperative. This special issue on AI Security in the Transactions on Artificial Intelligence (TAI) brings together contemporary research on the threats, defenses, and emerging paradigms associated with secure AI deployment.

How to Cite
Wang, C.; Bao, W. Perspective on Artificial Intelligence for Security. Transactions on Artificial Intelligence 2025, 1 (1), 197–198. https://doi.org/10.53941/tai.2025.100012.
Copyright & License
Copyright (c) 2025 by the authors.