Open Access
Article

Prompt Injection Detection in LLM Integrated Applications

Qianlong Lan*
AnujKaul*
Shaun Jones*
Author Information
Submitted: 14 Aug 2024 | Accepted: 15 Mar 2025 | Published: 30 Jun 2025

Abstract

The integration of large language models (LLMs) into creative applications has unlocked new capabilities but also introduced vulnerabilities, notably prompt injections. These are malicious inputs designed to manipulate model responses, posing threats to security, privacy, and functionality. This paper delves into the mechanisms of prompt injections, their impacts, and presents novel detection strategies. More specifically, the necessity for robust detection systems is outlined, a predefined list of banned terms is combined to embed techniques for similarity search, and a BERT (Bidirectional Encoder Representa- tions from Transformers) model is built to identify and mitigate prompt injections effectively with the aim to neutralize prompt injections in real-time. The research highlights the challenges in balancing secu- rity with usability, evolving attack vectors, and LLM limitationsm, and emphasizes the significance of securing LLM-integrated applications against prompt injections to preserve data privacy, user trust, and uphold ethical standards. This work aims to foster collaboration for developing standardized security frameworks, contributing to more safer and reliable AI-driven systems.

References

Share this article:
Graphical Abstract
How to Cite
Lan, Q., AnujKaul, & Jones, S. (2025). Prompt Injection Detection in LLM Integrated Applications. International Journal of Network Dynamics and Intelligence, 4(2), 100013. https://doi.org/10.53941/ijndi.2025.100013
RIS
BibTex
Copyright & License
article copyright Image
Copyright (c) 2025 by the authors.
scilight logo

About Scilight

Contact Us

Suite 4002 Level 4, 447 Collins Street, Melbourne, Victoria 3000, Australia
General Inquiries: info@sciltp.com
© 2025 Scilight Press Pty Ltd All rights reserved.