Abstract: Large Language Models (LLMs) have significantly advanced natural language processing tasks but remain heavily reliant on well-crafted prompts for optimal performance. However, manual prompt engineering is time-consuming, often sub-optimal, and lacks robustness to perturbations in input prompts. Additionally, there is a growing need to address the security implications of prompt extraction attacks. Existing research has proposed automated methods for prompt engineering, ranging from rewriting under-optimized prompts to generating high-quality, human-like prompts from scratch. Despite these advancements, challenges persist in achieving prompt effectiveness, robustness, and security.

Keywords: NLP, LLM, Prompt Engineering, Prompt Recovery, Zero-shot, Few-shot, Chain-of-Thought.


PDF | DOI: 10.17148/IJARCCE.2025.14551
