Glossary

Prompt Engineering

Prompt Engineering is the practice of crafting input text that elicits desired behaviour from a large language model without modifying its weights. Because LLMs are highly sensitive to how tasks are framed, small changes in wording can produce large differences in output quality. Prompt engineering has emerged as an essential skill for practitioners working with LLMs, combining linguistic intuition, systematic experimentation, and an understanding of how models respond to different input patterns.

Common techniques include: providing clear instructions at the start of the prompt; using few-shot examples to demonstrate the desired format and style; requesting step-by-step reasoning (chain-of-thought); specifying output format explicitly (JSON, bullet points, specific length); assigning the model a role ("You are an expert mathematician..."); breaking complex tasks into smaller steps; and using delimiters to separate instructions from data. System prompts—instructions that precede the user's query and often remain invisible to the user—can set overall tone and constraints.
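Several of the techniques above can be combined in a single prompt. The sketch below assembles a prompt that uses a role assignment, few-shot examples, an explicit output format, and delimiters separating instructions from data; the function name, example reviews, and labels are illustrative assumptions, not tied to any particular model or API.

```python
# Illustrative sketch: building a prompt string that combines a role
# assignment, few-shot examples, an explicit output format, and
# delimiters. All names and example texts are hypothetical.

FEW_SHOT_EXAMPLES = [
    ("The service was quick and the staff were friendly.", "positive"),
    ("My order arrived two weeks late and damaged.", "negative"),
]

def build_sentiment_prompt(text: str) -> str:
    """Return a single prompt string for sentiment classification."""
    lines = [
        "You are an expert sentiment analyst.",              # role assignment
        "Classify the review as 'positive' or 'negative'.",  # clear instruction
        "Respond with exactly one word.",                    # explicit output format
        "",
    ]
    for review, label in FEW_SHOT_EXAMPLES:                  # few-shot examples
        lines.append(f"Review: ```{review}```")              # delimiters around data
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: ```{text}```")
    lines.append("Sentiment:")                               # cue the answer slot
    return "\n".join(lines)

prompt = build_sentiment_prompt("The food was cold but the view was lovely.")
```

The resulting string would be sent as the user message (or appended to a system prompt); ending the prompt at "Sentiment:" nudges the model to complete with just the label.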

More advanced techniques include retrieval augmentation (pulling relevant context from external sources), self-consistency (sampling multiple answers and taking the majority), self-critique (asking the model to evaluate and improve its own output), and constitutional prompting (specifying principles the model should follow). As models become more capable, prompt engineering shades into prompt programming, where carefully crafted prompts become reusable components of larger applications. The field is empirical and evolves rapidly; what works for GPT-4 may need adjustment for Claude or Gemini, and techniques that worked with earlier model versions may be obsolete with newer ones.

Related terms: Large Language Model, Chain-of-Thought, In-Context Learning

Discussed in:

Also defined in: Textbook of AI