Common Prompt Engineering Techniques

Explore top LinkedIn content from expert professionals.

Summary

Common prompt engineering techniques are structured methods to design and refine text inputs for large language models (LLMs) to yield accurate, useful, and task-specific outputs. These approaches range from breaking tasks into logical steps to designing prompts that emulate reasoning or guide responses.

  • Encourage clear reasoning: Use methods like Chain-of-Thought (CoT) to break down complex tasks into step-by-step logic or Tree-of-Thought (ToT) to explore multiple solutions before selecting the best one.
  • Provide structured prompts: Embed explicit context, roles, or examples in your prompts to shape tone and guide the model's focus, improving clarity and relevance.
  • Iterate and test: Continuously refine prompts and test different formats, such as JSON or XML structures, to enhance outcomes and adapt to specific tasks.
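
To make the first two bullets concrete, here is a minimal sketch of a Chain-of-Thought prompt and a structured role/context prompt. `call_llm` is a hypothetical placeholder for whichever model client you use, and the prompt wording is illustrative only.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real model call (OpenAI, Gemini, Claude, ...)."""
    raise NotImplementedError("Wire this up to your provider's SDK.")

def chain_of_thought_prompt(question: str) -> str:
    # Chain-of-Thought: ask for explicit step-by-step reasoning before the answer.
    return (
        "Solve the problem below. Think step by step and show your reasoning, "
        "then give the final answer on a line starting with 'Answer:'.\n\n"
        f"Problem: {question}"
    )

def structured_prompt(role: str, context: str, example: str, task: str) -> str:
    # Structured prompt: explicit role, context, and an example shape tone and focus.
    return (
        f"You are {role}.\n\n"
        f"Context:\n{context}\n\n"
        f"Example of the desired output:\n{example}\n\n"
        f"Task:\n{task}"
    )
```
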
  • View profile for Sahar Mor

    I help researchers and builders make sense of AI | ex-Stripe | aitidbits.ai | Angel Investor

    40,914 followers

    In the last three months alone, over ten papers outlining novel prompting techniques were published, boosting LLMs’ performance by a substantial margin. Two weeks ago, a groundbreaking paper from Microsoft demonstrated how a well-prompted GPT-4 outperforms Google’s Med-PaLM 2, a specialized medical model, solely through sophisticated prompting techniques. Yet, while our X and LinkedIn feeds buzz with ‘secret prompting tips’, a definitive, research-backed guide aggregating these advanced prompting strategies is hard to come by. This gap prevents LLM developers and everyday users from harnessing these novel frameworks to enhance performance and achieve more accurate results. https://lnkd.in/g7_6eP6y

    In this AI Tidbits Deep Dive, I outline six of the best recent prompting methods:

    (1) EmotionPrompt - inspired by human psychology, this method utilizes emotional stimuli in prompts to gain performance enhancements
    (2) Optimization by PROmpting (OPRO) - a DeepMind innovation that refines prompts automatically, surpassing human-crafted ones. This paper discovered the “Take a deep breath” instruction that improved LLMs’ performance by 9%.
    (3) Chain-of-Verification (CoVe) - Meta's novel four-step prompting process that drastically reduces hallucinations and improves factual accuracy
    (4) System 2 Attention (S2A) - also from Meta, a prompting method that filters out irrelevant details prior to querying the LLM
    (5) Step-Back Prompting - encouraging LLMs to abstract queries for enhanced reasoning
    (6) Rephrase and Respond (RaR) - UCLA's method that lets LLMs rephrase queries for better comprehension and response accuracy

    Understanding the spectrum of available prompting strategies and how to apply them in your app can mean the difference between a production-ready app and a nascent project with untapped potential. Full blog post: https://lnkd.in/g7_6eP6y
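
To make one of these concrete, here is a rough sketch of the four Chain-of-Verification (CoVe) steps described above. The prompt wording is illustrative rather than the paper's own, and `call_llm` is a hypothetical stand-in for a real model client.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real model call."""
    raise NotImplementedError("Replace with your model provider's client.")

def chain_of_verification(question: str) -> str:
    # 1. Draft a baseline answer.
    baseline = call_llm(f"Answer the question:\n{question}")
    # 2. Plan verification questions that would expose errors in the draft.
    plan = call_llm(
        "List short fact-checking questions that would verify the claims in "
        f"this draft answer:\n{baseline}"
    )
    # 3. Answer each verification question independently of the draft,
    #    so the model is not anchored to its own earlier mistakes.
    checks = call_llm(f"Answer each question independently:\n{plan}")
    # 4. Produce a final answer revised in light of the verification results.
    return call_llm(
        f"Question: {question}\n\nDraft answer:\n{baseline}\n\n"
        f"Verification Q&A:\n{checks}\n\n"
        "Write a final answer, correcting anything the verification contradicts."
    )
```
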

  • View profile for Rishab Kumar

    Staff DevRel at Twilio | GitHub Star | GDE | AWS Community Builder

    22,061 followers

    I recently went through the Prompt Engineering guide by Lee Boonstra from Google, and it offers valuable, practical insights. It confirms that getting the best results from LLMs is an iterative engineering process, not just casual conversation. Here are some key takeaways I found particularly impactful:

    1. 𝐈𝐭'𝐬 𝐌𝐨𝐫𝐞 𝐓𝐡𝐚𝐧 𝐉𝐮𝐬𝐭 𝐖𝐨𝐫𝐝𝐬: Effective prompting goes beyond the text input. Configuring model parameters like Temperature (for creativity vs. determinism), Top-K/Top-P (for sampling control), and Output Length is crucial for tailoring the response to your specific needs.

    2. 𝐆𝐮𝐢𝐝𝐚𝐧𝐜𝐞 𝐓𝐡𝐫𝐨𝐮𝐠𝐡 𝐄𝐱𝐚𝐦𝐩𝐥𝐞𝐬: Zero-shot, One-shot, and Few-shot prompting aren't just academic terms. Providing clear examples within your prompt is one of the most powerful ways to guide the LLM on desired output format, style, and structure, especially for tasks like classification or structured data generation (e.g., JSON).

    3. 𝐔𝐧𝐥𝐨𝐜𝐤𝐢𝐧𝐠 𝐑𝐞𝐚𝐬𝐨𝐧𝐢𝐧𝐠: Techniques like Chain of Thought (CoT) prompting – asking the model to 'think step-by-step' – significantly improve performance on complex tasks requiring reasoning (logic, math). Similarly, Step-back prompting (considering general principles first) enhances robustness.

    4. 𝐂𝐨𝐧𝐭𝐞𝐱𝐭 𝐚𝐧𝐝 𝐑𝐨𝐥𝐞𝐬 𝐌𝐚𝐭𝐭𝐞𝐫: Explicitly defining the System's overall purpose, providing relevant Context, or assigning a specific Role (e.g., "Act as a senior software architect reviewing this code") dramatically shapes the relevance and tone of the output.

    5. 𝐏𝐨𝐰𝐞𝐫𝐟𝐮𝐥 𝐟𝐨𝐫 𝐂𝐨𝐝𝐞: The guide highlights practical applications for developers, including generating code snippets, explaining complex codebases, translating between languages, and even debugging/reviewing code – potential productivity boosters.

    6. 𝐁𝐞𝐬𝐭 𝐏𝐫𝐚𝐜𝐭𝐢𝐜𝐞𝐬 𝐚𝐫𝐞 𝐊𝐞𝐲: Specificity: Clearly define the desired output. Ambiguity leads to generic results. Instructions > Constraints: Focus on telling the model what to do rather than just what not to do. Iteration & Documentation: This is critical. Documenting prompt versions, configurations, and outcomes (using a structured template, like the one suggested) is essential for learning, debugging, and reproducing results.

    Understanding these techniques allows us to move beyond basic interactions and truly leverage the power of LLMs. What are your go-to prompt engineering techniques or best practices? Let's discuss! #PromptEngineering #AI #LLM
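
A small illustration of takeaways 1 and 2: a few-shot classification prompt paired with the kinds of sampling parameters the guide discusses. The parameter names and values below are generic assumptions for illustration, not a specific SDK's API.

```python
# Few-shot prompt: the examples teach the model the label set and output format.
FEW_SHOT_PROMPT = """Classify the sentiment of the review as POSITIVE, NEGATIVE, or NEUTRAL.

Review: "Battery died after two hours."
Sentiment: NEGATIVE

Review: "Does exactly what it says, no surprises."
Sentiment: NEUTRAL

Review: "Best purchase I've made all year!"
Sentiment: POSITIVE

Review: "{review}"
Sentiment:"""

# Illustrative request payload; real field names depend on your provider.
request = {
    "prompt": FEW_SHOT_PROMPT.format(review="Setup was painless and it feels sturdy."),
    "temperature": 0.1,      # low randomness: classification should be near-deterministic
    "top_p": 0.95,           # nucleus sampling cutoff
    "top_k": 30,             # sample only from the 30 most likely tokens
    "max_output_tokens": 5,  # the label is a single word
}
```
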

  • View profile for Greg Coquillo
    Greg Coquillo Greg Coquillo is an Influencer

    Product Leader @AWS | Startup Investor | 2X Linkedin Top Voice for AI, Data Science, Tech, and Innovation | Quantum Computing & Web 3.0 | I build software that scales AI/ML Network infrastructure

    216,010 followers

    Prompt engineering remains one of the most effective alignment strategies because it allows developers to steer LLM behavior without modifying model weights, enabling fast, low-cost iteration. It also leverages the model’s pretrained knowledge and internal reasoning patterns, making alignment more controllable and interpretable through natural language instructions. It doesn’t come without cons, however, such as the fragility of prompts (e.g., changing one word can lead to different behavior) and scalability limits (e.g., prompt engineering alone limits long-chain reasoning capabilities). Different tasks demand different prompting strategies, so you can select what best fits your business objectives, including budget constraints. If you're building with LLMs, you need to know when and how to use these. Let’s break them down:

    1. 🔸 Chain of Thought (CoT): Teach the AI to solve problems step-by-step by breaking them into logical parts for better reasoning and clearer answers.
    2. 🔸 ReAct (Reason + Act): Alternate between thinking and doing. The AI reasons, takes action, evaluates, and then adjusts based on real-time feedback.
    3. 🔸 Tree of Thought (ToT): Explore multiple reasoning paths before selecting the best one. Helps when the task has more than one possible approach.
    4. 🔸 Divide and Conquer (DnC): Split big problems into subtasks, handle them in parallel, and combine the results into a comprehensive final answer.
    5. 🔸 Self-Consistency Prompting: Ask the AI to respond multiple times, then choose the most consistent or commonly repeated answer for higher reliability.
    6. 🔸 Role Prompting: Assign the AI a specific persona, like a lawyer or doctor, to shape the tone, knowledge, and context of its replies.
    7. 🔸 Few-Shot Prompting: Provide a few good examples and the AI will pick up the pattern. Best for structured tasks or behavior cloning.
    8. 🔸 Zero-Shot Chain of Thought: Prompt the AI to “think step-by-step” without giving any examples. Great for on-the-fly reasoning tasks.

    Was this type of guide useful to you? Let me know below. Follow for plug-and-play visuals, cheat sheets, and step-by-step agent-building guides. #genai #promptengineering #artificialintelligence
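
As one concrete example, here is a minimal sketch of Self-Consistency prompting (technique 5): sample several step-by-step answers, then keep the most common final answer. `call_llm` is a hypothetical stub, and the "Answer:" line convention is an assumption made for parsing.

```python
from collections import Counter

def call_llm(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical placeholder for a real model call."""
    raise NotImplementedError("Replace with your model provider's client.")

def self_consistent_answer(question: str, samples: int = 5) -> str:
    prompt = (
        "Think step by step, then give the final answer on a line "
        f"starting with 'Answer:'.\n\nQuestion: {question}"
    )
    finals = []
    for _ in range(samples):
        reply = call_llm(prompt, temperature=0.7)  # some randomness so samples differ
        for line in reply.splitlines():
            if line.startswith("Answer:"):
                finals.append(line.removeprefix("Answer:").strip())
                break
    # Majority vote across the sampled reasoning paths.
    return Counter(finals).most_common(1)[0][0] if finals else ""
```
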

  • View profile for Addy Osmani

    Engineering Leader, Google Chrome. Best-selling Author. Speaker. AI, DX, UX. I want to see you win.

    235,152 followers

    Just published: "The Prompt Engineering Playbook for Programmers"
    My latest free write-up: https://lnkd.in/g9Kxa7hG ✍️

    After working with AI coding assistants daily, I've learned that the quality of your output depends entirely on the quality of your prompts. A vague "fix my code" gets you generic advice, while a well-crafted prompt can produce thoughtful, accurate solutions. I've distilled the key patterns and frameworks that actually work into a playbook covering:

    ✅ Patterns that work - Role-playing, rich context, specific goals, and iterative refinement
    ✅ Debugging strategies - From "my code doesn't work" to surgical problem-solving
    ✅ Refactoring techniques - How to get AI to improve performance, readability, and maintainability
    ✅ Feature implementation - Building new functionality step-by-step with AI as your pair programmer
    ✅ Common anti-patterns - What NOT to do (and how to fix it when things go wrong)

    The article includes side-by-side comparisons of poor vs. improved prompts with actual AI responses, plus commentary on why one succeeds where the other fails.

    Key insight: Treat AI coding assistants like very literal, knowledgeable collaborators. The more context you provide, the better the output. It's not magic - it's about communication.

    Whether you're debugging a tricky React hook, refactoring legacy code, or implementing a new feature, these techniques can turn AI from a frustrating tool into a true development partner. #ai #softwareengineering #programming
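
In the spirit of the playbook's poor-vs-improved comparisons, here is a hypothetical pair of debugging prompts. This is my own illustration of the pattern, not an example taken from the article.

```python
# A vague prompt invites generic advice.
VAGUE_PROMPT = "Fix my code."

# A context-rich prompt: role, symptom, the relevant line, goal, and constraints.
IMPROVED_PROMPT = """You are a senior React engineer helping debug a hook.

Context: useFilteredItems should re-run when `query` changes, but the list
never updates after the first render. The relevant line is:

    const items = useMemo(() => filterItems(allItems, query), [allItems]);

Goal: explain why the memoized value goes stale and show the minimal fix.
Constraints: keep the useMemo hook; do not restructure the component."""
```
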

  • View profile for Meri Nova

    ML/AI Engineer | Community Builder | Founder @Break Into Data | ADHD + C-PTSD advocate

    145,278 followers

    You only need 10 Prompt Engineering techniques to build a production-grade AI application. Save these 👇

    After analyzing 100s of prompting techniques, I found the most common principles that every #AIengineer follows. Keep them in mind when building apps with LLMs:

    1. Stop relying on vague instructions; be explicit instead.
    ❌ Don't say: "Analyze this customer review."
    ✅ Say: "Analyze this customer review and extract 3 actionable insights to improve the product."
    Why? Ambiguity confuses models.

    2. Stop overloading prompts.
    ❌ Asking the model to do everything at once.
    ✅ Break it down: Step 1: Identify the main issues. Step 2: Suggest specific improvements for each issue.
    Why? Smaller steps reduce errors and improve reliability.

    3. Always provide examples.
    ❌ Skipping examples for context-dependent tasks.
    ✅ Follow this example: "The battery life is terrible." → Insight: Improve battery performance to meet customer expectations.
    Why? Few-shot examples improve performance.

    4. Stop ignoring instruction placement.
    ❌ Putting the task description in the middle.
    ✅ Place instructions at the start or end of the system prompt.
    Why? Models process beginning and end information more effectively.

    5. Encourage step-by-step thinking.
    ❌ "What are the insights from this review?"
    ✅ "Analyze this review step by step: first, identify the main issues. Then, suggest actionable insights for each issue."
    Why? Chain-of-thought (CoT) prompting reduces errors.

    6. Stop ignoring output formats.
    ❌ Expecting structured outputs without clear instructions.
    ✅ "Provide the output as JSON: {'Name': [value], 'Age': [value]}." Use Pydantic to validate the LLM outputs.
    Why? Explicit formats prevent unnecessary or malformed text.

    7. Restrict to the provided context.
    ❌ "Answer the question about a customer."
    ✅ "Answer only using the customer's context below. If unsure, respond with 'I don't know.'"
    Why? Clear boundaries prevent reliance on inaccurate internal knowledge.

    8. Stop assuming that the first version of a prompt is the best version.
    ❌ Never iterating on prompts.
    ✅ Use the model to critique and refine your prompt.

    9. Don't forget about the edge cases.
    ❌ Designing for the "ideal" or most common inputs.
    ✅ Test different edge cases and specify fallback instructions.
    Why? Real-world use often involves imperfect inputs. Cover for most of them.

    10. Stop overlooking prompt security; design prompts defensively.
    ❌ Ignoring risks like prompt injection or extraction.
    ✅ Explicitly define boundaries: "Do not return sensitive information."
    Why? Defensive prompts reduce vulnerabilities and prevent harmful outputs.

    #promptengineering
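
A minimal sketch of rule 6 in code: request JSON and validate the model's reply with Pydantic. The schema, field names, and prompt wording are illustrative assumptions (Pydantic v2 API).

```python
from pydantic import BaseModel, ValidationError

class CustomerInsights(BaseModel):
    name: str
    age: int

PROMPT_TEMPLATE = (
    "Extract the customer's details from the text below.\n"
    'Provide the output as JSON: {{"name": "<string>", "age": <integer>}}.\n\n'
    "Text: {text}"
)

def parse_llm_output(raw: str) -> CustomerInsights | None:
    # Validate the model's reply; malformed output can trigger a retry or
    # fallback path, in line with rule 9 on handling imperfect inputs.
    try:
        return CustomerInsights.model_validate_json(raw)  # Pydantic v2
    except ValidationError:
        return None
```
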

  • View profile for Aakash Gupta

    AI + Product Management 🚀 | Helping you land your next job + succeed in your career

    291,137 followers

    Which is it: use LLMs to improve the prompt, or is that over-engineering?

    By now, we've all seen 1,000 conflicting prompt guides. So, I wanted to get back to the research:
    • What do actual studies say?
    • What actually works in 2025 vs 2024?
    • What do experts at OpenAI, Anthropic, & Google say?

    I spent the past month in Google Scholar, figuring it out. I firmed up the learnings with Miqdad Jaffer at OpenAI. And I'm ready to present: "The Ultimate Guide to Prompt Engineering in 2025: The Latest Best Practices." https://lnkd.in/d_qYCBT7

    We cover:
    1. Do You Really Need Prompt Engineering?
    2. The Hidden Economics of Prompt Engineering
    3. What the Research Says About Good Prompts
    4. The 6-Layer Bottom-Line Framework
    5. Step-by-step: Improving Your Prompts as a PM
    6. The 301 Advanced Techniques Nobody Talks About
    7. The Ultimate Prompt Template 2.0
    8. The 3 Most Common Mistakes

    Some of my favorite takeaways from the research:

    1. It's not just revenue, but cost. You have to realize that APIs charge by the number of input and output tokens. An engineered prompt can deliver the same quality with a 76% cost reduction. We're talking $3,000 daily vs $706 daily for 100k calls.

    2. Chain-of-Table beats everything else. This new technique gets an 8.69% improvement on structured data by manipulating table structure step-by-step instead of reasoning about tables in text. For things like financial dashboards and data analysis tools, it's the best.

    3. Few-shot prompting hurts advanced models. OpenAI's o1 and DeepSeek's R1 actually perform worse with examples. These reasoning models don't need your sample outputs - they're smart enough to figure it out themselves.

    4. XML tags boost Claude performance. Anthropic specifically trained Claude to recognize XML structure. You get 15-20% better performance just by changing your formatting from plain text to XML tags.

    5. Automated prompt engineering destroys manual. AI systems create better prompts in 10 minutes than human experts do after 20 hours of careful optimization work. The machines are better at optimizing themselves than we are.

    6. Most prompting advice is complete bullshit. Researchers analyzed 1,500+ academic papers and found massive gaps between what people claim works and what's actually been tested scientifically.

    And what about Ian Nuttal's tweet? Well, Ian's right about over-engineering. But for products, prompt engineering IS the product. Bolt hit $50M ARR via systematic prompt engineering. The key? Knowing when to engineer vs keep it simple.
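
To illustrate takeaway 4, here is what an XML-tagged prompt might look like. The tag names are my own choices for illustration, not a required schema, and `{quarterly_report}` is a placeholder to be filled in before sending.

```python
# Sections wrapped in XML-style tags so instructions, context, and output
# format are unambiguous to the model.
xml_prompt = """<role>You are a careful financial analyst.</role>

<context>
{quarterly_report}
</context>

<instructions>
Summarize the revenue trends in three bullet points, then list any risks you see.
</instructions>

<output_format>
Bullets first, then a section titled "Risks".
</output_format>"""
```
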

  • View profile for Hadas Frank

    Founder & CEO of NextGenAI | EdTech | AI Strategic Consultant | Speaker | Community& Events | Prompt Engineering

    3,062 followers

    “You don’t need to be a data scientist or a machine learning engineer - everyone can write a prompt.”

    Google recently released a comprehensive guide on prompt engineering for Large Language Models (LLMs), specifically Gemini via Vertex AI. Key takeaways from the guide:

    What is prompt engineering really about? It’s the art (and science) of designing prompts that guide LLMs to produce the most accurate, useful outputs. It involves iterating, testing, and refining, not just throwing in a question and hoping for the best.

    Things you should know:
    1. Prompt design matters. Not just what you say, but how you say it: wording, structure, examples, tone, and clarity all affect results.
    2. LLM settings are critical:
    • Temperature = randomness. Lower means more focused, higher means more creative (but riskier).
    • Top-K / Top-P = how much the model “thinks outside the box.”
    • For balanced results: Temperature 0.2 / Top-P 0.95 / Top-K 30 is a solid start.
    3. Prompting strategies that actually work:
    • Zero-shot, one-shot, few-shot
    • System / Context / Role prompting
    • Chain of Thought (reasoning step by step)
    • Tree of Thoughts (explore multiple paths)
    • ReAct (reasoning + external tools = power moves)
    4. Use prompts for code too! Writing, translating, debugging: just test your output.
    5. Best Practices Checklist:
    • Use relevant examples
    • Prefer instructions over restrictions
    • Be specific
    • Control token length
    • Use variables
    • Test different formats (Q&A, statements, structured outputs like JSON)
    • Document everything (settings, model version, results)

    Bottom line: Prompting is a strategic skill. If you’re building anything with AI, this is a must-read 👇
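
Two checklist items ("document everything" and the balanced settings) sketched as a tiny experiment log. The record fields and the model name are illustrative assumptions, not taken from the guide.

```python
from dataclasses import dataclass

@dataclass
class PromptExperiment:
    name: str
    model: str           # e.g. "gemini-1.5-pro" (illustrative)
    temperature: float
    top_p: float
    top_k: int
    prompt: str
    output: str
    verdict: str         # what worked, what to change in the next iteration

log = [
    PromptExperiment(
        name="review-insights-v3",
        model="gemini-1.5-pro",
        temperature=0.2, top_p=0.95, top_k=30,  # the "balanced" starting point above
        prompt="Analyze this review step by step...",
        output="1. Battery life...",
        verdict="Good structure; tighten the word limit next run.",
    )
]
```
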

  • View profile for Paolo Perrone

    No BS AI/ML Content | ML Engineer with a Plot Twist 🥷50M+ Views 📝

    108,983 followers

    I spent 1,000+ hours figuring out Prompt Engineering. Here's everything I learned, distilled into 12 rules you can use right now:

    1️⃣ Understand the tool - A prompt is how you talk to a language model. Better input = better output.
    2️⃣ Choose your model wisely - GPT-4, Claude, Gemini: each has strengths. Know your tools.
    3️⃣ Use the right technique
    ↳ Zero-shot: ask directly
    ↳ Few-shot: show examples
    ↳ Chain-of-thought: guide the model step by step
    4️⃣ Control the vibe - Tune temperature, top-p, and max tokens to shape output.
    5️⃣ Be specific - Vagueness kills good output. Say exactly what you want.
    6️⃣ Context is king - Add details, background, goals, constraints; treat it like briefing a world-class assistant.
    7️⃣ Iterate like crazy - Great prompts aren't written once, they're rewritten.
    8️⃣ Give examples - Format, tone, structure: show what you want.
    9️⃣ Think in turns - Build multi-step conversations. Follow up, refine, go deeper.
    🔟 Avoid traps
    ↳ Too vague → garbage
    ↳ Too long → confusion
    ↳ Too complex → derailment
    ↳ Biased input → biased output
    1️⃣1️⃣ One size fits none - Customize prompts by task: writing, coding, summarizing, support, etc.
    1️⃣2️⃣ Structure is Your Friend - Use headings, bullets, XML tags, or delimiters (like ```) to guide the LLM's focus.

    Mastering these isn't optional; it's how you unlock the *real* power of AI. It's leverage. Which rule do you see people ignore the MOST? 👇 Repost this to help someone level up their prompting game! ♻️
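
A quick sketch of rule 12: one way to structure a prompt with headings and explicit delimiters. The section names and markers are illustrative choices, not a fixed convention.

```python
# Headings separate role, task, and constraints; markers fence off the data so
# the model never confuses the draft with the instructions.
prompt_template = """## Role
You are a meticulous technical editor.

## Task
Rewrite the draft below for clarity without adding new claims.

## Constraints
- Maximum 150 words
- Keep the author's voice

## Draft (between the markers)
<<<DRAFT
{draft_text}
DRAFT>>>"""

prompt = prompt_template.format(draft_text="Our app help users to tracks habits...")
```
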

  • View profile for Brij kishore Pandey

    AI Architect | Strategist | Generative AI | Agentic AI

    691,645 followers

    LLMs are no longer just fancy autocomplete engines. We’re seeing a clear shift from single-shot prompting to techniques that mimic 𝗮𝗴𝗲𝗻𝗰𝘆: reasoning, retrieving, taking action, and even coordinating across steps.

    In this visual, I’ve laid out five core prompting strategies:
    - 𝗥𝗔𝗚 – Brings in external knowledge, enhancing factual accuracy
    - 𝗥𝗲𝗔𝗰𝘁 – Enables reasoning 𝗮𝗻𝗱 acting, the essence of agentic behavior
    - 𝗗𝗦𝗣 – Adds directional hints through policy models
    - 𝗧𝗼𝗧 (𝗧𝗿𝗲𝗲-𝗼𝗳-𝗧𝗵𝗼𝘂𝗴𝗵𝘁) – Simulates branching reasoning paths, like a mini debate inside the LLM
    - 𝗖𝗼𝗧 (𝗖𝗵𝗮𝗶𝗻-𝗼𝗳-𝗧𝗵𝗼𝘂𝗴𝗵𝘁) – Breaks down complex thinking into step-by-step logic

    While not all of these are fully agentic on their own, techniques like 𝗥𝗲𝗔𝗰𝘁 and 𝗧𝗼𝗧 are clear stepping stones to 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗔𝗜 𝘀𝘆𝘀𝘁𝗲𝗺𝘀, where autonomous agents can 𝗿𝗲𝗮𝘀𝗼𝗻, 𝗽𝗹𝗮𝗻, 𝗮𝗻𝗱 𝗶𝗻𝘁𝗲𝗿𝗮𝗰𝘁 𝘄𝗶𝘁𝗵 𝗲𝗻𝘃𝗶𝗿𝗼𝗻𝗺𝗲𝗻𝘁𝘀.

    The big picture? We’re slowly moving from "𝘱𝘳𝘰𝘮𝘱𝘵 𝘦𝘯𝘨𝘪𝘯𝘦𝘦𝘳𝘪𝘯𝘨" to "𝘤𝘰𝘨𝘯𝘪𝘵𝘪𝘷𝘦 𝘢𝘳𝘤𝘩𝘪𝘵𝘦𝘤𝘵𝘶𝘳𝘦 𝘥𝘦𝘴𝘪𝘨𝘯." And that’s where the real innovation lies.
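
Of the five, ReAct is the most agent-like, so here is a rough sketch of its loop. `call_llm` is a hypothetical stub, the single `search` tool is illustrative, and the action parsing is deliberately crude.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real model call."""
    raise NotImplementedError("Replace with your model provider's client.")

def search(query: str) -> str:
    """Illustrative tool; replace with a real retrieval/search call."""
    raise NotImplementedError("Replace with a real tool.")

TOOLS = {"search": search}

def react(question: str, max_steps: int = 5) -> str:
    transcript = (
        "Answer the question by alternating Thought, Action, and Observation lines.\n"
        "Available action: search[<query>]. Finish with 'Final Answer: ...'.\n\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        step = call_llm(transcript)          # model produces a Thought and an Action
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "search[" in step:                # crude action parsing, for illustration only
            query = step.split("search[", 1)[1].split("]", 1)[0]
            transcript += f"Observation: {TOOLS['search'](query)}\n"
    return "No answer within the step budget."
```
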

  • View profile for Kashif M.

    VP of Technology | CTO | GenAI • Cloud • SaaS • FinOps • M&A | Board & C-Suite Advisor

    4,094 followers

    🧠 Designing AI That Thinks: Mastering Agentic Prompting for Smarter Results

    Have you ever used an LLM and felt it gave up too soon? Or worse, guessed its way through a task? Yeah, I've been there. Most of the time, the prompt is the problem. To get AI that acts more like a helpful agent and less like a chatbot on autopilot, you need to prompt it like one. Here are the three key components of an effective agentic prompt (assembled into a single system prompt in the sketch below):

    🔁 Persistence: Ensure the model understands it's in a multi-turn interaction and shouldn't yield control prematurely.
    🧾 Example: "You are an agent; please continue working until the user's query is resolved. Only terminate your turn when you are certain the problem is solved."

    🧰 Tool Usage: Encourage the model to use available tools, especially when uncertain, instead of guessing.
    🧾 Example: "If you're unsure about file content or codebase structure related to the user's request, use your tools to read files and gather the necessary information. Do not guess or fabricate answers."

    🧠 Planning: Prompt it to plan before actions and reflect afterward. Prevent reactive tool calls with no strategy.
    🧾 Example: "You must plan extensively before each function call and reflect on the outcomes of previous calls. Avoid completing the task solely through a sequence of function calls, as this can hinder insightful problem-solving."

    💡 I've used this format in AI-powered research and decision-support tools and saw a clear boost in response quality and reliability.

    👉 Takeaway: Agentic prompting turns a passive assistant into an active problem solver. The difference is in the details. Are you using these techniques in your prompts? I would love to hear what's working for you; leave a comment, or let's connect! #PromptEngineering #AgenticPrompting #LLM #AIWorkflow
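
Pulling the three example snippets above together, this is roughly what the combined agentic system prompt could look like. The wording reuses the post's examples; how you attach a system prompt depends on your SDK.

```python
AGENTIC_SYSTEM_PROMPT = "\n\n".join([
    # Persistence: don't yield control prematurely.
    "You are an agent; please continue working until the user's query is resolved. "
    "Only terminate your turn when you are certain the problem is solved.",
    # Tool usage: prefer tools over guessing.
    "If you're unsure about file content or codebase structure related to the user's "
    "request, use your tools to read files and gather the necessary information. "
    "Do not guess or fabricate answers.",
    # Planning: plan before acting, reflect afterward.
    "You must plan extensively before each function call and reflect on the outcomes "
    "of previous calls. Avoid completing the task solely through a sequence of "
    "function calls, as this can hinder insightful problem-solving.",
])
```
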
