In this post, I’m going through a few common techniques I’ve picked up over the last few weeks that you can use to improve your prompts and get better results from LLMs. Some yield noticeably better results; others help less, and how much each helps depends on your use case. Play around with them and try to have fun.
(If you are lazy, skip to the last section to get a custom GPT system prompt that can write prompts that follow all these best practices for you).
Let’s begin.
1. Escape Hatches
Sometimes, your prompt might be too strict or unclear, and the AI may feel forced to give an answer even when it doesn’t have enough information. To avoid that, include an "escape hatch": a sentence that tells the AI it’s okay to pause, ask questions, or admit uncertainty. This helps you avoid made-up answers and keeps the response honest. For example, you can say: “If you’re not sure, don’t guess. Ask me a clarifying question instead. It’s okay to say ‘I don’t know.’” This tells the AI that it’s better to stay accurate than to force a reply.
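If you build prompts in code, the escape hatch can be appended automatically so you never forget it. A minimal sketch in Python; the helper name and exact wording are my own, not a standard API:

```python
# The escape-hatch clause from the example above.
ESCAPE_HATCH = (
    "If you're not sure, don't guess. Ask me a clarifying question instead. "
    "It's okay to say 'I don't know.'"
)

def with_escape_hatch(prompt: str) -> str:
    """Append an escape-hatch clause so the model can admit uncertainty."""
    return f"{prompt}\n\n{ESCAPE_HATCH}"
```

Call it on any prompt right before sending it, so every request carries the same safety valve.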
2. Ask for Feedback
One of the easiest ways to get better at prompting is to ask the AI for feedback on how you asked your question. After giving your main instruction, you can add something like: “Also, tell me how I could have asked this better.” This turns every interaction into a learning opportunity. The AI will point out if your prompt was too vague, too long, or missing important details. Over time, this helps you write clearer, more effective prompts and get better results.
3. Role Priming
Before you ask your question, tell the AI who it should act like. This is called role priming, and it helps the AI respond in the right voice, tone, and level of detail. For example, you might say: “You are a product manager giving advice to a junior teammate,” or “You are a startup investor reviewing a pitch.” Giving the AI a clear role makes the response feel more focused and relevant. It’s like setting the stage before a conversation.
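In a chat API, role priming usually lives in the system message rather than the question itself. A small sketch, assuming the common `{"role": ..., "content": ...}` message shape that most chat APIs accept (the function name is made up; adapt the field names to your provider):

```python
def role_primed_messages(role: str, question: str) -> list[dict]:
    """Build a chat-style message list with a role-priming system message."""
    return [
        {"role": "system", "content": role},   # who the AI should act like
        {"role": "user", "content": question}, # the actual question
    ]
```

Keeping the role in the system message means every follow-up question in the conversation stays in the same voice.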
4. Give Examples
When you want the AI to follow a specific format or tone, the best thing you can do is show it what you’re expecting. Instead of just saying “Write a Jira ticket,” give it a clear example of what a good ticket looks like, including the structure, headings, and level of detail. For example:
Input: We need to improve the loading time on the dashboard.
Output:
Title: Improve Dashboard Load Time
Description: The dashboard currently takes 5–7 seconds to load for most users. We’ve received multiple reports from the support team about this delay.
Acceptance Criteria:
– Loading time is reduced to under 2 seconds on average
– Works on Chrome, Firefox, and Safari
– No functionality is broken in the process
Once you’ve given an example, follow it with your real task. The AI will pick up on the pattern and match the format much more accurately. This saves you time and avoids back-and-forth correction.
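This example-then-task pattern is easy to automate. A minimal sketch, assuming you store examples as (input, output) pairs; the function name is hypothetical:

```python
def few_shot_prompt(examples: list[tuple[str, str]], task: str) -> str:
    """Assemble a few-shot prompt: worked input/output pairs, then the real task."""
    parts = [f"Input: {inp}\nOutput:\n{out}" for inp, out in examples]
    # End with the real input and an open "Output:" so the model completes it.
    parts.append(f"Input: {task}\nOutput:")
    return "\n\n".join(parts)
```

One or two good examples is usually enough; the model picks up the pattern from the repeated Input/Output structure.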
5. Chain-of-Thought Prompting
When you're asking the AI to solve something complex, don't expect a perfect answer all at once. Instead, ask it to think step by step. This is called chain-of-thought prompting. You can say something like: “Let’s break this down into smaller parts. First, list the possible causes. Then, evaluate each one. Finally, recommend the best next step.” By guiding the AI through a process, you help it reason more clearly and avoid shallow or rushed answers.
I’ve found better results when I do that across separate prompts. For example:
First prompt: Break it down.
Second prompt: Take the first line item and ask questions about it.
And so on.
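The multi-prompt version above can be sketched as a list of turns, each sent after the model’s previous reply. The actual API call is left out on purpose, and the wording of each turn is my own:

```python
def cot_turns(problem: str) -> list[str]:
    """The multi-prompt chain-of-thought sequence, as separate conversation turns.
    Send each turn after reading the model's reply to the previous one."""
    return [
        f"Let's break this down into smaller parts: {problem}",
        "Take the first line item and list open questions about it.",
        "Now evaluate each item and recommend the best next step.",
    ]
```

Keeping the turns separate lets you steer the conversation between steps, which is where this approach beats a single long prompt.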
6. Add Constraints
You can get better answers by adding strict constraints.
For example, you can say: “Answer in 3 bullet points,” or “Give the response as a two-column table,” or “Keep it under 100 words.” These constraints help the AI focus and match your expectations.
7. Try Reverse Psychology
Instead of asking “What should I do?”, try “What would someone do if they wanted to fail at this?” This flips the problem and helps the AI think more clearly. It’s a great way to uncover blind spots, risks, and bad ideas before they happen.
This is simply the Inversion mental model, which is one of my favourites to use in real-life scenarios. I’m experimenting with giving LLMs mental models to think through certain problems - I’ll write more about this in a future edition.
8. Use Time Framing
Change the time setting to change the perspective. For example, say: “It’s 1999 and you’re launching a new web product,” or “It’s 2035 and you’re looking back at today’s decisions.” Time framing helps the AI think creatively or reflect more deeply.
9. Self-Reflection
Ask the AI to review its own answer and improve it. This often leads to sharper, clearer responses. Just add: “Now critique your answer and rewrite it with improvements.” Example: “Write two versions. Pick the better one and explain why.” It’s like giving the AI a chance to edit itself.
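Self-reflection is a two-pass loop: get a draft, then feed it back with a critique instruction. A minimal sketch, where `ask` stands in for whatever function sends a prompt to your model and returns the reply (the helper name and wording are my own):

```python
def self_reflect(ask, prompt: str) -> str:
    """Two-pass self-reflection: get a draft answer, then ask the model
    to critique and rewrite it. `ask` is any callable taking a prompt
    string and returning the model's reply."""
    draft = ask(prompt)
    return ask(
        f"Here is your previous answer:\n{draft}\n\n"
        "Now critique your answer and rewrite it with improvements."
    )
```

Because `ask` is just a callable, you can test the loop with a stub before wiring in a real client.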
10. Use Delimiters and Structure
A few things that I commonly use:
Use ``` to wrap reference content.
Use clear headings or tags like Input:, Output:, Context:, and Task:
Use numbered steps for instructions if the prompt has more than one instruction
Make boundaries super obvious with «Instructions» «end of content» etc.
An example prompt:
<<<CONTEXT>>>
You are a senior product manager writing a Jira ticket for the engineering team.
<<<EXAMPLE>>>
Input:
We need to improve the loading time on the dashboard.
Output:
Title: Improve Dashboard Load Time
Description:
The dashboard currently takes 5–7 seconds to load for most users. We’ve received multiple reports from the support team about this delay.
Acceptance Criteria:
– Loading time is reduced to under 2 seconds on average
– Works on Chrome, Firefox, and Safari
– No functionality is broken in the process
<<<TASK>>>
Now follow the same format for the input below.
<<<INPUT>>>
Users are reporting that the mobile menu sometimes disappears after login, especially on older Android devices. Support says this has been happening since the last release.
<<<CONSTRAINTS>>>
- Follow the Jira ticket format from the example
- Keep it under 120 words
- Bullet points only under Acceptance Criteria
<<<BONUS>>>
After writing the ticket, critique your own output. Suggest 1 improvement and rewrite it.
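If you reuse this delimiter layout a lot, a tiny builder keeps the sections consistent. A sketch, assuming section order matters (Python dicts preserve insertion order); the function name is made up:

```python
def build_prompt(sections: dict[str, str]) -> str:
    """Join named sections with <<<NAME>>> delimiters, matching the layout above."""
    return "\n".join(
        f"<<<{name.upper()}>>>\n{body}" for name, body in sections.items()
    )
```

For example, `build_prompt({"context": "...", "example": "...", "task": "..."})` reproduces the structure of the prompt above without hand-typing the markers each time.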
11. Sense-Check
Before starting a complex task, ask the AI to confirm it understands. You can say: “Repeat the instructions in your own words,” or “Say ‘Understood’ if everything is clear.” This helps catch mistakes early and nudges the AI to follow your directions more carefully.
For the lazy ones
Try building a custom GPT that takes all the above into consideration. It will look something like below:
You are a prompt optimizer.
Your job is to take rough, unstructured, or vague user prompts and turn them into clear, effective, and structured prompts that get the best results from an LLM like ChatGPT. You follow these principles:
Use role priming to set the right tone (e.g., "You are a UX expert...")
Add escape hatches ("If unsure, ask clarifying questions...")
Use chain-of-thought prompting for complex tasks ("Think step by step...")
Set constraints ("Keep the response under 150 words...")
Use input/output examples where helpful
Add sense-checking prompts ("If you understand the instructions, begin...")
Encourage self-reflection when useful ("Review your answer and improve it")
Use clear delimiters (e.g., <<<CONTEXT>>>, <<<TASK>>>)
Favor structured formatting (tables, bullet points, numbered steps)
Optionally use reverse psychology, time framing, or inversion to improve clarity
Now type in a simple prompt and it will generate a better prompt following all the best practices. Play around and have fun.
Thanks for taking the time to read this post. If you found it useful, share it with someone. Have a nice day!