Introduction to Prompting Techniques
It's been a little over two years since I started integrating LLMs into business applications to hit product KPIs and goals, and it's been a wild ride. In that time I have tried almost every prompting technique out there: zero-shot, few-shot, role-based, step-by-step, and more. I kept experimenting, yet still found myself regularly frustrated with what I got back. Sometimes the output was close but not quite useful; other times it completely missed the point.
The Importance of Prompting
After hitting walls again and again, it finally clicked: prompting isn't just about “talking to the AI” better; it's a skill. The right prompt is the difference between guesswork and a reliable workflow. You can see this in the rise of companies that have built entire products around efficient, well-crafted prompts. Many of them are essentially wrappers around LLMs that work so well because the prompts behind the scenes are sharp, structured, and repeatable.
Why We Need a Framework
Now that LLMs have become part of our daily workflow, whether through ChatGPT, Cursor, Perplexity, or a dozen other tools, we can't afford to rely on generic, hit-or-miss prompts. I have spent months iterating, testing, and refining my own approach, and out of all that trial and error I came up with something I now call the W-H-Y-Us framework. It's not perfect, but it consistently gives me better results than just winging it.
The W-H-Y-Us Framework
The W-H-Y-Us framework is a repeatable structure of just four blocks that turns any fuzzy request into a clear, reusable playbook an AI can follow every single time. To illustrate it, let's use the Amazon Product Reviews Dataset from Kaggle, which contains over 500,000 customer reviews across a wide range of products. Each entry includes key fields like `product_id`, `review_text`, `rating`, and `timestamp`, offering a rich source for sentiment analysis, trend detection, and product feedback insights.
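If you want to follow along, a minimal sketch for loading and inspecting the dataset with pandas might look like this; the file name `amazon_reviews.csv` is a placeholder for whichever CSV export you download from Kaggle.

```python
import pandas as pd

# Load the Kaggle reviews export (file name is a placeholder).
df = pd.read_csv("amazon_reviews.csv")

# Confirm the fields the framework will rely on.
print(df[["product_id", "review_text", "rating", "timestamp"]].head())
print(f"{len(df):,} reviews loaded")
```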
W — What are the facts/truth?
Guiding Question: “What facts or constraints never change?” This block sets the foundation by establishing the unchanging truths about the task. For our dataset:
- Dataset Structure: Each entry includes `product_id`, `review_text`, `rating`, and `timestamp`.
- Rating Scale: Ratings range from 1 to 5 stars.
- Language: All reviews are in English.
- Sentiment Mapping: For analysis purposes, ratings are categorized as follows (sketched in code after this list):
  - Positive: 4–5 stars
  - Neutral: 3 stars
  - Negative: 1–2 stars
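As a quick illustration of this mapping, here is a minimal Python sketch; the function name and threshold logic are my own, derived only from the buckets listed above.

```python
def map_sentiment(rating: int) -> str:
    """Map a 1-5 star rating to a sentiment bucket per the rules above."""
    if rating >= 4:
        return "positive"   # 4-5 stars
    if rating == 3:
        return "neutral"    # 3 stars
    return "negative"       # 1-2 stars

assert map_sentiment(5) == "positive"
assert map_sentiment(3) == "neutral"
assert map_sentiment(1) == "negative"
```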
H — How to Do It
Guiding Question: “What’s the exact sequence of steps?” Here we define the step-by-step procedure:
- Data Cleaning: Remove null or duplicate entries. Normalize text by converting to lowercase and removing special characters.
- Sentiment Analysis: Apply a pre-trained sentiment analysis model to classify `review_text` into positive, neutral, or negative categories.
- Aggregation: Group reviews by `product_id`. Calculate the average rating per product and the count of reviews per sentiment category (steps 1–3 are sketched in code after this list).
- Visualization: Generate bar charts showing the distribution of sentiments per product. Create word clouds for the most frequent terms in positive and negative reviews.
- Reporting: Compile findings into a Markdown report for stakeholders.
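To make the procedure concrete, here is a minimal pandas sketch of steps 1–3. The file name `amazon_reviews.csv` is a placeholder, and the rating-based classifier below stands in for the pre-trained sentiment model mentioned above.

```python
import pandas as pd

# 1. Data Cleaning: drop nulls/duplicates, lowercase, strip special characters.
df = pd.read_csv("amazon_reviews.csv")  # placeholder file name
df = df.dropna(subset=["product_id", "review_text", "rating"]).drop_duplicates()
df["review_text"] = (
    df["review_text"].str.lower().str.replace(r"[^a-z0-9\s]", "", regex=True)
)

# 2. Sentiment Analysis: a rating-based stand-in for a pre-trained model.
def map_sentiment(rating: int) -> str:
    if rating >= 4:
        return "positive"
    if rating == 3:
        return "neutral"
    return "negative"

df["sentiment"] = df["rating"].apply(map_sentiment)

# 3. Aggregation: average rating and sentiment counts per product.
avg_rating = df.groupby("product_id")["rating"].mean()
sentiment_counts = (
    df.groupby(["product_id", "sentiment"]).size().unstack(fill_value=0)
)
print(avg_rating.head())
print(sentiment_counts.head())
```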
Y — Why It Matters
Guiding Question: “What success criteria, goals, or mindset guide choices?” This block captures the purpose behind the task:
- Business Objective: Identify customer satisfaction trends to inform product improvements and marketing strategies.
- Quality Metrics: Accuracy of sentiment classification. Clarity and readability of visualizations.
- Stakeholder Needs: Insights should be actionable and easily interpretable by non-technical team members.
U — Us Together (When working with Agents)
Guiding Question: “How do agents hand off or collaborate?” Here we define roles and collaboration points:
- #DataEngineer: Prepares and cleans the dataset.
- #DataAnalyst: Performs sentiment analysis and generates visualizations.
- #MarketingTeam: Reviews the report to derive actionable insights.
- Collaboration Tools: Use Slack for communication. Store reports in a shared Google Drive folder. Schedule bi-weekly meetings to discuss findings.
Putting It All Together
An example prompt using W-H-Y-Us clearly outlines each of these sections, ensuring that any task, whether performed by an AI or a human, is well-defined and actionable. A sketch follows.
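Here is one way such a prompt might look, assembled as a Python string so it can be dropped into any LLM call. The exact wording is my own illustration of the four blocks, not a canonical template.

```python
# A hypothetical W-H-Y-Us prompt for the review-analysis task above.
prompt = """\
WHAT (facts that never change):
- Each entry has product_id, review_text, rating (1-5 stars), timestamp.
- All reviews are in English.
- Sentiment buckets: positive = 4-5 stars, neutral = 3, negative = 1-2.

HOW (exact sequence of steps):
1. Clean the data: drop nulls/duplicates, lowercase text, strip special characters.
2. Classify review_text as positive, neutral, or negative.
3. Group by product_id; compute average rating and sentiment counts.
4. Produce bar charts and word clouds; compile a Markdown report.

WHY (success criteria):
- Surface customer-satisfaction trends for product and marketing decisions.
- Classifications must be accurate; visuals readable by non-technical stakeholders.

US (roles and handoffs):
- #DataEngineer cleans the data, #DataAnalyst runs analysis and charts,
  #MarketingTeam turns the report into actions.
"""
print(prompt)
```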
Common Pitfalls
Even with a simple framework like W‑H‑Y‑Us, it’s easy to stumble. Common mistakes include:
- Stuffing everything into “What”: Keep steps and actions in “How” and collaboration in “Us.”
- Being vague in “Why”: Add clear success criteria to guide decisions.
- Overusing role tags: Only use tags when a bullet truly changes based on who’s doing it.
Conclusion
The W-H-Y-Us framework breaks any task into four clear, repeatable building blocks: What's True, How to Do It, Why It Matters, and Us Together. It gives you a structured, reusable way to turn fuzzy requests into dependable playbooks. Whether you are delegating to an AI or collaborating with a teammate, this approach replaces reliance on prompt “magic” with consistent, valuable outputs.
FAQs
- Q: What is the W-H-Y-Us framework?
  A: It's a structured approach to creating prompts for AI or human tasks, ensuring clarity and repeatability.
- Q: Why is the W-H-Y-Us framework important?
  A: It helps achieve consistent and valuable outputs by breaking tasks down into clear, actionable steps.
- Q: How do I apply the W-H-Y-Us framework?
  A: Define what is true about the task, how to do it, why it matters, and how to collaborate; together these produce effective prompts.
- Q: What are common pitfalls when using the W-H-Y-Us framework?
  A: Common mistakes include overloading the "What" section, being vague in the "Why" section, and overusing role tags.