Introduction to Modular Prompting
These days, if you ask a tech-savvy person whether they know how to use ChatGPT, they might take it as an insult. After all, using GPT seems as simple as asking anything and instantly getting a magical answer. But here’s the thing: there’s a big difference between using ChatGPT and using it well. Most people stick to casual queries; they ask something, ChatGPT answers, and if the answer disappoints, they rephrase and try again, often with diminishing returns and growing frustration. On the other hand, if you start designing prompts with intention, structure, and a clear goal, the output changes completely. That’s where the real power of prompt engineering shows up, especially with something called modular prompting.
What is Modular Prompting?
Modular prompting is a technique that divides a prompt into multiple sections. These sections are usually interlinked, referring back or forward to one another. Most people write a prompt as one undivided block of text, whether it’s a very raw prompt, a one-shot prompt, or a chain-of-thought (CoT) prompt. Dividing that same prompt into multiple sections can make your life much easier.
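To make that concrete, here is a minimal sketch in Python of what dividing a prompt into sections can look like in practice; the section names and contents are illustrative, not a fixed standard:

# A modular prompt: each section lives in its own named block, and the
# final prompt is just the labeled sections joined together.
sections = {
    "Persona": "You are a friendly and precise assistant.",
    "Core Logic": "Ask the user for two numbers, one at a time, then return their sum.",
    "Output Format": "Respond with a JSON object containing both numbers and the sum.",
}

# Keeping each label visible means any section can be edited or swapped
# without touching the others.
prompt = "\n\n".join(f"{name}: {text}" for name, text in sections.items())
print(prompt)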
Benefits of Modular Prompting
At the marketing agency I work at, I’m automating different parts of the system. One of those parts is an internal support bot that itself contains multiple sub-modules. Each sub-module is a custom GPT built on OpenAI’s Assistants API, with its own prompt, and a meta-prompt sits on top of them all. The idea is that each module performs a specific function, and we also want the bot as a whole to behave in a particular way. In software engineering, there’s a concept called Separation of Concerns (SoC): breaking a piece of code into small, manageable chunks that each do one thing. We’re trying to mimic something similar here. Each section of a prompt has a specific role to perform.
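As a rough sketch of that architecture, here is how one sub-module might be registered with the OpenAI Python SDK (openai >= 1.x); the module name, model choice, and instruction text are illustrative, and the meta-prompt would be wired up the same way:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each sub-module is its own assistant, carrying its own modular prompt.
sum_module_prompt = (
    "Persona: You are a friendly and precise assistant.\n\n"
    "Core Logic: Ask the user for two numbers, one at a time, "
    "then calculate their sum.\n\n"
    "Output Format: Return a JSON object with both numbers, the sum, "
    "and a human-friendly bot_response message."
)

adder = client.beta.assistants.create(
    model="gpt-4o",
    name="sum-module",
    instructions=sum_module_prompt,
)
print(adder.id)  # the assistant ID the rest of the system talks to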
Example of Modular Prompting
I’ve created a custom GPT that adds two numbers and returns an answer based on the following instructions:
Persona: You are a friendly and precise assistant designed to collect numeric inputs from the user and return a structured JSON response.
Core Logic: Your task is to ask the user for two numbers, one at a time. After collecting both numbers, you must calculate their sum and return a JSON object that includes the two input numbers, their sum, and a human-friendly message in the bot_response field.
Output Format: Once both numbers are received, respond in this exact JSON format:
{
"number_1": <first_number>,
"number_2": <second_number>,
"sum": <sum_of_the_two>,
"bot_response": "The sum of <number_1> and <number_2> is <sum_of_the_two>."
}
You might be wondering why I’m returning JSON and what the bot_response field is for. As I mentioned, each module of the bot is a mini custom GPT, essentially an Assistants API app. The output has two branches: one for humans and one for the API.
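Here is a minimal sketch of how those two branches can be split once the module replies; the reply is hard-coded to the JSON shape above, and the downstream call is hypothetical:

import json

# raw_reply stands in for the assistant's final message text.
raw_reply = (
    '{"number_1": 2, "number_2": 3, "sum": 5, '
    '"bot_response": "The sum of 2 and 3 is 5."}'
)

data = json.loads(raw_reply)

# Branch 1: the human-facing message, shown in the chat UI.
print(data["bot_response"])

# Branch 2: the machine-facing fields, passed on to the rest of the system.
payload = {key: data[key] for key in ("number_1", "number_2", "sum")}
# send_to_internal_api(payload)  # hypothetical downstream call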
Changing the Persona
Now, let’s change the prompt’s persona:
Persona: You are Victor, an old, grumpy, and highly intelligent brand assistant. You’ve been doing this for decades, and you have zero patience for nonsense. You complain about "the good old days" but still do your job brilliantly.
Core Logic: Your task is to ask the user for two numbers, one at a time. After collecting both numbers, you must calculate their sum and return a JSON object that includes the two input numbers, their sum, and a human-friendly message in the bot_response field.
Output Format: Once both numbers are received, respond in this exact JSON format:
{
"number_1": <first_number>,
"number_2": <second_number>,
"sum": <sum_of_the_two>,
"bot_response": "<computer answer based on persona and logic>"
}
The same logic, but an entirely different experience. The Persona section has nothing to do with the logic or the output format, so it can be changed to whatever is required without touching the actual work.
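In code terms, swapping the persona becomes a one-line change, since the other sections never move; here is a small sketch with illustrative persona texts:

CORE_LOGIC = (
    "Core Logic: Ask the user for two numbers, one at a time, "
    "then calculate their sum and return the JSON object described below."
)
OUTPUT_FORMAT = (
    "Output Format: Return a JSON object with number_1, number_2, sum, "
    "and bot_response fields."
)

personas = {
    "friendly": "Persona: You are a friendly and precise assistant.",
    "victor": "Persona: You are Victor, an old, grumpy, highly intelligent assistant.",
}

def build_prompt(persona_key: str) -> str:
    # Only the Persona block changes; logic and format stay untouched.
    return "\n\n".join([personas[persona_key], CORE_LOGIC, OUTPUT_FORMAT])

print(build_prompt("victor"))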
Conclusion
So, did you see the beauty of modular prompting? It brings structure, flexibility, and peace of mind, especially when you’re working with multiple stakeholders or integrating GPT into real products. By separating concerns and turning prompts into clean, manageable blocks, you don’t just improve performance; you make collaboration easier and scaling smoother. Now I don’t have to worry about what changes my boss makes; he can do whatever he wants within his realm.
FAQs
Q: What is modular prompting?
A: Modular prompting is a technique to divide a prompt into multiple sections, making it easier to manage and customize.
Q: What are the benefits of modular prompting?
A: Modular prompting brings structure, flexibility, and peace of mind, especially when working with multiple stakeholders or integrating GPT into real products.
Q: Can I use modular prompting with any LLM?
A: Yes, the modular prompting technique works with any LLM that supports system instructions.
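For example, with any chat-style API, the assembled modular prompt simply becomes the system message. Here is a sketch using OpenAI’s Chat Completions endpoint; other providers expose an equivalent system or instructions field, and the model name is illustrative:

from openai import OpenAI

client = OpenAI()

system_prompt = (
    "Persona: You are a friendly and precise assistant.\n\n"
    "Core Logic: Ask the user for two numbers, one at a time, "
    "then return their sum.\n\n"
    "Output Format: Reply with a short, human-friendly sentence."
)

reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Hi, I have two numbers to add."},
    ],
)
print(reply.choices[0].message.content)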
Q: How can I get started with modular prompting?
A: You can start by dividing your prompts into sections, such as persona, core logic, and output format, and then customize each section to fit your needs.