How to Write Well-Structured Prompts for LLMs


Ivan Moura Campos

Emeritus Professor – Department of Computer Science, UFMG, Brazil

April 2026

Introduction

You have probably used an AI such as ChatGPT, Gemini, or Claude and felt frustrated by a vague, incomplete, or entirely off-target response. Most of the time the problem is not the AI, but rather how the question was formulated.

Large Language Models, or LLMs, are extraordinarily powerful tools, but they work like a refined mirror, reflecting back with precision what you project onto them. A vague input produces a vague output; a well-structured input produces a focused, useful, high-quality response.

This text will help you with the following:

- How to organize your material when formulating a request to an AI, and

- How to use the Markdown language to structure the request in a clear and effective way.

Neither one requires technical knowledge, only attention and practice.

1. LLMs and the Context Window

To communicate well with any interlocutor, it is useful to understand how they work. With AIs, this is no different.

What is an LLM? A language model is a program trained on enormous quantities of text, including books, articles, conversations, and code, to learn language patterns. Based on those patterns, it is capable of generating text that makes sense, answering questions, and executing tasks.

What is the context window? Every time you open a conversation with an AI, it starts "from scratch." It has no memory of previous conversations, unless you provide them. The only material it has to work with is what is present in the current conversation. This is called the context window.

Think of the context window as the AI's work desk. Everything you place on that desk is taken into consideration. The clearer, more organized, and more relevant the material you lay out on that desk, the better the work that will be delivered.

The practical implication is as follows: what you place in that window, and how you organize it, determines everything the model can do for you. A well-structured prompt uses the context window efficiently. It provides the model with the right information in the right order and leaves out what is irrelevant. A poorly structured prompt wastes space and introduces noise.

2. Why Precision Matters

During training, an LLM learns statistical patterns: which words (tokens) tend to follow other words, which ideas cluster together, which structures signal which intentions. It does not memorize facts like a database; it internalizes relationships.

When you send a message, the model does not think like a person. It generates a response by predicting, token by token, the most plausible continuation of your input. The quality of that prediction depends almost entirely on the quality of what you provided. "Garbage in, garbage out" is not a cliché in this context; it is a technical description of what happens.

The AI does not guess intentions. It processes what is written. If you write "help me with a text about leadership," the AI does not know whether you want an academic article, a LinkedIn post, a lecture script, or a list of tips. It will choose an interpretation, and it may not be yours.

That is why precision matters. The model cannot read your mind. It cannot ask clarifying questions, unless you invite it to do so. It will make assumptions to fill in the gaps, and those assumptions may not match your expectations. Every ambiguity in your prompt is an invitation for the model to guess.

3. Why Structure Matters and What Markdown Is

Before presenting the recommended prompt model, we need to briefly discuss a tool that will make your queries much clearer: Markdown.

Markdown is a lightweight way to add structure to plain text. You do not need any special program. You do not need to know how to code. Just type a few simple characters: a # at the beginning of a line creates a heading, two asterisks ** around a word make it bold, and a > creates a block quote. The text immediately becomes more organized.
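For instance, the three markers just described look like this in practice (the sample text is purely illustrative):

```markdown
# Sustainable Packaging

Our goal is **zero plastic** in shipments.

> Note: the figures below are estimates.
```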

Why does this matter for LLMs? Because language models were trained, among other things, on enormous quantities of text formatted in Markdown. When you use headings, labels, and visual hierarchy, you are speaking a language the model already knows deeply. You are signaling structure, not just content. The model can identify what is context, what is task, what is constraint — because each of those elements is visually and semantically delimited.

Compare these two ways of asking for the same thing:

Without structure:

I am writing a blog post for my company, about sustainable packaging, it is for consumers and not for specialists, I want it to be around 500 words, please write it in a friendly tone and without technical jargon.

With structure:

#CONTEXT

- I am writing a blog post for my company's website.

- Topic: sustainable packaging.

 

#TASK

Write the blog post.

 

#REQUIREMENTS

- Target audience: general consumers, not technical specialists

- Length: approximately 500 words

- Tone: friendly and accessible

 

#RESTRICTIONS

- Avoid technical jargon

- Do not include footnotes or citations

Both prompts convey the same information, but the structured version is unambiguous, intelligible, and immediately interpretable, both by the model and by you. If you need to revise it later, you will know exactly where to make changes.
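If you build prompts like this repeatedly, the assembly can be mechanized. A minimal sketch in Python, assuming the labeling convention shown above (the function name `build_prompt` is illustrative, not any real library API):

```python
def build_prompt(sections):
    """Join (label, body) pairs into one Markdown-labeled prompt string."""
    parts = []
    for label, body in sections:
        parts.append(f"#{label}")      # section heading, e.g. #CONTEXT
        parts.append(body.strip())     # section body, already formatted
    return "\n\n".join(parts)

prompt = build_prompt([
    ("CONTEXT", "- I am writing a blog post for my company's website.\n"
                "- Topic: sustainable packaging."),
    ("TASK", "Write the blog post."),
    ("REQUIREMENTS", "- Target audience: general consumers\n"
                     "- Length: approximately 500 words\n"
                     "- Tone: friendly and accessible"),
    ("RESTRICTIONS", "- Avoid technical jargon\n"
                     "- Do not include footnotes or citations"),
])
print(prompt)
```

Because the sections are explicit data rather than a wall of text, revising one of them later means editing one tuple, not hunting through a paragraph.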

4. A Canonical Prompt Format

The format proposed below uses seven labeled sections, each introduced by a Markdown heading. Not all sections are necessary in every query: a simple factual question needs nothing more than a TASK. But for any complex request, using all seven sections consistently will produce better results.

Overview of the seven labels:

#CONTEXT      — background information the LLM needs to better understand you

#PERSONA      — the role or expertise you want the model to adopt

#TASK         — the specific thing you want it to do

#REQUIREMENTS — what the response must include

#RESTRICTIONS — what the response should avoid

#OUTPUT       — format, length, tone, language, ...

#ACTION       — the explicit execution command (the trigger)
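Since the seven labels have a fixed canonical order, a prompt can be sanity-checked mechanically. A small Python sketch (the helper `sections_in_order` is hypothetical, written only to mirror the convention above):

```python
CANONICAL_ORDER = ["CONTEXT", "PERSONA", "TASK", "REQUIREMENTS",
                   "RESTRICTIONS", "OUTPUT", "ACTION"]

def sections_in_order(prompt):
    """Return True if the #LABEL headings used appear in canonical order."""
    found = [line[1:].strip() for line in prompt.splitlines()
             if line.startswith("#")]                  # collect #LABEL lines
    ranks = [CANONICAL_ORDER.index(label) for label in found
             if label in CANONICAL_ORDER]              # ignore unknown labels
    return ranks == sorted(ranks)

print(sections_in_order("#CONTEXT\nbackground\n#TASK\nDo it.\n#ACTION\nGo."))
```

Not every prompt needs every label, so the check only requires that whichever labels are present appear in the expected sequence.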

Let us look at each one in detail.

4.1 #CONTEXT — Setting the Stage

#CONTEXT - The backdrop the model needs

This section answers the question: what does the model need to know about the situation before it can help you? It is not the task itself; it is the prior, surrounding information that makes the task meaningful.

Good context answers questions such as: Who are you? What are you working on? What is the goal of the project? Who is the target audience? What has already been done?

Example:

#CONTEXT

- I am a high school teacher preparing a lesson on climate change.

- My students are 16 years old and have no prior knowledge of environmental science.

4.2 #PERSONA — Assigning a Role to the Model

#PERSONA - The expertise or perspective to adopt

Language models can adopt different voices, styles, and areas of expertise depending on how you frame the request. The PERSONA label lets you specify which hat the model should wear.

This is not merely a stylistic feature. When you ask the model to behave as an expert in a given area, it draws more heavily on the specialized knowledge associated with that field. The quality and relevance of the response typically improve.

Example:

#PERSONA

Act as an experienced science communicator, specialized in explaining complex topics to teenagers, without oversimplifying.

4.3 #TASK — The Central Request

#TASK - Exactly what you need the model to do

This is the heart of the prompt. The TASK should be a single, clear, actionable instruction. Avoid combining multiple tasks under a single label. If you need several things, list them explicitly or consider separate prompts.

The most common failure here is vagueness. "Help me with my report" is not a task. "Write the introduction section of my report" is.

Example:

#TASK

Develop a lesson plan on the causes and effects of climate change.

4.4 #REQUIREMENTS — What the Response Must Include

#REQUIREMENTS - Positive constraints on the response

Requirements are the things the response must do, include, or satisfy. Think of them as a checklist the model should verify before delivering its response.

Use bullet points here. Each requirement should be concrete and verifiable, something you could literally check off as fulfilled when reading the response.

Example:

#REQUIREMENTS

- The lesson will be 45 minutes long

- Include at least one hands-on group activity

- Reference a real example of climate impact from the last 10 years

- Ensure all vocabulary is accessible at the high school level

- Provide the estimated time for each segment of the lesson

- Include a discussion question for the closing

4.5 #RESTRICTIONS — What the Response Should Avoid

#RESTRICTIONS - Negative constraints on the response

While REQUIREMENTS tell the model what to include, RESTRICTIONS tell it what to leave out. This section is especially useful for preventing common failures: excessive length, inappropriate tone, unwanted content, or technical terminology your audience will not understand.

Example:

#RESTRICTIONS

- Do not suggest activities that require special equipment

- Avoid a pessimistic or alarmist tone

4.6 #OUTPUT — How the Response Should Look

#OUTPUT - Format, length, tone, language, ...

The OUTPUT section is where you specify the form of the response, independently of its content. This is where you say: give me a bulleted list, write in formal prose, limit it to 300 words, or respond in English.

Separating output format from task content is one of the essential techniques in prompt writing. It prevents you from burying formatting instructions inside the task description, where they can easily be overlooked.

Example:

#OUTPUT

- Format: structured document with named sections

- Length: approximately 1000 words

- Tone: professional, yet welcoming

- Language: American English

4.7 #ACTION — The Trigger

#ACTION - The explicit instruction to begin

The ACTION label is short and direct. It is the final instruction that tells the model to execute everything that came before. In most cases, it is a single sentence.

You may wonder why this is necessary if you already wrote a TASK. The reason is both psychological and technical: it clearly separates specification from execution and signals to the model that all framing is complete and that it should now produce the final output.

Example:

#ACTION

Write the lesson plan.

5. The Complete Example

Here is everything brought together in a coherent prompt, using the climate change lesson as an example:

As you read it, notice how easy the prompt is to follow. Not just for the model, but for yourself as well. If a colleague asked you what you were trying to accomplish, you could hand them this document and they would understand immediately. That clarity is not accidental. It is the result of structure.

#CONTEXT

- I am a high school teacher preparing a lesson on climate change.

- My students are 16 years old and have no prior knowledge of environmental science.

 

#PERSONA

Act as an experienced science communicator, specialized in explaining complex topics to teenagers, without oversimplifying.

 

#TASK

Develop a lesson plan on the causes and effects of climate change.

 

#REQUIREMENTS

- The lesson will be 45 minutes long

- Include at least one hands-on group activity

- Reference a real example of climate impact from the last 10 years

- Ensure all vocabulary is accessible at the high school level

- Provide the estimated time for each segment of the lesson

- Include a discussion question for the closing

 

#RESTRICTIONS

- Do not suggest activities that require special equipment

- Avoid a pessimistic or alarmist tone

 

#OUTPUT

- Format: structured document with named sections

- Length: approximately 1000 words

- Tone: professional, yet welcoming

- Language: American English

 

#ACTION

Write the lesson plan.

6. Practical Tips for Better Prompts

6.1 You Do Not Always Need All Seven Labels

A simple factual question, such as "What is the capital of Australia?", needs only a TASK. The canonical format is a scaffold meant to facilitate, not to constrain. Use the sections that add clarity and skip the unnecessary ones.

6.2 Iterate and Refine

Think of your first prompt as a draft. If the model's response is not right, examine your prompt before blaming the model. Usually, the problem is a poorly specified TASK, a missing RESTRICTION, or an OUTPUT format you forgot to mention.

6.3 Be Affirmative Whenever Possible

In general, it is more effective to say what you want than only what you do not want. "Write in simple, direct sentences" works better than "do not use complicated language." Reserve RESTRICTIONS for things that are genuinely difficult to express in a positive form.

6.4 One Task per Prompt

If you need five things, write five prompts — or list the five explicitly within a single TASK and number them. Stacking unrelated requests in a single prompt is the fastest way to get a response that addresses everything partially and nothing completely.

6.5 Adjust the Level of Detail to the Complexity

A creative brainstorm needs less specification than a legal summary. A casual email draft needs less structure than a technical report. Use your judgment: the right amount of structure is just enough to remove ambiguity.

7. Common Errors and How to Fix Them

Error: The task is mixed in with the context.

Solution: Move the task to its own section. The model should never have to guess what you actually want it to do.

Error: Contradictory requirements and restrictions.

Solution: Read your prompt aloud before sending it. "Be comprehensive" and "limit to 200 words" cannot both be satisfied at the same time.

Error: Assuming the model knows what you know.

Solution: Use the CONTEXT section. If a detail would change the response, include it. The model knows nothing about your specific situation unless you inform it.

Error: Specifying the format as part of the task.

Solution: Move the format instructions to the OUTPUT section. "Write a bulleted list of the ten most important considerations" mixes task and format. Write them separately:

"#TASK: List the ten most important considerations" and

"#OUTPUT: Format: bulleted list"

8. Conclusion

Writing a well-structured prompt is not a technical skill. It is a communication skill. The same principles that produce a good email, a clear meeting agenda, or an effective work brief also produce a good prompt. Describe the situation, define the objective, specify the constraints, and say what you expect to receive.

The seven-label format proposed here (CONTEXT, PERSONA, TASK, REQUIREMENTS, RESTRICTIONS, OUTPUT, and ACTION) is a practical implementation of those principles. It takes some practice to use fluently, but the return is immediate: more relevant, more precise, and considerably more useful responses.

The context window is your canvas. What you place in it, and how you organize it, is entirely in your hands. Use it well.