Your AI is Lying to You (And It’s Your Fault)

We need to talk about “leakage.”

When you type a casual prompt like “Write a sales page,” you aren’t just giving the AI instructions. You are giving it permission to guess. You are leaving massive logical holes that the model fills with fluff, hallucinations, and scope creep.

With the release of GPT-5.2, casual prompting is dead. This model is a Ferrari engine; if you don’t put it on rails, it will crash at 200 mph.

To get reliability, you need The 5.2 Constraint System.

This isn’t just “better prompting.” This is AI Engineering.

We are going to build a mega-workflow that:

  • Audits your logic to find where the AI will lie.
  • Architects XML controls that turn text into executable code.
  • Compresses memory so the AI never forgets the goal.

Let’s build your new competitive advantage.


Step 1: The Logic Auditor (Failure Mode Analysis)

This prompt acts as a stress test. It reads your current instruction and tells you exactly how GPT-5.2 will misunderstand it.

It looks specifically for “Failure Modes” like verbosity leaks (writing too much) or hallucination triggers (inventing facts).

Copy/Paste this prompt:

#CONTEXT:
You are a Senior AI Reliability Engineer. You specialize in "Failure Mode Analysis" for Large Language Models. You know that GPT-5.2 requires explicit constraints to function safely.

#ROLE:
Adopt the role of "The Pessimistic Auditor." Your job is to find every possible way a user's prompt could be misinterpreted, lead to hallucinations, or produce excessive verbosity.

#RESPONSE GUIDELINES:
1.  Analyze the user's [Input Prompt].
2.  Search for these specific "Failure Modes":
    -   **Verbosity Leak:** Is the length undefined? (Risk: The model writes an essay).
    -   **Scope Creep:** Are negative constraints missing? (Risk: The model invents features).
    -   **Hallucination Trigger:** Is the data source vague? (Risk: The model invents facts).
3.  Do not be polite. Be precise.

#TASK CRITERIA:
-   Output a "Risk Matrix" table.
-   Identify which specific XML control blocks are missing.

#INFORMATION ABOUT ME:
-   My Input Prompt: [INSERT YOUR CURRENT PROMPT HERE]

#RESPONSE FORMAT:
**RISK MATRIX**
| Failure Mode | Probability | Consequence | Recommended Fix |
| :--- | :--- | :--- | :--- |
| Verbosity Leak | High | Wasted tokens, hard to parse | Add <output_verbosity_spec> |
| ... | ... | ... | ... |

**REQUIRED UPGRADES:**
-   [List the specific XML tags needed]

What you’ll get back:
A brutal “Risk Matrix” exposing every flaw in your logic.
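Before you even run the Auditor, you can approximate its three checks with a rough local linter. This is a sketch only: the keyword patterns below are my own illustrative assumptions about what "constrained" language looks like, not GPT-5.2 behavior or an official specification.

```python
import re

# Heuristic stand-ins for the three Failure Modes the Auditor hunts for.
# The keyword lists are illustrative assumptions, not a GPT-5.2 spec.
FAILURE_MODES = {
    "Verbosity Leak": r"\b(\d+\s*(words?|sentences?|bullets?)|concise|brief|max(imum)? length)\b",
    "Scope Creep": r"\b(do not|don't|never|only|exclude|avoid)\b",
    "Hallucination Trigger": r"\b(based on|source|cite|provided|attached|from the)\b",
}

def audit_prompt(prompt: str) -> list[str]:
    """Return the failure modes a prompt is exposed to.

    A mode is flagged when the prompt contains NO language that would
    constrain it (e.g. no length spec at all -> Verbosity Leak).
    """
    text = prompt.lower()
    return [mode for mode, pattern in FAILURE_MODES.items()
            if not re.search(pattern, text)]

# The casual prompt from the intro trips all three modes.
risks = audit_prompt("Write a sales page")
```

A linter like this catches the obvious leaks for free; the Auditor prompt is still what tells you *how* the model will exploit them.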


Step 2: The XML Architect (The Fix)

This is the heavy lifter. It takes your weak prompt and encases it in the specific XML tags that GPT-5.2 respects.

This turns “text” into “executable logic.” We are moving from “please do this” to “execute this function.”

Copy/Paste this prompt:

#CONTEXT:
You are an expert GPT-5.2 Prompt Engineer. You do not write conversational prompts; you write "System Directives" using XML scaffolding. You believe that ambiguity is the enemy of automation.

#ROLE:
Adopt the role of "The XML Architect." Your goal is to rewrite the user's prompt into a strict, machine-readable format that eliminates all "wiggle room" for the model.

#RESPONSE GUIDELINES:
1.  Ingest the user's [Original Prompt] and the [Audit Risks].
2.  Rewrite the prompt including these MANDATORY blocks:
    -   `<output_verbosity_spec>`: Define strict sentence/bullet counts.
    -   `<design_and_scope_constraints>`: Use the phrase "Do NOT invent..."
    -   `<uncertainty_and_ambiguity>`: Define exactly when to ask questions vs. assume.
    -   `<tool_usage_rules>`: (If applicable) Define when to use tools.
3.  Remove all conversational fluff (e.g., "Please," "I would like").

#TASK CRITERIA:
-   The output must be copy-paste ready for the System Prompt field.
-   Use "Command Voice" (Imperative mood).

#INFORMATION ABOUT ME:
-   Original Prompt: [INSERT ORIGINAL PROMPT]
-   Risk Analysis: [INSERT OUTPUT FROM PROMPT 1]

#RESPONSE FORMAT:
[COMPLETE SYSTEM PROMPT WITH XML TAGS HERE]

What you’ll get back:
A bulletproof System Prompt wrapped in XML tags that enforce discipline.
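To make the scaffolding concrete, here is a minimal sketch of the output shape the Architect produces, built programmatically. Only the tag names come from the guidelines above; the default block contents are illustrative placeholders you would replace with your own rules.

```python
# Sketch of the XML scaffolding from Step 2. Tag names mirror the
# MANDATORY blocks; the default bodies are placeholder assumptions.
MANDATORY_BLOCKS = {
    "output_verbosity_spec": "Respond in at most 5 bullet points of one sentence each.",
    "design_and_scope_constraints": "Do NOT invent features, facts, or requirements beyond the task below.",
    "uncertainty_and_ambiguity": "If a required input is missing, ask one clarifying question; otherwise assume nothing.",
}

def scaffold(task: str, blocks: dict[str, str] = MANDATORY_BLOCKS) -> str:
    """Wrap an imperative task statement in the mandatory XML control blocks."""
    xml = "\n".join(f"<{tag}>\n{body}\n</{tag}>" for tag, body in blocks.items())
    return f"{xml}\n<task>\n{task}\n</task>"

system_prompt = scaffold("Write a sales page for the product described in the attached brief.")
```

Note the design choice: the task goes last, after every constraint block, so the model reads the rules before the request.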


Step 3: The State Compactor (The Save Point)

In long workflows, GPT-5.2 can get “lost in the scroll.” The context window fills up with noise, and reasoning degrades.

This prompt forces the model to stop, look at everything it has done, and compress it into a “Save Point.” It’s like a video-game save slot for your conversation.

Copy/Paste this prompt:

#CONTEXT:
We are in the middle of a complex, multi-turn workflow. The context window is filling up with noise. We need to compress the state to maintain high-reasoning performance.

#ROLE:
Adopt the role of "The Compression Algorithm." Your job is to discard historical noise and retain only the "Active State" data.

#RESPONSE GUIDELINES:
1.  Read the entire [Conversation History].
2.  Generate a "Compacted State Object" containing ONLY:
    -   The User's original Goal (immutable).
    -   Constraints currently active (what can we NOT do).
    -   Milestones completed (facts, not narrative).
    -   The immediate Next Step.
3.  Discard all chit-chat, politeness, and intermediate reasoning steps.

#TASK CRITERIA:
-   Output must be a JSON object.
-   No markdown formatting outside the code block.

#INFORMATION ABOUT ME:
-   Conversation History: [PASTE FULL CONVERSATION]

#RESPONSE FORMAT:
```json
{
  "original_goal": "...",
  "active_constraints": ["...", "..."],
  "completed_milestones": [
    {"step": 1, "result": "..."},
    {"step": 2, "result": "..."}
  ],
  "next_action_required": "..."
}
```

What you’ll get back:
A clean JSON object representing the “truth” of your project, ready to be fed back into the AI to wipe the slate clean.
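On the consuming side, a Save Point is only useful if you validate it before feeding it back. A minimal sketch, assuming a standard chat-style message list (the roles and two-message shape are assumptions about your client library, not part of the prompt):

```python
import json

# Validate the Compacted State Object and rebuild a fresh conversation
# from it. Required keys mirror the #RESPONSE FORMAT above.
REQUIRED_KEYS = {"original_goal", "active_constraints",
                 "completed_milestones", "next_action_required"}

def restore_from_save_point(raw_json: str) -> list[dict[str, str]]:
    state = json.loads(raw_json)
    missing = REQUIRED_KEYS - state.keys()
    if missing:
        raise ValueError(f"Compacted state is missing keys: {sorted(missing)}")
    summary = (
        f"Goal: {state['original_goal']}\n"
        f"Constraints: {'; '.join(state['active_constraints'])}\n"
        f"Done: {len(state['completed_milestones'])} milestones\n"
        f"Next: {state['next_action_required']}"
    )
    # A fresh two-message conversation replaces the noisy history.
    return [
        {"role": "system", "content": summary},
        {"role": "user", "content": state["next_action_required"]},
    ]

save_point = json.dumps({
    "original_goal": "Ship the sales page",
    "active_constraints": ["No invented testimonials"],
    "completed_milestones": [{"step": 1, "result": "Audit done"}],
    "next_action_required": "Draft the headline section",
})
messages = restore_from_save_point(save_point)
```

The validation step matters: if the model drops a key while compressing, you want a loud error now, not a silently amnesiac conversation later.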


The Bottom Line

  • Prompt 1 identifies the “leakage” in your logic before you deploy.
  • Prompt 2 seals those leaks with XML “zoning laws” that enforce discipline.
  • Prompt 3 keeps the system efficient by compressing memory during long tasks.

This is how you stop prompting and start engineering.


Level Up Your AI Workflow

Want more systems like this?

  • Start: Shane.flooks.ca — Complex AI concepts broken down into clear, actionable insights.
  • Level Up: Patreon — Get my personal cheat sheets, templates, and coaching.
  • Go Pro: Hire Me — Custom AI consulting and training for your team.