Dev Espresso #1 – Grow with AI, Don’t Outsource Your Intelligence

Dariusz Luber

What This Post Is About (In Brief)

In this episode I want to focus on one principle: delegate drafting, never delegate judgment. Always be the final decision maker.

After reading this you’ll walk away with a small loop you can start using today whenever you prompt a model.

The Core Risk

Many developers (and students) are drifting into a prompt → accept → paste habit. Short‑term speed, long‑term cognitive atrophy. You end up:

  • Shipping code you can’t defend.
  • Accumulating silent architectural drift (inconsistent patterns creeping in unnoticed).
  • Losing the ability to reason independently when the model is wrong or vague.

Ownership Reminder

You still own: correctness, security, maintainability, ethical impact. “The model gave it to me” is never an excuse.

Human-in-the-Loop Flow (Diagram)

When working with AI, never accept the first output if the topic isn’t well known to you or you don’t fully understand the result. When in doubt, force at least one probe + one explanation cycle so you understand and own the outcome.

Here is how the flow looks (a simplified version for clarity):

flowchart LR
  A[Generate / Context] --> B[Inspect]
  B --> C[Probe]
  C --> D[Refine]
  D --> E[Explain Back]
  E --> F[Integrate & Test]
  F -->|next cycle| A
  %% Simple style annotations (kept minimal for compatibility)
  style A fill:#222,stroke:#555,color:#fff
  style B fill:#222,stroke:#555,color:#fff
  style C fill:#222,stroke:#555,color:#fff
  style D fill:#222,stroke:#555,color:#fff
  style E fill:#222,stroke:#555,color:#fff
  style F fill:#222,stroke:#555,color:#fff

Diagram breakdown

Let's break down each step of the flow:

  • Generate / Context: Provide the smallest useful brief. Include goal, constraints, acceptance criteria, stack, and any key snippets. Prefer concrete examples over adjectives.
  • Inspect: Skim for feasibility gaps, contradictions, and missing constraints. Flag placeholders, hallucinated APIs, or vague steps.
  • Probe: Ask targeted questions to surface uncertainty and edge cases (e.g., “List invariants,” “What fails under O(n) memory limits?”, “Show 3 failing tests the current plan would not pass.”).
  • Refine: Adjust constraints or request structured output. Prefer “minimal diff” or “options with trade‑offs” over full rewrites.
  • Explain Back: Have the model restate the approach in your terms and map steps to your codebase. Require rationale, complexity notes, and failure modes.
  • Integrate & Test: Apply in a throwaway branch, run unit tests, validate acceptance criteria, and instrument if needed. Keep the delta small.
  • Next Cycle: If gaps remain, loop with a focused prompt. Stop when marginal value drops or acceptance criteria are met.
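
If you want to make the loop mechanical, it fits in a handful of lines. A minimal sketch in Python, assuming a hypothetical ask_model() helper wired to whatever model client you use:

def ask_model(prompt: str) -> str:
    # Hypothetical helper: wire this to your model client of choice.
    raise NotImplementedError

def run_cycle(brief: str) -> str:
    # Generate / Context: the smallest useful brief.
    draft = ask_model(f"Goal, constraints, acceptance criteria:\n{brief}")
    # Inspect: the model can help flag its own gaps, but you read the draft.
    gaps = ask_model(f"List feasibility gaps, vague steps, or invented APIs in:\n{draft}")
    # Probe: force uncertainty and edge cases to the surface.
    risks = ask_model(f"List invariants and 3 failing tests this plan would miss:\n{draft}")
    # Refine: ask for the smallest change, not a rewrite.
    revised = ask_model(f"Revise as a minimal diff, addressing:\n{gaps}\n{risks}\n---\n{draft}")
    # Explain Back: if you cannot restate this yourself, loop again.
    return ask_model(f"Restate the approach, rationale, and failure modes of:\n{revised}")

Integrate & Test stays on your side of the keyboard: apply the result in a throwaway branch and run the tests yourself.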

Decision Lens for AI Suggestions

When we get AI output, we can categorize it into three buckets:

  1. Trivial & Understood: The output is clear and aligns with our understanding. We can adopt it directly.
  2. Not Understood: The output is unclear or confusing. We need to probe further to clarify.
  3. Smells / Over-Abstract: The output seems overly complex or abstract. We should request a simpler variant.
In diagram form:

flowchart TD
  Q{AI Output Arrives} -->|Trivial <br>& Understood <br>& Meets Standards| A[Adopt]
  Q -->|Not Understood| P[Probe Further]
  Q -->|Smells / Over-Abstract| R[Request Simpler Variant]
  P --> X[Clarify Why + Risks]
  R --> S[Compare Simpler vs Original]
  X --> V[Validate with Tests]
  S --> V
  V -->|Meets Standards| A
  V -->|Gaps Found| P
  A --> C[(Commit with Notes)]

Goal: institutionalize a pause - avoid unconscious copy/paste.
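
The same lens fits in code, if that helps it stick. A sketch; the bucket names and the triage() helper are mine, not any standard API:

from enum import Enum, auto

class Verdict(Enum):
    ADOPT = auto()      # trivial & understood & meets standards
    PROBE = auto()      # not understood: ask "why + risks" first
    SIMPLIFY = auto()   # smells / over-abstract: request a leaner variant

def triage(understood: bool, meets_standards: bool, over_abstract: bool) -> Verdict:
    # Note that no branch leads straight to paste: that is the pause.
    if not understood:
        return Verdict.PROBE
    if over_abstract:
        return Verdict.SIMPLIFY
    if meets_standards:
        return Verdict.ADOPT
    return Verdict.PROBE  # gaps found: loop back and probe again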

Prompt Archetypes

Here are some prompt archetypes to use during the Probe and Explain Back steps; they help surface risks, tradeoffs, and gaps in understanding:

  • Clarifier: “Explain this like I know OAuth basics but not PKCE.”
  • Tradeoff Explorer: “Give 3 alternatives; compare performance, readability, security risk.”
  • Risk Scanner: “List edge cases + potential failure or escalation paths; suggest test names.”
  • Confidence Probe: “Which part of your answer is most likely to be outdated or incorrect?”
  • Compression: “Summarize the delta between version A and B in 2 sentences.”
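
One way to keep these archetypes within reach is a small template dict. A sketch; the placeholders in braces are illustrative, fill them with your actual topic, code, or versions:

# Prompt archetypes as reusable templates.
ARCHETYPES = {
    "clarifier": "Explain {topic} like I know {known} but not {unknown}.",
    "tradeoff": "Give 3 alternatives to {approach}; compare performance, readability, security risk.",
    "risk_scanner": "List edge cases + failure or escalation paths for {code}; suggest test names.",
    "confidence": "Which part of your answer is most likely to be outdated or incorrect?",
    "compression": "Summarize the delta between {version_a} and {version_b} in 2 sentences.",
}

prompt = ARCHETYPES["clarifier"].format(
    topic="this auth flow", known="OAuth basics", unknown="PKCE"
)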

Mini Example (Auth Decision)

Let's do a quick example.

Context: Need auth for a new TS / Python stack. Considering Authentik, Keycloak, BetterAuth, or rolling a minimal in‑app layer.

The model’s first answer lists SaaS + self‑hosted options (Auth0, Keycloak, SuperTokens, etc.) and suggests Authentik for self‑hosting due to its feature set + community.

It's attempting to accept, but we pause and probe:

  • “Why didn’t you mention BetterAuth? Compare it to Authentik for low‑ops self‑hosting.”
  • “List hidden operational costs for Keycloak vs lightweight lib.”
  • “Explain how session vs token storage differs in BetterAuth; where are third‑party provider tokens persisted?”

Result: You learn tradeoffs (federation support, protocol coverage, operational weight) instead of blindly copying a default.

Silent Architecture Drift (Why This Matters)

Unchecked AI suggestions introduce inconsistent:

  • Error handling patterns
  • Security assumptions (e.g., missing CSRF / unsafely stored tokens)
  • Abstractions (extra layers “because it looked clean”)

Each alone seems harmless; together they rot clarity. Drift is fine only if intentional & documented.
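
To see what drift looks like on the page, here is a hypothetical miniature: two lookups, two error-handling dialects, each plausibly AI-suggested in isolation (DB and the names are stand-ins):

DB: dict[str, dict] = {}  # stand-in for your data layer

class UserNotFound(Exception):
    pass

# Pattern A: missing data raises a typed error; middleware maps it to a 404.
def get_user(user_id: str) -> dict:
    user = DB.get(user_id)
    if user is None:
        raise UserNotFound(user_id)
    return user

# Pattern B, drifted in later: swallow the miss and return a sentinel.
def get_order(order_id: str) -> dict:
    try:
        return DB.get(order_id) or {}
    except Exception:
        return {}  # callers can no longer tell "missing" from "broken"

Either pattern is defensible; mixing them silently is the rot.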

Test First Acceptance Filter

Before trusting generated code ask:

  • Did it supply or suggest tests? (If not: request them.)
  • Are failure paths covered? (Timeouts, malformed input, auth edge cases.)
  • Any “just disable failure” anti‑patterns (e.g. swallowing errors, force‑green tests)? Reject immediately.
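
Here is what “failure paths covered” can look like in practice, sketched with pytest; parse_amount is a hypothetical function under test:

import pytest

def parse_amount(raw: str) -> int:
    """Parse a non-negative integer amount in cents."""
    value = int(raw.strip())  # raises ValueError on malformed input
    if value < 0:
        raise ValueError("amount must be non-negative")
    return value

def test_malformed_input_rejected():
    with pytest.raises(ValueError):
        parse_amount("twelve")

def test_negative_amount_rejected():
    with pytest.raises(ValueError):
        parse_amount("-5")

def test_whitespace_tolerated():
    assert parse_amount(" 42 ") == 42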

Security & Reliability Quick Glance

Even for a tiny snippet, ask about:

  • Input validation risks
  • Injection / deserialization / SSRF / secret leakage vectors
  • State management and session theft possibilities

Then ask: “What should I manually verify in docs before using this?”
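
To make the glance concrete, here is the kind of smell those probes should catch, sketched with Python’s sqlite3 (a stand-in for any data layer):

import sqlite3

# Injection vector: user input interpolated straight into SQL.
def find_user_unsafe(conn: sqlite3.Connection, name: str):
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"  # injectable
    ).fetchall()

# The parameterized variant the probe should steer you toward.
def find_user_safe(conn: sqlite3.Connection, name: str):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()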

Daily Practice Challenge

Pick one small piece of code/config you only half understand. Run this sequence:

  1. Have AI explain it high‑level.
  2. Ask for line‑by‑line with edge cases & failure modes.
  3. Request one alternative + pros/cons.
  4. You (not AI) write a 3–4 sentence summary of what you now understand.

Repeat tomorrow with a new slice. 30 days = massive compound gains.

Common Failure Modes

| Mode | Symptom | Fix Prompt |
| --- | --- | --- |
| Cargo‑culting | Copying patterns you can’t explain | “Why this shape vs simpler variant?” |
| Hallucinated Abstraction | Unnecessary layer appears | “Remove abstraction; show minimal explicit version.” |
| Outdated Stack | Old boilerplate (e.g. CRA) | “List 2025‑current recommended starters & why.” |
| False Green | Silent try/catch / disabled test | “Show tests that would fail for edge X.” |
| Drift | Inconsistent auth / logging | “Generate a style audit checklist for this domain.” |
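
“False Green” deserves a concrete shape, since it is the easiest to miss in review. A hypothetical sketch; Gateway stands in for any external client:

class Gateway:
    def charge(self, amount: int) -> None:
        raise RuntimeError("declined")  # simulate a failing dependency

gateway = Gateway()

# Anti-pattern: the suite stays green while the failure is swallowed.
def charge_card_false_green(amount: int) -> bool:
    try:
        gateway.charge(amount)
    except Exception:
        pass  # error swallowed: tests pass, money never moves
    return True

# Honest version: let the failure surface so a test can catch it.
def charge_card(amount: int) -> bool:
    gateway.charge(amount)  # raises on failure
    return True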

Risk Categorization Snapshot

mindmap
  root((AI Pairing Risks))
    Cognitive
      Passive Acceptance
      Lost Understanding
      Overreliance
    Architecture
      Style Drift
      Hidden Coupling
      Outdated Patterns
    Security
      Token Misuse
      Missing Validation
      Leaky Error Messages
    Quality
      Untested Paths
      Silent Failures
      Over-Abstraction

Use as a quick mental checklist when reviewing model output.

Minimal Checklist (Paste Near Your Editor)

  • Added context (constraints, stack, priorities)
  • Inspected for outdated libs / security smell
  • Asked at least 1 tradeoff + 1 risk question
  • Requested / reviewed tests
  • Re‑explained in my own words
  • Integrated with consistency (naming, patterns, docs)

Mindset Shift

AI is a force multiplier for curiosity, not a substitute for it. Measure success not by lines generated per hour, but by depth of understanding per problem solved.

Call To Action

Try the challenge once today. Share (or keep private) your 3–4 sentence learning summary. Then note one area where AI still feels like a black box for you—future episodes will be shaped by those gaps.

If it helped, pass the challenge to a teammate and compare summaries tomorrow.

“Grow with AI—don’t outsource your intelligence.”

Found this helpful? Power up my next content!

Buy me a coffee