John Beck product · AI · systems

I build AI systems that actually answer to the people using them.

Here's how I work and how I think. All of it asks the same question: what do humans still need to do as AI does more? Frame it how you want.

Pick a lens. I'll adjust the page around it.

AI learns from us, works for us, and asks us to trust it, usually before it's earned that. The interesting part isn't the capability. It's the relationship. How much do we hand over? How much do we keep? Where does human judgment still have to sit? What makes you trust AI?

Work and thinking, in one list.

Case studies, essays, and frameworks all live here together. Pick a lens above and the order shifts to match. Less relevant pieces dim but don't disappear.

  1. SayHi: A founder's case study. Shipping a language app to 2.6M installs, then seeing it through an Amazon acquisition.
  2. The Translator Portal: A working surface for thousands of linguists. The workflow had to carry the weight, not the interface.
  3. Brand Cognition: A working model for how people remember brands. Used to score marketing work against actual attention rather than taste.
  4. Sourdough: A consulting investigation. What does it take to build a usable tool for people who aren't thinking about tools?
  5. MQM Scoring: A rubric for judging machine translation, turned into a dashboard linguists could actually triage from.
  6. Life After Acquisition: The unglamorous year of integrating a small product into a very large company, and what it taught me about scope.
  7. The AI Improvement Framework: A framework for deciding what to fix when the model is wrong, and who should fix it.
  8. Editorial Review: Human review at enterprise scale. The argument for doing it anyway, and the shape of making it fast.
  9. AI/Language: Open Threads: Essays on what language models still don't know about language, and what that means for the people building with them.
  10. HITL is the Design Question: The short argument for why human-in-the-loop is the organizing idea of the next decade of products, not a compliance step.

Things I'm turning over right now.

Half-baked on purpose. They aren't pieces yet. Just the things I'd be happy to think out loud about if we talked this week.

  • 2026-03 open

    What the margin should do

    Working through when AI commentary is useful next to a case study, and when it's just noise. The answer keeps coming back to: only when it can be wrong, and only when the writer has read it.

  • 2026-04 open

    Portfolios as editorial documents

    The default portfolio template in 2026 is a product landing page. But the thing most hiring managers actually read is closer to a magazine feature. Drafting a short piece on why the form should follow the reader.

  • 2026-04 open

    Evaluating AI products end-to-end

Sketching a practice for evaluating AI product work the way we evaluate design work. The bar: did it make the user's day easier? Open question: is there a rubric, or is it always a story?

You read the whole thing. However you got here, I'm happy to talk about what you're working on.

— john beck