Latest blog posts

  • When speed becomes strategy 💨

    Startups thrive on speed and hustle. But when everything feels urgent, how can you tell what actually matters?



  • A pint with Jimothy: on fear, ego, and hiring smarter

    ☕️ We started with coffee, like most chats do. Black for me, cappuccino for Jimothy (he actually prefers Jim, or James). Corner terrace, Singelgracht. One of those confusing Amsterdam afternoons where the sky can’t decide: sunlight breaks through the clouds, then ducks back behind them. Just long enough to let a few leaves blow



  • When Slack starts to feel like a DDoS attack

    In software engineering, we often rely on “exponential back-off” when retrying failed network requests: a technique where each subsequent attempt is spaced out further in time to avoid overloading the system. Oddly enough, I’ve found myself applying a similar concept to human communication. As an Engineering lead, I’m frequently on the receiving end of
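
    For context, the back-off idea the post builds on, sketched in a few lines of Python (the function and parameter names here are illustrative, not taken from the article):

    ```python
    import random
    import time

    def retry_with_backoff(request, max_attempts=5, base_delay=1.0):
        """Retry a failing call, doubling the wait between attempts."""
        for attempt in range(max_attempts):
            try:
                return request()
            except Exception:
                if attempt == max_attempts - 1:
                    raise  # out of attempts, surface the error
                # Wait 1s, 2s, 4s, 8s, ... plus a little jitter so many
                # clients don't all retry at the same instant.
                time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.5))
    ```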



  • AI for Engineering managers: adapt now or trail behind

    Remember when a five-digit Stack Overflow score was a flex? Today, that and a vintage 2022 playbook will buy you precisely zero leverage. Yesterday’s job, tomorrow’s irrelevance. Many Engineering managers still run on three rituals. However, none of those moves the product faster. Meanwhile, AI agents are quietly doing code reviews, generating boilerplate, even writing RFCs. The org chart hasn’t



  • A minimal LLM Ops stack with tracing and model costs

    I built a minimal FastAPI “customer support reply drafter” with TF-IDF retrieval and Langfuse tracing. You’ll see exactly what context the model used, where latency came from, and what each request cost, plus the trade-offs behind the design.



  • RAG: A (mostly) no-buzzword explanation

    Retrieval-Augmented Generation (RAG) is a pattern that fixes the knowledge cutoff and hallucination problems by giving an LLM access to the right data at answer time. Instead of asking the model to “remember everything”, RAG lets it look things up first, then answer.
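
    As a rough sketch of that “look things up first, then answer” flow (the `search` and `llm` callables below are placeholders for whatever retriever and model client you use, not anything from the post):

    ```python
    def answer_with_rag(question, search, llm, top_k=3):
        # 1. Look things up first: fetch the most relevant documents.
        documents = search(question, top_k=top_k)

        # 2. Then answer: hand the model the question plus that context.
        context = "\n\n".join(documents)
        prompt = (
            "Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
        return llm(prompt)
    ```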



  • Unikernels, without the marketing

    It all started when I saw Prisma put “serverless” and “no cold starts” in the same sentence describing their “Prisma Postgres” product 🤔



  • Solving the Openfire Lab Blue team challenge

    As a cybersecurity analyst, you are tasked with investigating a data breach targeting your organization’s Openfire messaging server. Attackers have exploited a vulnerability in the server, compromising sensitive communications and potentially exposing critical data.



Looking for more posts?

Check out People or Computers

For contact options, head to About and Contact