Computers

  • RAG: A (mostly) no-buzzword explanation

    Retrieval-Augmented Generation (RAG) is a pattern that works around the knowledge cutoff and curbs hallucinations by giving an LLM access to the right data at answer time. Instead of asking the model to “remember everything”, RAG lets it look things up first, then answer. A rough sketch of the pattern follows this excerpt.

    Read more ﹥
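
    To make the pattern concrete, here is a minimal sketch (not code from the post): a naive keyword-overlap retriever stands in for a real vector search, and the prompt it builds is what you would hand to whichever LLM API you use. All names are illustrative.

      # Minimal RAG sketch: retrieve first, then ask the model to answer
      # grounded only in what was retrieved.

      def retrieve(question: str, documents: list[str], k: int = 3) -> list[str]:
          """Return the k documents sharing the most words with the question."""
          terms = set(question.lower().split())
          return sorted(documents,
                        key=lambda doc: len(terms & set(doc.lower().split())),
                        reverse=True)[:k]

      def build_prompt(question: str, documents: list[str]) -> str:
          # Look things up first...
          context = "\n\n".join(retrieve(question, documents))
          # ...then constrain the model to answer from that context, which is
          # what works around the knowledge cutoff and curbs hallucination.
          return (
              "Answer the question using only the context below. "
              "If the context is not enough, say so.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}"
          )

      docs = [
          "Our refund window is 30 days from the date of purchase.",
          "Support hours are 9am to 5pm CET, Monday through Friday.",
      ]
      print(build_prompt("How long do I have to request a refund?", docs))

    In production the keyword ranker would be replaced by embedding search over a document store, but the shape of the flow stays the same.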

  • Unikernels, without the marketing

    It all started when I saw Prisma put “serverless” and “no cold starts” in the same sentence describing their “Prisma Postgres” product 🤔

    Read more ﹥

  • Solving the Openfire Lab Blue team challenge

    As a cybersecurity analyst, you are tasked with investigating a data breach targeting your organization’s Openfire messaging server. Attackers have exploited a vulnerability in the server, compromising sensitive communications and potentially exposing critical data.

    Read more ﹥

  • Solving the ShadowCitadel Lab Blue team challenge 🫆

    Today, we dive into a host-based forensics investigation: a curious case of a breach inside the enterprise environment of a company called TechSynergy. They detected an anomaly after an employee engaged with an unexpected email attachment. This triggered a series of covert operations within the network, including unusual account activity and system alterations.

    Read more ﹥

  • How to prevent token misuse in LLM integrations

    LLMs are powerful. And expensive. Every token counts, and if you’re building something that uses an LLM API (Claude, OpenAI, Gemini or PaLM, Mistral, etc.), malicious users can abuse it to burn through your credits. This is especially true for apps that take user input and feed it to the model. The trick is that

    Read more ﹥
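
    Whatever specific trick the post lands on, a generic first line of defence is to bound what any single user or request can make you spend before it ever reaches the model. A rough sketch, with made-up limits and helper names:

      # Generic guards against token-burning abuse; limits and names are made up.
      import time
      from collections import defaultdict

      MAX_INPUT_CHARS = 4_000      # hard cap on user-supplied text
      MAX_OUTPUT_TOKENS = 512      # pass this as the provider's output-token limit
      REQUESTS_PER_MINUTE = 5      # per-user rate limit

      _recent_requests: dict[str, list[float]] = defaultdict(list)

      def check_request(user_id: str, user_input: str) -> str:
          """Validate a request before any tokens are spent on an LLM API call."""
          now = time.monotonic()
          recent = [t for t in _recent_requests[user_id] if now - t < 60]
          if len(recent) >= REQUESTS_PER_MINUTE:
              raise ValueError("rate limit exceeded")
          if len(user_input) > MAX_INPUT_CHARS:
              raise ValueError("input too long")
          _recent_requests[user_id] = recent + [now]
          # Only now call the LLM, passing MAX_OUTPUT_TOKENS as the provider's
          # max-output-token parameter so a single request stays bounded.
          return user_input

    Most provider APIs also report per-request token usage in their responses, which makes enforcing a per-user budget straightforward on top of checks like these.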