The humble placeholder: A small documentation problem worth fixing

Mano Toth, Senior Technical Writer
April 14, 2026

Axiom is designed to be easy to set up. Most customers get from zero to ingesting data in minutes. But there is one small and reliably sticky point: placeholder replacement in code examples.

You copy a command from the docs, paste it into your terminal, and get an error. Not because anything is wrong with the command. Because you forgot to replace DATASET_NAME with your actual dataset. It happens to the best of us, and it tends to happen at exactly the moment when enthusiasm is highest and patience is lowest.

When we introduced edge deployments, one particular placeholder became more complicated. Edge endpoints involve a domain that can take different values depending on which edge deployment you are using. A single placeholder, AXIOM_DOMAIN, could mean several different things for different customers, and the info boxes below the code examples explaining the distinction weren't always enough.

The two-audience problem

There are two very different readers of documentation.

The first is a human. Humans benefit from interactivity. If you can type the name of your dataset into a field and watch it appear in every code example on the page simultaneously, you are much less likely to hit the terminal with a broken command. Humans want their placeholders replaced so they don't have to do it themselves.

The second reader is a large language model. An LLM reading your docs shouldn't see the name of your dataset. It should see DATASET_NAME. It should see the generic instructions about what to replace. The original, unmodified Markdown source is exactly what you want an LLM to consume so it can answer questions like "how do I set up edge ingest?" correctly, without being contaminated by one person's specific values.

These two audiences want something fundamentally different from the same content. And the solution had to work for both.

Code like a devling

I should confess something about my qualifications here.

I'm a technical writer. My primary area of expertise is documentation. I know my way around Markdown, MDX, HTML, and CSS.

I'm also a devling. I know enough about code to be dangerous. When I want to write basic programs, my modus operandi is simple: I read about it, I copy it, I break it, I fix it by copying something else. (Come to think of it, isn't this how most experienced developers write code in the first place?)

A few years ago, writing a complex component like the placeholder configurator from scratch would have been a major undertaking. It would have taken me a week, and in a fast-moving environment like Axiom, that's a week I can't afford to spend neglecting regular documentation work.

What has changed is that I now have LLMs to help me. Before, a task like "write a JavaScript component that intercepts the Mintlify copy button and replaces its behavior" would have meant many hours of trial-and-error. Now, I can describe what I need in plain language and iterate from a working starting point. I'm still the one who decides what to build, why to build it, and what the edge cases are. I still get my hands dirty with code. But if I break things and get stuck, I can ask the LLM to help me. For me, this is what AI-enabled workflows are all about.

Empowered by this, I sat down and built (or rather, vibe-coded) docs-placeholder-configurator.

What it does

The script runs in the browser, on page load, and scans every code block on the page for known placeholder strings: API_TOKEN, DATASET_NAME, AXIOM_DOMAIN, and ORGANIZATION_ID. If it finds any, it injects a configurator component directly below the code block.

The configurator renders a compact input field (or a dropdown, in the case of AXIOM_DOMAIN) for each placeholder found in that specific code block. You type your value. The placeholder in the code block immediately updates to reflect what you typed. Every code block on the page with the same placeholder updates at the same time.
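Conceptually, the scan-and-substitute step is simple string work over a known list of placeholder names. A minimal sketch (function names are illustrative, not the package's actual API):

```typescript
// Known placeholder names the script scans for.
const PLACEHOLDER_NAMES = ["API_TOKEN", "DATASET_NAME", "AXIOM_DOMAIN", "ORGANIZATION_ID"];

// Return the placeholders present in a code block's text.
function findPlaceholders(code: string): string[] {
  return PLACEHOLDER_NAMES.filter((name) => code.includes(name));
}

// Substitute user-provided values; placeholders without a value stay as-is.
function substitutePlaceholders(code: string, values: Record<string, string>): string {
  let result = code;
  for (const [name, value] of Object.entries(values)) {
    if (value) result = result.split(name).join(value);
  }
  return result;
}
```

Running `findPlaceholders` per code block is what decides whether a configurator gets injected at all, and which fields it renders.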

A few details that I think make this more than a toy:

It hooks into the copy button. Mintlify has a built-in copy button on every code block. The script intercepts it using a capture-phase event listener, so when you click Copy, you get the code with your values already substituted. This was the part I was most pleased with. It means you never copy a placeholder by accident.
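The interception can be sketched roughly like this (selectors and helper names are illustrative; the real script's wiring differs in detail):

```typescript
// Pure helper: substitute values into the text that will be copied.
function textForClipboard(raw: string, values: Record<string, string>): string {
  let out = raw;
  for (const [name, value] of Object.entries(values)) {
    if (value) out = out.split(name).join(value);
  }
  return out;
}

// Stub for illustration; the real script reads the user's current inputs.
const currentValues = (): Record<string, string> => ({});

// Browser-only wiring: a capture-phase listener runs before Mintlify's own
// bubble-phase copy handler, so we can cancel it and write our own text.
if (typeof document !== "undefined") {
  document.addEventListener(
    "click",
    (event) => {
      const target = event.target as HTMLElement | null;
      const button = target?.closest("button[data-copy]"); // selector is illustrative
      if (!button) return;
      event.stopPropagation();
      event.preventDefault();
      const code = button.closest("pre")?.textContent ?? "";
      void navigator.clipboard.writeText(textForClipboard(code, currentValues()));
    },
    true // capture phase: fires before bubble-phase handlers
  );
}
```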

It highlights. Replaced values are highlighted in blue, both before and after replacement. Before you fill in a value, the placeholder itself is highlighted, so you can't miss that something needs your attention. After you fill it in, the substituted value is highlighted, so you can see at a glance what changed.

It handles the info boxes. By convention, each of our code examples is followed by a callout box explaining what the placeholders mean. Once the configurator is in place, those explanations are redundant, so the script detects the callout boxes, reads their content, and hides them. If a callout mentions placeholders the configurator doesn't handle, those descriptions are absorbed into the configurator component as text, so nothing is lost.

Values persist for the session. Enter your dataset name once, and it populates every code block on every page you visit during that session. No re-entering, and no storage on our servers: values live in sessionStorage, which is tab-scoped and clears automatically when you close the tab, so your API token is never persisted beyond the current browsing session and never transmitted anywhere. For a docs convenience feature, this is the right trade-off between usability and security.
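In outline, the persistence layer is a thin wrapper around sessionStorage, with an in-memory fallback so the same code can run where sessionStorage doesn't exist (a sketch; the key prefix is illustrative):

```typescript
const memory = new Map<string, string>();

// Prefer sessionStorage (tab-scoped, cleared when the tab closes);
// fall back to an in-memory map outside the browser.
function saveValue(name: string, value: string): void {
  if (typeof sessionStorage !== "undefined") {
    sessionStorage.setItem(`placeholder:${name}`, value);
  } else {
    memory.set(name, value);
  }
}

function loadValue(name: string): string | null {
  if (typeof sessionStorage !== "undefined") {
    return sessionStorage.getItem(`placeholder:${name}`);
  }
  return memory.get(name) ?? null;
}
```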

It works with Mintlify's SPA navigation. Mintlify renders documentation as a single-page app: clicking a link replaces the page content without a full reload. The script uses MutationObserver to detect when new code blocks appear in the DOM and processes them accordingly. It also handles the edge case where Mintlify's AI chat assistant, when closed, can trigger a re-render of the page content.
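The SPA handling can be sketched as follows (simplified; the selector and injection step are illustrative). The key detail is remembering which code blocks have already been processed, so a re-render never injects a second configurator under the same block:

```typescript
// Track code blocks we've already processed, so SPA re-renders don't
// get a duplicate configurator injected under the same block.
const processed = new WeakSet<object>();

function markIfNew(block: object): boolean {
  if (processed.has(block)) return false;
  processed.add(block);
  return true;
}

// Browser-only wiring: re-scan whenever Mintlify swaps in new page content.
if (typeof document !== "undefined") {
  const observer = new MutationObserver(() => {
    for (const pre of document.querySelectorAll("pre")) {
      if (markIfNew(pre)) {
        // inject the configurator below this code block (omitted here)
      }
    }
  });
  observer.observe(document.body, { childList: true, subtree: true });
}
```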

LLMs see the original source. Because all of this happens at runtime in the browser, the Markdown source files are completely unchanged. When you visit axiom.co/docs/send-data/methods.md, you see the raw Markdown with all the original placeholders and their descriptions. The configurator is entirely invisible to anything that reads the source.

The implementation

The script started life as a single vanilla JavaScript file embedded directly in the docs repository, because a custom script file is the only injection point Mintlify allows you to hook into. No module system, no bundler, no modern toolchain to plug into. A single file with no dependencies wasn't really a stylistic choice so much as the only option that actually works.

Once it was working, I extracted it into its own docs-placeholder-configurator package. The source is TypeScript, built with rolldown into two IIFE bundles: a development build and a minified production build.
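A build along these lines produces the two bundles. This is a sketch assuming rolldown's Rollup-compatible config surface, not the package's actual config; file names are illustrative:

```typescript
// rolldown.config.ts (a sketch; option names assumed from rolldown's
// Rollup-compatible config, not copied from the actual project)
import { defineConfig } from "rolldown";

export default defineConfig([
  {
    input: "src/placeholder-configurator.ts",
    output: { format: "iife", file: "dist/placeholder-configurator.js" },
  },
  {
    input: "src/placeholder-configurator.ts",
    output: { format: "iife", file: "dist/placeholder-configurator.min.js", minify: true },
  },
]);
```

The IIFE format matters here: the output is a single self-executing file with no module system, which is exactly what Mintlify's injection point requires.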

If you use Mintlify and you have code examples with placeholders, follow these steps to use the script in your own docs project:

  1. Clone the docs-placeholder-configurator repository using git clone https://github.com/axiomhq/docs-placeholder-configurator.git.
  2. Adapt the source src/placeholder-configurator.ts to your needs. Change the PLACEHOLDERS object to match your own placeholder names and labels. The script supports two placeholder types: text and select.
  3. Install dependencies with npm install, and then run npm run build to build the bundles.
  4. Copy the production build dist/placeholder-configurator.min.js to your Mintlify documentation project.
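For step 2, the PLACEHOLDERS object might look something like this (the field names and values here are illustrative, not the package's actual schema):

```typescript
// Illustrative shape only; adapt names, labels, and options to your docs.
type PlaceholderConfig =
  | { type: "text"; label: string }
  | { type: "select"; label: string; options: string[] };

const PLACEHOLDERS: Record<string, PlaceholderConfig> = {
  API_TOKEN: { type: "text", label: "API token" },
  DATASET_NAME: { type: "text", label: "Dataset name" },
  AXIOM_DOMAIN: {
    type: "select",
    label: "Edge deployment domain",
    options: ["api.axiom.co", "api.eu.axiom.co"], // example values
  },
};
```

Text placeholders render as input fields; select placeholders, like the edge domain, render as dropdowns with a fixed set of options.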

What it looks like in practice

You can see the configurator in action at Methods or at Vector configuration. A human visiting either of those pages sees the configurator: input fields below each code block, placeholders highlighted and ready to fill in, values substituted live as you type.

The Markdown source at axiom.co/docs/send-data/methods.md tells a different story.

# Methods for sending data

> Explore the various methods for sending data to Axiom, from direct API calls and OpenTelemetry to platform integrations and logging libraries.

The easiest way to send your first event data to Axiom is with a direct HTTP request using a tool like `cURL`.

```shell  theme={null}
curl -X 'POST' 'https://AXIOM_DOMAIN/v1/ingest/DATASET_NAME' \
    -H 'Authorization: Bearer API_TOKEN' \
    -H 'Content-Type: application/x-ndjson' \
    -d '{ "http": { "request": { "method": "GET", "duration_ms": 231 }, "response": { "body": { "size": 3012 } } }, "url": { "path": "/download" } }'
```

<Info>
  Replace `AXIOM_DOMAIN` with the base domain of your edge deployment. For more information, see [Edge deployments](/reference/edge-deployments).

  Replace `API_TOKEN` with the Axiom API token you have generated. For added security, store the API token in an environment variable.

  Replace `DATASET_NAME` with the name of the Axiom dataset where you send your data.
</Info>

To send events continuously, Axiom supports a wide range of standard tools, libraries, and platform integrations.

What the human sees and what the machine sees are two different things, served from the same file. If you ask the Mintlify AI assistant about edge ingest, it reads the unmodified source and gives you generic, correct instructions. If you sit down to actually run the commands, the configurator is there to help you substitute your specific values before you copy anything.

What I learned

Building something useful at the intersection of documentation and code is genuinely more accessible than it was two years ago. The barrier isn't technical knowledge anymore, at least not in the way it used to be. It's clarity of thinking: knowing what problem you're solving, who you're solving it for, and what the constraints are. The code that implements the solution is something you can arrive at with enough patience and the right tools.

I found myself explaining the problem to an LLM, iterating on the approach, asking it to handle edge cases I hadn't anticipated, and arguing with it when it suggested solutions that didn't fit the constraints. That last part (arguing) was the important part. The LLM is very good at generating plausible implementations. It's less good at knowing which implementation is right for the situation. That judgment is still the job of the person building the thing.

As for support tickets: placeholder confusion has gone quiet. A small change, but a satisfying one.
