The Sortable interface: Teaching every column type to sort itself
Replacing a one-size-fits-all sort with column-aware algorithms delivered speedups ranging from 2x to 26x, without changing a single query.
Metrics are generally available
Hyper-cardinality metrics, unified with logs and traces, and fully queryable by AI agents through MCP and a dedicated metrics skill.
Stop guessing. Ship AI products with confidence
Axiom launches AI observability features purpose-built for generative AI development. Capture rich telemetry, track costs automatically, and debug complex workflows with our new TypeScript SDK and pre-built dashboards.
Data residency without compromise: introducing Axiom's new edge architecture
Axiom now runs on an edge-based architecture. Data is ingested and queried locally at your chosen edge deployment, while a single global control plane handles auth, billing, and routing. One login, one bill, and your data where it needs to be.
From cost center to strategic asset: How Monks transformed observability with Axiom
Global digital services company Monks cut observability costs by 40%, eliminated security blind spots, and unlocked AI readiness, guided by strategic technology advisor Three Tree Tech.
Find what's failing and why: Review and issues for AI capabilities
Review AI traces with your team. Unstructured feedback becomes categorized issues you can investigate and resolve.
Catch what tests miss: Online evaluations for AI capabilities
Score your AI capability's outputs on live production traffic. Run reference-free scorers as fire-and-forget, control cost with per-scorer sampling, and trace every score back to the request that produced it.
Teaching AI to speak Splunk, then proving it works
We built open-source agent skills for Splunk-to-Axiom migration. Then we built evals to measure them, so every change is tested before it ships.
Close the loop: User feedback for AI capabilities
Capture user ratings and comments on AI outputs, link them to traces, and see exactly what your capability did when users were unhappy. The missing signal for continuous improvement.
2025 recap: Reflections on building data infrastructure for the AI era
Building the foundations for AI engineering, complete observability, and intelligent investigation.
Introducing GenAI functions: Analyze AI conversations with purpose-built APL functions
GenAI functions bring purpose-built capabilities to APL for analyzing AI conversation data. Extract user prompts, calculate costs, analyze conversation flow, and understand tool usage without jumping through hoops to parse complex JSON structures.
Stop shipping on vibes: Offline evaluations for AI capabilities
Axiom’s AI engineering toolkit expands with offline evaluations. Test AI capabilities against curated collections, compare configurations with flags, and catch regressions before they reach production.
Introducing metrics: High-cardinality without the cost
Metrics complete Axiom’s observability story with purpose-built metrics storage that treats high cardinality as a design principle, not a limitation. Query metrics alongside logs and traces without switching tools or learning new query languages.
Convex + Axiom: Complete observability for reactive backends
Stream function executions and console logs from your Convex deployment to Axiom for powerful querying, visualization, and monitoring of your reactive backend.
Designing MCP servers for wide schemas and large result sets
Lessons in shaping MCP responses to be useful, predictable, and small by default.
Get started with Axiom
Learn how to start ingesting, streaming, and querying data in Axiom in less than 10 minutes.