
Testing & Evaluation

Browse 25 articles in Testing & Evaluation.

Testing & Evaluation Articles

25 articles · Page 1 of 3

Warm watercolor illustration of an engineer reviewing A/B test scorecards and conversation analytics at a rooftop workspace during golden hour
Testing & Evaluation · 12 min read

Every Conversation Is an Experiment You Didn't Run

Your agent already ran the A/B test you're scoping. Here's how to read the results in your logs with propensity matching, synthetic control, and diff-in-diff.

Watercolor illustration of an observation tower overlooking two parallel worlds, Blade Runner 2049 style in sage and olive tones
Testing & Evaluation · 8 min read

Is AI Better Than Your Humans? Score Both on One Rubric

Most teams can't say whether AI beats humans because they score them differently. One rubric, run on both, sliced by segment, gives you an honest answer.

Watercolor illustration of two figures walking through a warm corridor of looping paths, Her style in warm plum tones
Testing & Evaluation · 9 min read

Every Failed Call Is a Test Case You Haven't Written Yet

The gap between staging and production for AI agents is measured in surprise. Here's how to close the loop from live failure to regression gate.

Grid of test scenario cards with pass and fail indicators showing evaluation coverage distribution
Testing & Evaluation · 13 min read

How Much Testing Is Enough for Your AI Agent?

Code coverage doesn't apply to AI agents. Here's a framework for thinking about evaluation coverage: how many scenarios you need, what distribution to target, and how to know when you've tested enough.

A person standing before multiple transparent evaluation panels in a semicircle, each showing a different lens on the same conversation
Testing & Evaluation · 16 min read

Your LLM-as-Judge May Be Highly Biased

LLM-as-Judge has 12 documented biases. Here are 6 evaluation methods production teams actually use instead, with code examples and patterns.

Control room with green monitoring screens, one cracked display unnoticed in the center, Minority Report style
Testing & Evaluation · 14 min read

Is Monitoring Your AI Agent Actually Enough?

Research shows 83% of agent teams track capability metrics but only 30% evaluate real outcomes. Here's how to close the gap with multi-turn scenario testing.

Person examining a translucent board with connected note cards, verifying links between them
Testing & Evaluation · 16 min read

Memory Bugs Don't Crash. They Just Give Wrong Answers.

Memory bugs don't crash your agent. They just give subtly wrong answers using stale context. Here are 5 test patterns to catch them before customers do.

Dashboard showing split-screen comparison of offline test results versus live production scorecard trends for an AI agent
Testing & Evaluation · 18 min read

Online vs. Offline Evals: Close the Production Gap

89% of teams have observability but only 37% run online evals. Here's why that gap is where production failures hide, and how to close it with a practical online eval pipeline.

Open-source AI agent testing engine with conversation simulation and scorecard evaluation
Testing & Evaluation · 14 min read

We Open-Sourced Our AI Agent Testing Engine

chanl-eval is an open-source engine for stress-testing AI agents with simulated conversations, adaptive personas, and per-criteria scorecards. MIT licensed.

Abstract visualization of a signal gradually losing coherence as it passes through layered processing stages, with early stages showing clean waveforms and later stages showing scattered, fragmented patterns
Testing & Evaluation · 14 min read

Agent Drift: Why Your AI Gets Worse the Longer It Runs

AI agents silently degrade over long conversations. Research quantifies three types of drift and shows why point-in-time evals miss them entirely.

Watercolor illustration of a digital fortress under siege with abstract red and blue waves representing adversarial AI testing
Testing & Evaluation · 15 min read

NIST Red-Teamed 13 Frontier Models. All of Them Failed.

NIST ran 250K+ attacks against every frontier model. None survived. Here's what the results mean for teams shipping AI agents to production today.

Data visualization showing the gap between AI agent benchmark scores and production performance metrics
Testing & Evaluation · 13 min read

Your Agent Aced the Benchmark. Production Disagreed.

We scored 92% on GAIA. Production CSAT: 64%. Here's which AI agent benchmarks actually predict deployed performance, why most don't, and what to measure instead.


The Signal Briefing

One email a week. How leading CS, revenue, and AI teams are turning conversations into decisions. Benchmarks, playbooks, and what's working in production.

500+ CS and revenue leaders subscribed