Closing Line Value: The Real Edge Signal in Prediction Markets

May 5, 2026 · 13 min read

Most prediction market traders track win rate. They watch their P&L curve and try to read tea leaves out of a 30-trade sample. They're flying blind. Win rate is a noisy, lagging, outcome-dependent metric that takes hundreds or thousands of trades to converge to its true value — and even then, it tells you whether you got lucky, not whether you have edge.

The metric that actually answers "do I have edge?" is Closing Line Value (CLV). It's the canonical leading indicator that sharp sports bettors have used for decades, and it transfers cleanly to prediction markets like Polymarket and Kalshi. After running automated bots across 10 sports for a year, we now track CLV per sport, per edge bucket, per bot — and treat it as the primary signal for whether a strategy is working. P&L is the lagging confirmation. CLV is the early warning.

This post walks through what CLV is, why it works, how to compute it on prediction markets specifically (with their proxy pricing quirks), and how to use it to validate or kill a trading model before P&L tells you anything definitive.

What CLV Actually Measures

Closing Line Value is the difference between the price you got and the market's final price right before resolution (or the final tradeable price, in markets that resolve on outcome).

CLV (cents) = closing_price - your_entry_price       (if you bought YES)
CLV (cents) = your_entry_price - closing_price       (if you sold / bought NO)

If you bought YES at 60c and the line closed at 68c, your CLV is +8c. If you bought at 60c and the line closed at 55c, your CLV is −5c. Positive CLV means the market moved toward your position after you entered. Negative CLV means it moved away.
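The two formulas collapse into one helper — a minimal sketch, with prices in cents and `side` as recorded at entry:

```python
def clv_cents(entry_price_c, close_price_c, side):
    """CLV in cents: positive when the market moved toward the position.

    Prices are quoted in YES-price terms for both sides.
    """
    if side == "yes":
        return close_price_c - entry_price_c
    return entry_price_c - close_price_c

print(clv_cents(60, 68, "yes"))  # 8
print(clv_cents(60, 55, "yes"))  # -5
```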

Notice what's missing: the actual outcome. CLV doesn't care whether you won or lost. The closing line is the market's final, most-informed estimate of the true probability. If you consistently get prices better than that estimate, you have edge. The outcome is just one realization of a probability distribution — CLV measures the distribution itself.

Why CLV Is Superior to Win Rate

Three reasons. The first is variance. For a strategy with a true +8c edge, the win rate over 100 trades has a 95% confidence interval of roughly ±9.6 percentage points — you can post a 50% or a 70% win rate on the same underlying edge purely by luck. CLV averages converge much faster because every trade contributes a continuous-valued data point, not a binary 0/1.
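To see where that interval comes from: the binomial half-width at 95% is 1.96·√(p(1−p)/n). A quick check, assuming a true win probability around 60% (roughly what a +8c edge implies near even-money entries):

```python
import math

p, n = 0.60, 100  # assumed true win probability and trade count
half_width_pp = 1.96 * math.sqrt(p * (1 - p) / n) * 100
print(round(half_width_pp, 1))  # 9.6 percentage points
```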

The second is selection. Win rate looks fine when you're picking off slow money on heavy favorites: 75% WR on contracts averaging 80c entry sounds great, but you make 20c on wins and lose 80c on losses, so expected value is 0.75 × 20 − 0.25 × 80 = −5c per trade, net negative. CLV doesn't have this problem: it measures pricing accuracy independent of which side of the contract you're on.

The third is leading vs lagging. By the time win rate moves enough to reject the null hypothesis "this strategy is dead," you've usually been bleeding for 3-6 weeks. CLV degradation shows up in days. Compare these two scenarios from our live tennis trading data:

Period       Trades   Win Rate   Avg CLV (c)   P&L per trade
Mar 1-15       72       61%        +3.4c          +12c
Mar 16-31      84       59%        +0.8c          +9c
Apr 1-15       91       57%        −1.2c          +4c
Apr 16-30      78       52%        −3.1c          −6c

Win rate dropped 9 points across two months — statistically marginal, easy to dismiss as variance. CLV dropped from +3.4c to −3.1c, a 6.5c shift. That signal is unmistakable. It told us the model was being adversely selected six weeks before the P&L turned negative. With CLV monitoring we'd have retrained or pulled the strategy in mid-March instead of late April.

The Closing Price Problem on Prediction Markets

Sports betting CLV is well-defined: there's a closing line at game start, broker-aggregated. Prediction markets are messier because:

  1. There's no single closing timestamp — many contracts keep trading through the event itself.
  2. As the outcome becomes observable, prices pin toward 0c or 100c, reflecting settlement certainty rather than a probability estimate.
  3. Liquidity near resolution is often thin, so the literal last trade can be pure noise.

Pragmatic fix: define your "close" as the volume-weighted mid-price during the last 60 seconds before the contract becomes essentially binary (i.e., before the market knows the outcome). For a moneyline contract, that's typically 60-120 seconds before the final whistle. For a spread or total, it's whenever the score moves the line definitively.
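A sketch of that volume-weighted close — assuming trades arrive as `(timestamp_s, price_c, size)` tuples; the function name and shape are illustrative:

```python
def vwap_close_c(trades, window_s=60, t_end=None):
    """Volume-weighted price over the final window before t_end.

    trades: iterable of (timestamp_s, price_c, size) tuples.
    Returns None if no volume traded inside the window.
    """
    trades = list(trades)
    if t_end is None:
        t_end = max(t for t, _, _ in trades)
    window = [(p, s) for t, p, s in trades if t_end - window_s <= t <= t_end]
    volume = sum(s for _, s in window)
    if volume == 0:
        return None
    return sum(p * s for p, s in window) / volume

print(vwap_close_c([(0, 60, 10), (70, 70, 10), (130, 80, 30)]))  # 77.5
```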

Alternatively, capture the last in-band price — the last price at which the contract traded between, say, 5c and 95c. Once it leaves that band the market is making a settlement guess, not a probability estimate. We use this fallback for trades that resolve while the bot is offline, persisting candidate close prices to a small JSON file and consuming them on next-cycle settlement.
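The in-band fallback is even simpler — a minimal sketch, with the 5c/95c band as defaults:

```python
def last_in_band_price_c(prices_c, lo=5, hi=95):
    """Most recent traded price still inside the probability band.

    Prices outside [lo, hi] are treated as settlement guesses, not
    probability estimates. Returns None if nothing traded in-band.
    """
    for p in reversed(prices_c):
        if lo <= p <= hi:
            return p
    return None

print(last_in_band_price_c([62, 71, 88, 97, 99, 100]))  # 88
```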

Building a CLV Tracker (Working Code)

Here's the pattern we use in production. A signal is recorded when the bot enters; the close is recorded when the contract resolves; pairs are joined and CLV is computed.

import json
from pathlib import Path
from datetime import datetime, timezone

CLV_LOG = Path("clv_data.jsonl")

def record_signal(token_id, side, entry_price_c, sport, edge_c, bot_name):
    """Called at trade entry."""
    rec = {
        "type": "signal",
        "ts": datetime.now(timezone.utc).isoformat(),
        "token_id": token_id,
        "side": side,                # "yes" or "no"
        "entry_price_c": entry_price_c,
        "sport": sport,
        "edge_c": edge_c,
        "bot": bot_name,
    }
    with CLV_LOG.open("a") as f:
        f.write(json.dumps(rec) + "\n")

def record_close(token_id, close_price_c):
    """Called when the market resolves (or hits the in-band cutoff)."""
    rec = {
        "type": "close",
        "ts": datetime.now(timezone.utc).isoformat(),
        "token_id": token_id,
        "close_price_c": close_price_c,
    }
    with CLV_LOG.open("a") as f:
        f.write(json.dumps(rec) + "\n")

The signal/close pair is keyed on token_id. When you compute CLV later, walk the log, build a dict of last-seen closes per token, then compute CLV for each signal whose token has a close:

def compute_clv_records():
    closes = {}
    signals = []
    with CLV_LOG.open() as f:
        for line in f:
            rec = json.loads(line)
            if rec["type"] == "close":
                closes[rec["token_id"]] = rec["close_price_c"]
            else:
                signals.append(rec)

    out = []
    for s in signals:
        if s["token_id"] not in closes:
            continue
        close_c = closes[s["token_id"]]
        if s["side"] == "yes":
            clv = close_c - s["entry_price_c"]
        else:
            clv = s["entry_price_c"] - close_c
        out.append({**s, "close_c": close_c, "clv_c": clv})
    return out

Aggregate by sport, by edge bucket, by bot, and by week. The dashboard you build from this is the most important monitoring tool in your stack.
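A minimal aggregation over the output of compute_clv_records() — the key fields are whatever you logged at entry; this sketch buckets by sport only:

```python
from collections import defaultdict

def aggregate_clv(records, key_fields=("sport",)):
    """Average CLV (cents) and sample size per key tuple."""
    acc = defaultdict(lambda: [0.0, 0])
    for r in records:
        key = tuple(r[f] for f in key_fields)
        acc[key][0] += r["clv_c"]
        acc[key][1] += 1
    return {k: (total / n, n) for k, (total, n) in acc.items()}

recs = [
    {"sport": "tennis", "clv_c": 2.0},
    {"sport": "tennis", "clv_c": 4.0},
    {"sport": "mlb", "clv_c": -1.0},
]
print(aggregate_clv(recs))  # {('tennis',): (3.0, 2), ('mlb',): (-1.0, 1)}
```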

Reading the Output: What CLV Tells You

A few rules of thumb after running this in production for a year:

  1. Sustained positive after-fee CLV means the edge is real, whatever the recent P&L says.
  2. Negative average CLV across 30+ trades in a bucket means you're being adversely selected — retrain or pause that bucket.
  3. The 7d average falling below the 30d average is the earliest degradation signal you'll get; don't wait for P&L confirmation.

Production benchmark: ZenHodl's automated bots publish per-sport CLV alongside win rate and P&L. The 7d-vs-30d CLV trend is the first thing we check on each strategy each morning — faster than reading P&L deltas, less noisy than win rate.
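That 7d-vs-30d comparison is a few lines over the same records — a sketch assuming the `ts` field written by the logger above (ISO 8601 with timezone):

```python
from datetime import datetime, timedelta, timezone

def clv_trend(records, now=None):
    """7d and 30d average CLV in cents; 7d well below 30d flags degradation."""
    now = now or datetime.now(timezone.utc)

    def avg(days):
        cutoff = now - timedelta(days=days)
        vals = [r["clv_c"] for r in records
                if datetime.fromisoformat(r["ts"]) >= cutoff]
        return sum(vals) / len(vals) if vals else None

    return avg(7), avg(30)
```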

Using CLV as a Pre-Trade Gate

Once you have a few weeks of CLV history per (sport, edge bucket), you can use it as a filter on future trades. Aggregate your historical CLV by bucket, refresh nightly, and reject signals where the bucket has consistently lost market value to the close.

def clv_gate(sport, edge_c, history):
    """Reject if this sport-edge bucket's historical CLV is negative."""
    bucket = bucket_for(edge_c)               # e.g., "5-10c", "10-15c"
    key = (sport, bucket)
    if key not in history:
        return True, "insufficient_data"      # fail open, allow trade
    avg_clv, n = history[key]
    if n < 30:
        return True, "low_n"
    if avg_clv < -1.0:                        # losing >1c to close
        return False, f"clv_gate_block_{avg_clv:.1f}c"
    return True, "clv_gate_pass"

Run this in shadow mode for a week first — log decisions but don't actually block trades — so you can verify the gate isn't rejecting profitable buckets due to a stale snapshot. Then flip to enforce mode. This is the same pattern as a circuit breaker, but tuned at the sport-edge level rather than the entire-sport level.

Failure mode: if your CLV history is built from too few trades or a stale window, the gate becomes noise. Require at least 30 trades per bucket, refresh daily, and always fail open on missing buckets. A gate that overfires is worse than no gate.
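One way to wire the shadow/enforce switch — a sketch; `apply_gate` and the logger name are illustrative, and the gate function is injected so the same wrapper serves both modes:

```python
import logging

log = logging.getLogger("clv_gate")

def apply_gate(gate_fn, sport, edge_c, history, enforce=False):
    """Shadow mode (enforce=False) logs the decision but always allows."""
    allowed, reason = gate_fn(sport, edge_c, history)
    log.info("clv_gate sport=%s edge=%sc allowed=%s reason=%s",
             sport, edge_c, allowed, reason)
    return allowed if enforce else True
```

Flipping `enforce=True` after a week of clean shadow logs is the entire rollout.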

CLV vs ROI: Which Is Canonical?

You'll occasionally see traders argue that ROI is the only metric that matters because it's the cash you take home. They're not wrong — but ROI is the output, not the diagnostic. CLV is upstream of ROI: positive CLV with reasonable position sizing produces positive ROI over a sufficient sample. Negative CLV will eventually produce negative ROI, even if a 50-trade hot streak hides it.

If your CLV is positive and your ROI is negative, the problem is execution, sizing, or fees — not your edge. If your CLV is negative and your ROI is positive, you're on a hot streak that will mean-revert. Manage by CLV; report by ROI.

The Adverse Selection Lens

The deepest reason CLV works: it measures whether the market agrees with you after digesting the same information you used. Sharp traders move lines; slow money does not. If your entry routinely beats the close, you're consistently faster or smarter than the marginal price-setter. If it routinely loses to the close, you're slower or dumber.

Your model can be brilliant in expectation and still lose money on traded games, because traded games are not a random sample — they're the games where your model disagreed with the market. In those specific disagreements, the market might be right more often than your model. CLV catches this immediately. Win rate doesn't catch it for months.

Common CLV Pitfalls

  1. Treating thin-liquidity closes as truth. If the last trade was 10 shares at a wide spread, that's not a closing price — it's noise. Use volume-weighted mid over a window, or fall back to last in-band price.
  2. Not handling the side correctly. CLV for a NO position is the negative of CLV for the equivalent YES position at the same complementary price. Easy to flip a sign and mislead yourself.
  3. Including post-resolution prices. Once the market knows the outcome, prices are 0 or 100. Cap your "close" at the moment the outcome is observable.
  4. Ignoring fee adjustment. A 2c CLV is theoretical — if your effective taker fee is 2c, real CLV is zero. Always report after-fee CLV alongside raw CLV.
  5. Single-bucket overfit. Slicing CLV by sport × edge × period × weekday gives you 200 buckets of 5 trades each, all within noise. Aggregate to the level your sample size can support.

Putting It Together

The full CLV-driven workflow looks like this:

  1. Log a signal record at every trade entry (token, side, entry price, sport, edge, bot)
  2. Log a close record either when the contract resolves or at the last in-band price
  3. Aggregate nightly into per-(sport, edge-bucket) CLV averages with sample size
  4. Surface 7d, 30d, and trend metrics on a dashboard
  5. Optionally use bucketed CLV as a pre-trade gate, in shadow mode first
  6. Treat 7d < 30d CLV degradation as the leading signal to retrain or pause
  7. Report ROI as the bottom-line outcome, but manage by CLV

This is a 200-line implementation that completely changes how you reason about your edge. We resisted it for months because win rate felt sufficient. It wasn't. Once we built the CLV pipeline, the question stopped being "is this strategy working?" and became "is the CLV trend stable?" — a much cleaner question with a much clearer answer.

See live CLV across 10 sports. Calibrated probabilities. Production-grade trading infrastructure.

Try ZenHodl Free →

Further Reading