
One Threshold to Rule Them All: Cracking the FPL Price Algorithm (Part 3 of 7)

olbaud
February 23, 2026
FPL Analysis · Price Algorithm


Six months in, F1 of 0.55. Losing to a spreadsheet.

Nothing was wrong with the model. Something was wrong with how I was thinking about the problem.

I’d been asking “what predicts rises?” The better question was: “what rules does this code enforce?”

The FPL devs aren’t trying to be mysterious. They’re running a game for 11 million people, and they need prices to move with demand without letting one viral tweet crash the market. So I stopped thinking like a data scientist and started thinking like the person who wrote the code. What constraints would I build in? What would the code actually look like?

I tested six hypotheses. Three survived. The algorithm turned out to be simpler than I expected, and the simplicity was the insight I kept missing.


The counter reset

This one cost me months, because I had it backwards.

When a player’s price changes, the algorithm resets its cumulative transfer counter to zero. Not a hard lockout, just a reset. But a reset creates its own short-term suppression: the counter needs time to rebuild before it can cross the threshold again.
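In code terms, the reset rule is tiny. A minimal sketch of the behaviour described above (hypothetical names; we don't have the real FPL source):

```python
def update_counter(counter, daily_net, price_changed):
    """Sketch of the observed reset rule: the cumulative transfer
    counter zeroes out whenever the player's price moves, then
    rebuilds. An illustration, not the actual FPL implementation."""
    if price_changed:
        return 0  # a reset, not a lockout: rebuilding starts immediately
    return counter + daily_net
```

At, say, 60k net transfers a day, a freshly reset counter takes three to four days to rebuild toward a ~200k threshold, which matches the 2-3 day gap between consecutive rises described below.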

I originally thought there was a hard suppression rule, some coded block that prevented rises for days after a price change. Tested it across all four seasons. Wrong.

Rises happen on every day after a price change. Day 1: rise rate is 1.04%, about half the baseline, because the counter needs a day to start rebuilding. Day 3: 2.52%, already above baseline. Days 4-9: actually elevated, because recently-changed players tend to be high-momentum ones.

The real insight came from splitting by direction. After a rise, another rise is more likely, with a day-1 rate of 2.01% climbing to 6% by day 5. After a fall, a rise is genuinely rare: 0% on day 1, under 0.5% through day 5. Opposite signals, and lumping them together hid the pattern for months.

Rise rate after a price change, split by direction — rises beget rises, falls suppress them

Multi-rise gameweeks are common: 108 doubles, 11 triples, one quadruple across four seasons. Players can and do rise on consecutive days if the pressure is extreme enough.

This is why you see a player rise, transfers keep pouring in, and nothing happens for a few days. The counter reset to zero and needs to rebuild. If the pressure stays high, they rise again in 2-3 days. If transfers dry up while the counter’s rebuilding, the moment’s passed.

Getting this wrong cost me months. The original model had a single feature: “days since last price change.” It treated post-rise and post-fall days identically. When I split it into three features (days since last rise, days since last fall, and whether the previous change was a rise), it produced one of the single largest feature-driven improvements in the project. days_since_last_rise became the most important feature in the entire model, overtaking cumulative transfer pressure. The direction of the last price change matters more than how much transfer activity is building.

One misconception, corrected, was worth more than dozens of new features.
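The three-way feature split is simple to reproduce from a player's price-change history. A sketch under assumed data shapes (the real pipeline isn't shown in this post):

```python
def reset_features(today, changes):
    """Derive the three features described above from a price-change
    history. `changes` is a list of (day, direction) tuples with
    direction in {"rise", "fall"} and days as integers. Feature names
    mirror the post; the input format is an assumption."""
    last_rise = max((d for d, kind in changes if kind == "rise"), default=None)
    last_fall = max((d for d, kind in changes if kind == "fall"), default=None)
    last_change = max(changes, key=lambda c: c[0], default=None)
    return {
        "days_since_last_rise": today - last_rise if last_rise is not None else None,
        "days_since_last_fall": today - last_fall if last_fall is not None else None,
        "prev_change_was_rise": last_change[1] == "rise" if last_change else None,
    }
```

The old single feature, "days since last price change", collapses the first two columns into one and throws the third away, which is exactly the information the model was missing.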

Technical Sidebar: Counter Reset Mechanics

– Post-change rise rates: Day 1: 1.04%, Day 2: 1.61%, Day 3: 2.52%, Days 4-9: elevated (high-momentum players)
– After rise: Day 1 rate 2.01%, climbing to 6% by Day 5
– After fall: Day 1 rate 0%, under 0.5% through Day 5
– Multi-rise events: 108 doubles, 11 triples, 1 quadruple across 4 seasons
– Feature split (days_since_last_rise, days_since_last_fall, prev_change_was_rise) was one of the largest single feature-driven F1 improvements
– days_since_last_rise became the #1 most important feature in the model


The hard floor

Below 1% ownership: zero rises across four seasons and 532,000 player-days, including 95 days where those players got over 20,000 net transfers. Still zero.

That’s not a pattern. That’s a rule.

Rise count by ownership bucket — zero rises below 1% ownership across 4 seasons

The lowest ownership at which a rise has ever happened is 1.2% (2022-23). In 2025-26, the floor has risen to 1.7%. The Clopper-Pearson 95% CI for zero rises in 532,000 player-days is [0.0%, 0.0007%], effectively impossible rather than just unlikely. Even looking only at the 95 high-transfer days: 0/95, CI [0.0%, 3.8%]. Zero even when demand was substantial.
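The zero-success Clopper-Pearson bound has a closed form, so both intervals quoted above are easy to check yourself:

```python
def cp_upper_when_zero(n, alpha=0.05):
    """Exact Clopper-Pearson upper confidence bound on a proportion
    when 0 successes are observed in n trials. For x = 0 the bound
    reduces to the closed form 1 - (alpha/2) ** (1/n)."""
    return 1.0 - (alpha / 2) ** (1.0 / n)

# All sub-1%-ownership player-days: upper bound ~0.0007%
print(f"{cp_upper_when_zero(532_000):.6%}")
# High-transfer (>20k net) days only: upper bound ~3.8%
print(f"{cp_upper_when_zero(95):.1%}")
```

No lower bound is needed: with zero observed rises it is 0% by construction.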

Above the floor, the gradient is real but gradual. Rise rates at >5k net transfers by ownership bucket: 0-1%: 0%, 1-2%: 1.3%, 3-5%: 5.5%, 5-10%: 6.6%, 20+%: 10.5%. A discrete protection rule at the bottom, then a smooth gradient above it.

Think about what this means in practice. 20,000 net transfers for a 0.8% owned player who just scored a brace on Match of the Day? Doesn’t matter. The algorithm won’t even look at them until enough managers own them for a price move to be meaningful. The 5% ownership filter in the production model handles this, and adding explicit log_ownership or pct_of_base features produced no improvement. XGBoost captures the gradient natively.


The wildcard question

All of this assumes a transfer is a transfer. But is it?

The community has debated this for years: do wildcard and free hit transfers count towards price changes? Existing price prediction sites already account for this in their models. We wanted to quantify the mechanism with data.

The problem: the API doesn’t label transfers by type. transfers_in_event is just a number. 50,000 transfers could be 50,000 managers making one move or 3,000 managers on wildcards making 15 each, and from the outside they look identical.

Indirect test first. During wildcard-heavy windows (GW1-2, the second wildcard window) there are 29% fewer rises per million transfers than normal. Significant (p=0.04), but confounded by the fact that those are unusual periods for other reasons.

Wildcard-heavy windows vs normal: fewer rises per million transfers

Then we found a way to test it directly.

fplcore.com tracks its users: their teams, leagues, transfers, and crucially, when they activate chips. This gave us chip-tagged transfer data for the 2025-26 season through GW27. Not the full 11 million managers (the sample skews more engaged than average), but large enough to compute what share of each player’s daily transfers came from managers on active wildcards or free hits.

Take two players in the same gameweek with similar total transfer volume. One has 30% chip-driven transfers. The other has 5%. Do they rise at the same rate?

No.

Players with more chip-driven transfers rise significantly less often. Wildcard share: p<0.0002. Free hit share: p<0.0002. The direction holds for high-skill managers too (top quartile by total points: r=-0.10, p<0.001), though the stratified effect size is smaller.

Then scraping the transfer histories of 7.2 million FPL managers cracked it open.

Rather than just knowing the share of chip-driven transfers, we could test the counting mechanism directly. The best-supported explanation: the algorithm counts unique managers, not total transfers. Cap of one per manager per day. A wildcard manager making 15 transfers counts the same as a normal manager making one. (Inferred from observed correlations at population scale; we don’t have access to the source code.)

At population scale that changes everything. Even during the heaviest wildcard windows, chip-activated managers contribute only about 1.4% of the total counted pressure, because however many transfers they make, they’re still just one person.
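A toy version of the hypothesised cap-of-one counting makes the arithmetic concrete. This is inferred behaviour, not confirmed code, and all names here are made up:

```python
def counted_pressure(daily_moves):
    """daily_moves: list of (manager_id, player_in) transfer events
    for one day. Under the hypothesised cap-of-one rule, each manager
    contributes at most one unit of counted pressure per day, so a
    15-move wildcard weighs the same as a single ordinary transfer.
    A deliberate simplification of whatever the real rule is."""
    counted, seen = {}, set()
    for manager_id, player_in in daily_moves:
        if manager_id in seen:
            continue  # further moves by the same manager are not counted
        seen.add(manager_id)
        counted[player_in] = counted.get(player_in, 0) + 1
    return counted
```

Under this rule, 3,000 wildcarders making 15 moves each register as 3,000 counted units rather than 45,000, which is why chip windows contribute so little population-scale pressure.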

Adding chip-share features improved predictions consistently across more than 30 formal experiment configurations on 2025-26 data. Real signal.

During wildcard-heavy periods, raw transfer numbers lie. 100,000 transfers during wildcard week is less price pressure than 100,000 in a normal week. The algorithm knows this. Now you do too.

Technical Sidebar: Wildcard/Free Hit Testing

– Indirect test (A2): WC-window days show 1.18 rises/million transfers vs 1.67 normal. Permutation p=0.0423, Mann-Whitney p=0.0002. Confounded by period effects.
– Direct test: Chip-tagged transfer data from fplcore.com user base (2025-26 season). Matched within same GW and transfer-volume bin.
– WC share high-vs-low rise rate delta: -0.0126, permutation p<0.0002
– FH share high-vs-low rise rate delta: -0.0275, permutation p<0.0002
– High-skill subsample: r=-0.10, p<0.001. Stratified results methodology-dependent (p=0.04-0.55)
– Model impact: 33 experiment configs across 6 scripts. Consistent F1 improvement on 2025-26 data. F1 in low-chip vs high-chip: 0.89, but single season, ~56 rises in test set, ±0.30 std. Direction real, magnitude noisy.
– cap=1 mechanism: unique-manager count r=0.4830 vs raw transfer count r=0.4778. Normal-only unique count r=0.4955 (best single predictor). WC contribution at population scale: 1.4%. Mann-Whitney p=1e-39.
– Caveat: Single-season result from sampled proxy. Sample skews engaged. Chip data covers 1 of 4 seasons.
– Scripts: analysis/wildcard_freehit_analysis.py, analysis/wildcard_direct_test.py, analysis/chip_adjusted_rise_experiment.py, analysis/chip_mechanism_5m.py, analysis/build_unique_manager_features_7m.py, analysis/high_skill_wc_test.py


Putting it together

Three rules. One threshold.

Threshold: ~200,000-240,000 cumulative net transfers. Fixed for all players.

The median cumulative pressure at the moment of rise is around 262,000, based on Supabase ground-truth from the 2025-26 season.

Two conditions, both required:

The two conditions for a price rise

1. Cumulative transfer pressure crosses the fixed threshold (~200-240k)
2. Daily net transfers above roughly 30,000-60,000, because there needs to be active demand today, not just residual pressure

Both. Not either. Massive cumulative pressure from last week but only 5,000 transfers today? No rise. 80,000 transfers today but the counter just reset after a price change? No rise.
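As a filter, the two-condition rule is a one-liner. The constants below sit in the middle of the quoted ranges and are illustrative, not the algorithm's actual values:

```python
def could_rise(cumulative_net, daily_net,
               threshold=220_000, daily_floor=45_000):
    """Both conditions must hold: built-up pressure AND live demand.
    220k and 45k are midpoints of the ~200-240k and ~30-60k ranges
    described in the text, not values read out of the algorithm."""
    return cumulative_net >= threshold and daily_net >= daily_floor

assert could_rise(262_000, 60_000)      # typical pre-rise state
assert not could_rise(500_000, 5_000)   # stale pressure, no demand today
assert not could_rise(30_000, 80_000)   # big day, but counter just reset
```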

Threshold scatter — non-rises cluster below the threshold line, rises spread around and above it

The formula works better as a filter than a precise cutoff. 83% of high-activity non-rise days sit below the threshold, correctly ruled out. The scatter on the rise side is real, because this is a central tendency rather than a clean line.

Price effect? Negligible. Standardised beta of -0.005. A £4.5m player and a £12.0m player with the same ownership face the same threshold. Price matters for falls (Part 6), but for rises the algorithm is basically blind to it.

Technical Sidebar: Threshold Calibration

– Method: for each rise in 2025-26, extract Supabase ground-truth cumulative_ent and ownership% on the day before the price change
– Threshold: fixed at ~200-240k cumulative net transfers (median pre-rise: ~262k)
– Two necessary conditions: cumulative >= threshold AND daily_net >= 30-60k (median 60k for true rises vs 7k for false positives)
– Explicit threshold_ratio feature: no measurable improvement (+0.000 F1 in ablation)
– XGBoost already learns the split pattern implicitly from cumulative_decayed + ownership_percent


Where I went wrong

I tested six hypotheses. Three survived. The other three taught me more about my own biases than about the algorithm.

The market floor. I thought I’d found a circuit breaker at 1.1 million total daily transfers, below which prices barely moved. The chart looked compelling. But when I controlled for individual player transfer volume, the effect vanished (Cochran-Mantel-Haenszel p=0.51). A player with 60k net transfers on a quiet 500k market day rises at the same rate as one with 60k on a busy 3M day. Thin markets don’t suppress prices; they just produce fewer players with enough individual pressure to cross the threshold. I’d mistaken an artefact of aggregation for a coded rule.

Ownership scaling. I jumped from “ownership matters” (Part 2) to “higher-owned players must need more cumulative transfers to rise.” The bar chart showed a nice upward gradient from low to high ownership buckets, and I ran with it. Then I tested the slope properly: p=0.147, R²=0.01. Not significant. Higher-owned players naturally accumulate more transfers before they happen to cross the fixed line, not because the line is higher for them. The threshold is the same for everyone. I saw the pattern I wanted to see.

The volatility filter. I thought the algorithm discounted extreme transfer spikes, with a logistic regression coefficient of -0.26 that looked like real signal. Then I added cumulative transfer pressure as a control variable, and the coefficient flipped to +0.02. Textbook confounding. What’s actually happening is simpler: one-day spikes don’t sustain (only 54% maintain half their volume the next day), so they decay away before crossing the ~200k threshold. Removing the volatility_ratio feature from the model changed F1 from 0.7299 to 0.7310. There’s no volatility filter. Spikes just don’t last long enough to matter.

The decay rate was a subtler overclaim. I had 122,000 records of the algorithm’s internal cumulative state from Supabase, fitted an exponential decay, and got 0.85 per day. I called it the algorithm’s actual decay rate. But cumulative_ent is a simple running sum with no decay at all (best correlation at decay=1.0, r=0.99). The counter doesn’t decay; it resets on every price change, and the reset does the work I’d attributed to decay. Using 0.85 as a feature engineering choice gave the single biggest F1 improvement in the project, because it compresses the range and emphasises recent transfers. But I can’t claim I discovered the algorithm’s decay rate. I fitted a useful approximation and presented it as ground truth.
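As a feature, the 0.85 transform is nothing more than an exponentially decayed running sum, distinct from the undecayed, reset-on-change counter the algorithm appears to keep:

```python
def decayed_cumsum(daily_net, decay=0.85):
    """Exponentially decayed running sum of daily net transfers.
    A feature-engineering choice, not the algorithm's real counter:
    the ground-truth counter looks like a plain sum (decay = 1.0)
    that resets to zero on every price change."""
    total, out = 0.0, []
    for x in daily_net:
        total = total * decay + x
        out.append(total)
    return out
```

Each day's contribution is worth 15% less the next day, which compresses the feature's range and weights recent transfers, the two effects credited with the F1 gain.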

I also misread a quote from Ragabolly at LiveFPL. He’d said “one fixed threshold that everyone has to cross.” Fixed. The opposite of what I was arguing. I read his words through the lens of what I already believed, and found confirmation in a direct contradiction.

The common thread is confirmation bias. I built narratives, then interpreted evidence to fit them. The model didn’t care; XGBoost was learning the right patterns regardless of my bad explanations. But the explanations needed fixing, and they only got fixed because people took the time to prove me wrong after I published the first version of this piece. I’m grateful for that.


What’s next

A fixed threshold around 200,000 cumulative transfers, a counter that resets on every price change, and an algorithm that counts unique managers rather than raw transfers. Three real findings out of six attempts, and an algorithm simpler than I wanted it to be.

But knowing the rules and predicting the outcome are different things. I still needed to build something that worked in real time, on live data, before the algorithm ran. That meant turning all of this into a model: eleven versions, more dead ends than I’d care to admit, and one number that kept not being good enough.

Next: Part 4 — “Teaching a Machine to See the Future”


This is Part 3 of a 7-part series about reverse-engineering the FPL price change algorithm. The research behind this series powers fplcore.com.

Tags: Algorithm, Data Science, FPL, Price Changes, Series