Proprietary TrueEdge™ Scoring Technology

See the Data Behind the Rankings

This is what TryoutEngine actually produces — not averages on a spreadsheet, but statistically validated rankings with confidence intervals, tier analysis, and evaluator bias correction. Synthetic demo data shown.

Player Rankings with Confidence Intervals
Each bar shows a player's TrueEdge score. The error bars show the 95% confidence interval: where two players' intervals overlap, the placement is borderline and flagged for review. This is how you answer "why not my kid?" with data.
Players Ranked
48
U14 Bantam division, 4 evaluators
Average CI Width
±6.2
Tight intervals = reliable rankings
Borderline Players
5
Flagged for manual review at tier cutlines
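The overlap check described above can be sketched in a few lines. This is an illustrative sketch, not the proprietary TrueEdge implementation: it assumes each player's interval comes from a normal approximation over their evaluator scores, and the function names and sample scores are hypothetical.

```python
import statistics

def confidence_interval(scores, z=1.96):
    """95% CI for a player's mean score across evaluators (normal approximation)."""
    mean = statistics.mean(scores)
    half = z * statistics.stdev(scores) / len(scores) ** 0.5
    return mean - half, mean + half

def overlapping(ci_a, ci_b):
    """Two players are 'borderline' when their intervals overlap."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

# Hypothetical scores from 4 evaluators for two adjacently ranked players
p1 = [7.0, 6.5, 7.5, 7.0]
p2 = [6.8, 6.0, 7.2, 6.4]
ci1, ci2 = confidence_interval(p1), confidence_interval(p2)
print(overlapping(ci1, ci2))  # True: placement is borderline, flag for review
```

When intervals do not overlap, the ranking between the two players is statistically defensible; when they do, the system escalates to manual review rather than pretending the data settles it.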
Natural Break Tier Analysis
Instead of drawing an arbitrary line between A-team and B-team, the system identifies natural breaks in the scoring distribution. Colored bands show where the data clusters — making tier assignments data-driven, not political.
Tier 1 (A Team)
Tier 2 (B1)
Tier 3 (B2)
Tier 4 (C)
Natural Break
Tiers Identified
4
Automatic cluster detection
Largest Gap
8.3 pts
Between Tier 1 and Tier 2 — clear separation
Tightest Gap
3.1 pts
Tier 3/4 boundary — flagged for review
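One simple way to find natural breaks is to cut the sorted score list at its largest gaps. The sketch below is a gap-based stand-in for the cluster detection described above (the production method is proprietary); the function name and scores are illustrative.

```python
def natural_breaks(scores, n_tiers):
    """Split sorted scores at the (n_tiers - 1) largest gaps between
    adjacent scores -- a simple gap-based proxy for cluster detection."""
    ranked = sorted(scores, reverse=True)
    gaps = [(ranked[i] - ranked[i + 1], i) for i in range(len(ranked) - 1)]
    cut_idx = sorted(i for _, i in sorted(gaps, reverse=True)[:n_tiers - 1])
    tiers, start = [], 0
    for i in cut_idx:
        tiers.append(ranked[start:i + 1])
        start = i + 1
    tiers.append(ranked[start:])
    return tiers

# Hypothetical TrueEdge scores for 12 players
scores = [92, 90, 89, 80, 78, 77, 76, 70, 69, 68, 60, 58]
for tier in natural_breaks(scores, 4):
    print(tier)
```

On this sample the cuts land at the 9-, 8-, and 6-point gaps, so the tier lines fall where the data actually separates instead of at a fixed roster size.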
Evaluator Bias Correction
Left: Raw scores from 4 evaluators. Evaluator C scores tough (avg 4.1) while Evaluator A scores easy (avg 6.8). A simple average would penalize every player Evaluator C watched.
Right: After TrueEdge z-score normalization, all evaluators are calibrated to the same scale.
Before: Raw Evaluator Scores
After: TrueEdge™ Calibrated
Evaluator Spread (Before)
2.7 pts
Avg 4.1 to 6.8 — massive inconsistency
Evaluator Spread (After)
0.3 pts
Calibrated to same scale automatically
Rankings Changed
31%
15 of 48 players moved position after correction
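The calibration above is standard z-score normalization: each evaluator's scores are re-expressed in standard deviations from that evaluator's own mean, so a tough grader and an easy grader land on the same scale. A minimal sketch, with hypothetical raw scores (the production TrueEdge pipeline may add further adjustments):

```python
import statistics

def calibrate(evaluator_scores):
    """Z-score normalize each evaluator's scores: subtract that evaluator's
    mean and divide by that evaluator's standard deviation."""
    calibrated = {}
    for evaluator, scores in evaluator_scores.items():
        mu = statistics.mean(scores)
        sigma = statistics.stdev(scores)
        calibrated[evaluator] = [(s - mu) / sigma for s in scores]
    return calibrated

# Hypothetical raw scores: Evaluator A grades easy, Evaluator C grades tough
raw = {
    "A": [6.8, 7.8, 5.8],  # avg 6.8
    "C": [4.1, 5.1, 3.1],  # avg 4.1
}
z = calibrate(raw)
# After calibration, both evaluators' best, middle, and worst scores
# map to the same z-scores, so neither grader's scale distorts the ranking.
```

This is why rankings move after correction: a player who drew the tough evaluator is no longer compared on raw points against a player who drew the easy one.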