Traditional betting models predict from a god's-eye view: complete game state, aggregate team statistics. But players don't have a god's-eye view. Each player decides based on their subjective information state: what they see, what they know, what they believe about other players' beliefs. Game outcomes emerge from the interaction of these distributed subjective models, not from aggregate patterns. So: build player-specific decision prediction from each player's POV across historical situations, compose these subjective probabilities, and predict outcomes better than the market without needing a speed advantage. The edge comes from modeling intersubjective coordination at the cognitive level: what each player thinks is happening, not what objectively happens.
Games are not played from an objective view - they're played from distributed subjective views.
def predict_outcome(objective_game_state, team_statistics):
    """
    Predict from god's eye view:
    - Complete information about all players
    - Aggregate team patterns
    - Historical success rates in similar situations
    """
    return aggregate_probability_from_complete_information
Problem: No player has complete information. The QB doesn't know what the defender will do. The defender doesn't know the QB's read. Each operates on partial information and on beliefs about the others' information.
def predict_outcome(objective_game_state):
    """
    Predict from composed subjective views:
    - What each player observes from their position
    - Their historical decisions in similar info states
    - Interaction of their probabilistic beliefs
    """
    player_decisions = []
    for player in all_players:
        # Extract what THIS PLAYER can observe
        player_info_state = extract_observable_information(
            game_state=objective_game_state,
            player_position=player.position,
            player_sight_lines=player.field_of_view,
            player_knowledge=player.accumulated_info,
        )
        # Predict decision from THEIR perspective
        decision_probability = predict_from_player_POV(
            info_state=player_info_state,
            player_history=player.past_decisions_in_similar_POV_states,
            player_cognitive_model=player.decision_patterns,
        )
        player_decisions.append(decision_probability)
    # Outcome emerges from interaction of subjective beliefs
    game_outcome = compose_subjective_probabilities(player_decisions)
    return game_outcome
Key difference: Model what each player thinks is happening based on their information, not what objectively is happening with complete information.
Objective game state: Complete information, god’s eye view
Example: Football play
Objective_state = {
    offense_formation: "11 personnel, shotgun",
    defense_alignment: "cover 2 shell, 5 man box",
    receiver_routes: [go, slant, out, cross],
    QB_target: receiver_A,
    defender_assignment: zone_covering_receiver_A,
    down_distance: "3rd and 6",
    time_remaining: "2:47 Q4",
    score: "down 3"
}
Player subjective information states: Partial information, their perspective
QB information state:
QB_observes = {
    pre_snap_read: "cover 2 shell" (from safety alignment),
    receiver_A_route: "slant" (knows the play call),
    protection: "5 man protection, expect 4 rushers",
    time_to_throw: ~2.5 seconds (mental clock),
    down_distance: "3rd and 6" (aware),
    game_situation: "need 1st down, trailing"
    # Does NOT observe:
    # - Exact defender drop (won't know until post-snap)
    # - Whether receiver will get separation
    # - If protection will hold
    # - Defender's actual intent/assignment
}
Defender information state:
Defender_observes = {
    offense_formation: "11 personnel, shotgun",
    receiver_A_alignment: "slot left",
    my_assignment: "drop to zone, cover slants",
    QB_pre_snap_look: "staring at receiver A",
    game_situation: "3rd and 6, trailing"
    # Does NOT observe:
    # - Actual route receiver will run (until it breaks)
    # - QB's intended target (until the throw)
    # - Other defenders' exact positioning
    # - QB's decision process
}
Game outcome = interaction of what QB thinks based on his info and what defender thinks based on his info
Not = objective outcome from complete information
Traditional statistics: Aggregate over all situations
Player_X pass completion: 65.3%
Player_X yards per attempt: 7.2
Player_X TD:INT ratio: 2.1
Player-POV statistics: Conditioned on their subjective information state
Player_X_decision_model = {
    # When QB observes a specific pre-snap read
    "When_QB_sees_cover_2": {
        "throws_to_slot": 0.42,  # 42% of the time in this POV state
        "checks_to_RB": 0.31,
        "throws_outside": 0.18,
        "scrambles": 0.09
    },
    "When_QB_sees_man_coverage": {
        "throws_to_slot": 0.28,
        "throws_outside": 0.47,  # More outside throws vs man
        "checks_to_RB": 0.15,
        "scrambles": 0.10
    },
    "When_QB_sees_blitz": {
        "hot_route": 0.61,  # Immediate adjustment
        "checks_to_RB": 0.25,
        "deep_shot": 0.09,
        "scrambles": 0.05
    }
}
Key: Categorize history by what the player observed (their info state), not by the objective situation.
Why this matters:
Same objective situation can produce different info states:
Objective: "Cover 2 defense"
QB with experience: Sees cover 2, knows tendencies, confident
QB without experience: Sees cover 2, uncertain, hesitant
Same defense, different info states → different decisions
The model must capture the player's interpretation of what they see, not just what objectively exists.
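This interpretation gap can be sketched directly. A hypothetical helper, with illustrative field names and experience sets - the point is only that one objective state maps to different info states:

```python
# Hypothetical sketch: the same objective coverage produces different
# information states depending on what the QB can recognize.
def qb_info_state(objective_coverage, known_reads):
    recognized = objective_coverage in known_reads
    return {
        "raw_look": objective_coverage,
        "identified": objective_coverage if recognized else "unknown_shell",
        "confidence": "high" if recognized else "low",
    }

veteran = qb_info_state("cover_2", {"cover_2", "cover_3", "man"})
rookie = qb_info_state("cover_2", {"man"})
assert veteran != rookie  # same defense, different info states
```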
Game outcomes emerge from interaction of player beliefs:
Traditional aggregate model:
P(completion_to_receiver_A) = receiver_A_target_share × situation_modifier
= 0.22 × 1.3 (3rd down) = 0.286 (28.6%)
Player-POV composition model:
Step 1: QB’s subjective probability of throwing to A
QB_info = {sees: "cover_2_shell", receiver_A: "slot_left", down: "3rd_and_6"}
# From QB's historical decisions when he saw similar info
QB_history_in_similar_info_states = query_database(QB_id, QB_info)
P(QB_throws_to_A | QB_info_state) = 0.42 # 42% based on QB's POV history
Step 2: Defender’s subjective probability of covering A
Defender_info = {
    formation: "11_personnel",
    receiver_A_alignment: "slot",
    assignment: "zone_drop",
    QB_eyes: "looking_at_A_pre_snap"
}
# From defender's historical decisions when he saw similar info
Defender_history_in_similar_info_states = query_database(Defender_id, Defender_info)
P(Defender_covers_A | Defender_info_state) = 0.73 # 73% stays in zone on A
Step 3: Receiver’s subjective probability of route execution
Receiver_info = {
    route: "slant",
    defender_alignment: "off_coverage",
    QB_tends_to_throw: "on_time"
}
P(Receiver_beats_coverage | Receiver_info_state) = 0.35 # 35% gets separation
Step 4: Compose subjective probabilities
P(completion) = P(QB_throws_to_A) ×
                P(Receiver_gets_separation) ×
                (1 - P(Defender_disrupts | stays_on_A))
              = 0.42 × 0.35 × (1 - 0.73 × 0.80)
              = 0.42 × 0.35 × 0.416
              = 0.061  (6.1%)
Market consensus: 28.6% (from aggregate model)
Player-POV model: 6.1% (from subjective composition)
Edge: 22.5 percentage points of underpricing on the incompletion.
Bet the incompletion - the market doesn't see that the player-POV interaction produces a low completion probability.
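Step 4 can be written as a short runnable check. The probabilities are the worked numbers from Steps 1-3; the function name is illustrative:

```python
# Compose the player-POV probabilities into a completion estimate.
def compose_completion_probability(p_qb_throws, p_separation,
                                   p_defender_stays, p_disrupt_if_stays):
    p_not_disrupted = 1 - p_defender_stays * p_disrupt_if_stays
    return p_qb_throws * p_separation * p_not_disrupted

p_completion = compose_completion_probability(0.42, 0.35, 0.73, 0.80)
print(round(p_completion, 3))  # 0.061, vs the market's 0.286
```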
The market has the same information access - but it models at the wrong level:
Market: “What percentage of time does this outcome happen in this situation?”
Aggregate over all games with similar:
- Down/distance
- Field position
- Score differential
- Offensive/defensive personnel
→ Produces aggregate probability
You: “What does each player observe, and how do they historically decide in similar information states?”
For each player:
- Extract their observable information
- Query their historical decisions when they saw similar info
- Predict their probabilistic decision from their POV
Compose interactions:
- QB believes receiver will be open (based on his read)
- Defender believes he'll cover (based on his assignment)
- Outcome = interaction of these beliefs
→ Produces composed subjective probability
Your edge = modeling at cognitive level (what players think) vs aggregate level (what teams do)
No speed advantage needed - edge comes from better model of decision-making process, not faster observation.
For each player, build database of historical decisions indexed by their information state:
player_decision_database = {
    player_id: {
        # Information state as key
        info_state_hash: {
            "situations": [
                {
                    "game_id": "2024_week_3",
                    "play_number": 42,
                    "player_observed": {...},  # Their POV at decision moment
                    "player_decision": "threw_to_receiver_A",
                    "outcome": "completion_12_yards"
                },
                # More situations with similar info state
            ],
            "decision_distribution": {
                "threw_to_receiver_A": 0.38,
                "checked_to_RB": 0.31,
                "threw_to_receiver_B": 0.22,
                "scrambled": 0.09
            }
        },
        # More info state hashes...
    }
}
Key: Hash is computed from what player could observe, not objective game state
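One way to compute that key - a sketch, with `hash_info_state` as a hypothetical helper - is to hash a canonical serialization of the observable features. This assumes features are already discretized (coverage labels, down-and-distance buckets); exact hashing only groups identical states, so near-matches still need the similarity search:

```python
import hashlib
import json

def hash_info_state(info_state: dict) -> str:
    # Sorting keys makes logically equal states hash identically.
    canonical = json.dumps(info_state, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

a = hash_info_state({"coverage": "cover_2", "down": "3rd_and_6"})
b = hash_info_state({"down": "3rd_and_6", "coverage": "cover_2"})
assert a == b  # key order doesn't matter; the info state does
```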
From objective game state, extract what each player observes:
def extract_QB_information_state(objective_state, QB_position):
    """
    What can the QB see/know from their perspective?
    """
    return {
        "pre_snap_defensive_alignment": observable_from_QB_position(
            objective_state.defense_positions,
            QB_position.sight_lines,
        ),
        "offensive_play_call": QB_position.called_play,
        "receiver_routes": QB_position.knows_routes,
        "down_distance": objective_state.down_distance,
        "time_remaining": objective_state.clock,
        "score_differential": objective_state.score,
        # NOT included (can't observe):
        # - Defender actual assignments (learns post-snap)
        # - Receiver separation (learns post-snap)
        # - Exact pass rush timing (estimates from experience)
    }
def extract_Defender_information_state(objective_state, Defender_position):
    """
    What can the defender see/know from their perspective?
    """
    return {
        "offensive_formation": observable_from_Defender_position(
            objective_state.offense_alignment,
            Defender_position.sight_lines,
        ),
        "my_assignment": Defender_position.zone_responsibility,
        "receiver_in_my_area": Defender_position.coverage_target,
        "QB_eyes_direction": observable_QB_gaze(objective_state.QB_position),
        "down_distance": objective_state.down_distance,
        # NOT included:
        # - QB's intended target (learns after throw)
        # - Other defenders' exact positions (peripheral awareness only)
        # - Offensive play call (infers from formation)
    }
When predicting decision, find similar historical information states:
from collections import Counter

def predict_decision(player_id, current_info_state, database):
    """
    Find similar info states from the player's history, predict decision.
    """
    # Find k-nearest-neighbor info states
    similar_states = find_similar_info_states(
        current_info_state,
        database[player_id],
        k=20,
        similarity_metric=info_state_distance,
    )
    # Aggregate decisions from similar states
    decision_counts = Counter()
    for state in similar_states:
        for situation in state["situations"]:
            decision_counts[situation["player_decision"]] += 1
    # Normalize to a probability distribution
    total = sum(decision_counts.values())
    decision_probability = {
        decision: count / total
        for decision, count in decision_counts.items()
    }
    return decision_probability
Similarity metric must respect information structure:
def info_state_distance(state_A, state_B):
    """
    Distance between two information states.
    Weight components by relevance to the decision.
    """
    distance = 0
    # Critical features (high weight)
    distance += 5.0 * feature_diff(state_A.defensive_coverage, state_B.defensive_coverage)
    distance += 4.0 * feature_diff(state_A.down_distance, state_B.down_distance)
    # Important features (medium weight)
    distance += 2.0 * feature_diff(state_A.receiver_alignment, state_B.receiver_alignment)
    distance += 2.0 * feature_diff(state_A.score_differential, state_B.score_differential)
    # Contextual features (low weight)
    distance += 1.0 * feature_diff(state_A.time_remaining, state_B.time_remaining)
    distance += 1.0 * feature_diff(state_A.field_position, state_B.field_position)
    return distance
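As a toy, the weighted distance and the nearest-neighbor lookup can be combined end to end. Everything here is illustrative - the feature names, the weights, and the three-play history are not real data:

```python
from collections import Counter

# Categorical weighted distance: 0 if a feature matches, its weight if not.
WEIGHTS = {"coverage": 5.0, "down_distance": 4.0, "alignment": 2.0}

def toy_info_state_distance(a, b):
    return sum(w * (a.get(f) != b.get(f)) for f, w in WEIGHTS.items())

def knn_decision_distribution(current, history, k=3):
    """history: list of (info_state, decision) pairs from one player."""
    nearest = sorted(history, key=lambda h: toy_info_state_distance(current, h[0]))[:k]
    counts = Counter(decision for _, decision in nearest)
    total = sum(counts.values())
    return {decision: n / total for decision, n in counts.items()}

history = [
    ({"coverage": "cover_2", "down_distance": "3rd_6", "alignment": "slot"}, "throw_slot"),
    ({"coverage": "cover_2", "down_distance": "3rd_6", "alignment": "wide"}, "throw_slot"),
    ({"coverage": "man", "down_distance": "1st_10", "alignment": "slot"}, "throw_outside"),
]
current = {"coverage": "cover_2", "down_distance": "3rd_6", "alignment": "slot"}
dist = knn_decision_distribution(current, history)
# dist assigns 2/3 to "throw_slot", 1/3 to "throw_outside"
```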
Connection to mesh intersubjectivity:
Football play = distributed coordination problem
QB’s decision process:
QB believes: "Defender will drop to zone on receiver A" (65% confidence from his read)
QB decides: "Throw to A has 70% success" (conditional on his belief)
QB action: Throws to A with 0.65 × 0.70 = 0.455 (45.5% subjective probability of success)
Defender’s decision process:
Defender believes: "QB will target receiver A" (80% confidence from QB eyes)
Defender decides: "Stay in zone to cover A" (90% probability given his belief)
Defender action: Covers A with 0.80 × 0.90 = 0.72 (72% subjective probability of coverage)
Objective outcome = interaction of subjective beliefs:
QB throws to A: 0.455 probability (from his POV)
Defender covers A: 0.72 probability (from his POV)
Actual completion: 0.455 × (1 - 0.72 × success_rate_when_covered)
                 = 0.455 × (1 - 0.72 × 0.25)
                 = 0.455 × 0.82
                 = 0.373  (37.3%)
But QB doesn't know defender will actually cover (he estimates 65%, actual is 72%)
And defender doesn't know QB will actually throw (he estimates 80%, actual is 45.5%)
→ Beliefs are imperfect estimates of each other's beliefs
→ Coordination is imperfect (intersubjective, not objective)
→ Outcome emerges from belief interaction, not perfect information game theory
This is exactly intersubjective coordination:
Market models this as an aggregate: "Team completes 65% on 3rd down." You model it as intersubjective: "QB believes X, defender believes Y, outcome = the interaction of the two."
Market inefficiency emerges from aggregate modeling:
Market sees:
Historical data: Team completes 65% of passes on 3rd and 6
Current situation: 3rd and 6
Market odds: Imply ~62% completion (slight adjustment for specifics)
Your model sees:
QB info state: Sees cover 2, historically throws to slot 42% in this read
Defender info state: Zone on slot, historically stays 73% with this alignment
Receiver info state: Running slant, historically beats this coverage 35%
Composed probability: 0.42 × 0.35 × (1 - 0.73 × 0.80) = 6.1% completion
Market: 62%
Your model: 6.1%
Edge: 55.9 percentage points!
Why the market is so wrong: the aggregate model misses the conditional, player-specific tendencies - this QB's throw rates given this read, this defender's coverage behavior given this alignment, this receiver's separation rate against this coverage. These patterns are only visible at the player-POV level, not at the team-aggregate level.
1. Video data pipeline
- Extract objective game state from broadcast/tracking
- Timestamped: pre-snap, post-snap, throw, reception
2. Information state extractor
- For each player, determine observable information
- Hash information state for database lookup
3. Player decision database
- Historical decisions indexed by info state
- Updated continuously with new games
4. Prediction engine
- Query database for each player's similar info states
- Compute decision probability from their POV
- Compose subjective probabilities into outcome prediction
5. Market odds scraper
- Current betting market consensus
- Compare to player-POV prediction
6. Expected value calculator
- Identify mispriced outcomes
- Position sizing based on edge magnitude
Not real-time feeds - just comprehensive historical data:
# For each game
game_data = {
    "plays": [
        {
            "situation": objective_game_state,
            "player_observations": {
                QB_id: QB_info_state,
                receiver_A_id: receiver_A_info_state,
                defender_id: defender_info_state,
                # For all relevant players
            },
            "decisions": {
                QB_id: "threw_to_receiver_A",
                receiver_A_id: "ran_slant_route",
                defender_id: "stayed_in_zone"
            },
            "outcome": "completion_8_yards"
        },
        # All plays in the game
    ]
}
Build this database across seasons. No speed needed - prediction quality comes from model depth, not observation latency.
def find_betting_opportunities(upcoming_game):
    """
    Predict play outcomes from player POV, compare to market.
    """
    opportunities = []
    for play_situation in upcoming_game.likely_situations:
        # Predict from player-POV composition
        player_pov_prediction = predict_from_subjective_composition(
            play_situation,
            player_databases,
        )
        # Get market consensus
        market_odds = scrape_market_consensus(play_situation)
        market_probability = implied_probability(market_odds)
        # Calculate edge
        edge = abs(player_pov_prediction - market_probability)
        if edge > threshold:  # e.g., 10% edge
            opportunities.append({
                "situation": play_situation,
                "prediction": player_pov_prediction,
                "market": market_odds,
                "edge": edge,
                "bet_direction": "over" if player_pov_prediction > market_probability else "under",
            })
    return opportunities
# Kelly criterion based on edge
def position_size(edge, odds, bankroll):
"""
Optimal bet size given probability edge.
"""
probability = edge_adjusted_probability(odds, edge)
kelly_fraction = (probability * odds - 1) / (odds - 1)
# Use fractional Kelly for safety
bet_size = bankroll * kelly_fraction * 0.25
return bet_size
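A worked quarter-Kelly example under decimal odds. The 0.20 model probability and 6.0 odds are illustrative, not from the text:

```python
def kelly_fraction(p, decimal_odds):
    # Full Kelly for decimal odds: f = (p * d - 1) / (d - 1)
    return (p * decimal_odds - 1) / (decimal_odds - 1)

bankroll = 10_000
stake = bankroll * kelly_fraction(0.20, 6.0) * 0.25  # quarter Kelly
print(round(stake, 2))  # 100.0
```

A fair price for a 20% outcome is 5.0, so odds of 6.0 carry positive expected value; full Kelly stakes 4% of bankroll, quarter Kelly 1%.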
Macro bets (easier to access): game winners, spreads, totals, and player props that aggregate over many plays.
Micro bets (if available): outcomes of individual plays or drives.
The edge exists at all granularities - player-POV composition works for macro aggregation too.
Requires different thinking:
- Conceptual shift: from aggregate to subjective
- Data complexity: information state extraction
- Modeling difficulty: composition of probabilities
- Not obvious: the market uses aggregates because they're simpler
But the edge is real:
Games are played by individuals with partial information, not by teams with a god's-eye view. Model the individuals' subjective states, compose their decisions, and predict outcomes better than aggregate models can.
Any game/situation with distributed agents, partial information, and rich historical decision data fits the same approach.
Universal pattern:
1. Extract what each agent observes (information state)
2. Build decision model from their POV (historical patterns)
3. Compose subjective probabilities (interaction of beliefs)
4. Predict outcome better than aggregate models
Traditional AI/ML prediction:
Input: Complete objective state
Model: f(state) → outcome
Training: Learn f from historical (state, outcome) pairs
Player-POV prediction:
Input: Objective state
Step 1: Extract subjective information states for each agent
Step 2: For each agent, predict decision from their POV
Model_agent: f_agent(info_state_agent) → decision_probability_agent
Step 3: Compose subjective decisions into outcome
Outcome = interaction(decision_probability_agent1, decision_probability_agent2, ...)
Why this works better:
Agents don’t operate on complete information. They operate on their beliefs formed from their observations.
Modeling beliefs > Modeling aggregate outcomes
This is modeling intersubjective coordination at cognitive level.
Same pattern as mesh coordination with lag:
Player-POV prediction = modeling the cognitive substrate of game dynamics
Not faster observation, not more data - better model of how distributed agents coordinate under partial information.
#PlayerPOV #SubjectiveInformationStates #CognitiveModeling #DistributedDecisions #PartialInformation #IntersubjectiveCoordination #PlayerBeliefs #ComposedProbabilities #InformationAsymmetry #DecisionPrediction #BettingEdge #GameTheory #ImperfectInformation #BehavioralPatterns #SubjectiveProbability #BeliefInteraction #PlayerHistory #POVModeling #CoordinationPrediction #AgentBasedModeling