Monday math links

What exactly are all those real numbers? This reminds me of one of my favorite parts of studying math.  There are wayyyy more real numbers than rational numbers. It’s really hard to get a handle on what those “extra” numbers are, and yet you need all of them to do analysis. (Here “analysis” is a specific term meaning, very roughly, “the stuff you need to make calculus work.”)

For a bit of background, see the Hilbert Hotel, a classic thought experiment.  What the hell does it mean for one infinite thing to be “bigger” than some other infinite thing?


Shots on goal part 2: the binomial distribution

Let’s say you and a hundred friends each have a quarter.  After marveling at your newfound wealth, you each flip your coin ten times and write down how many times heads came up.  A bunch of you end up with five heads, with four and six not far behind.  People with three or seven heads are pretty rare.  And just one person in your crew got heads only once.  She nonchalantly explains that she’s a “clutch flipper” with “ice water in her veins.”

At this point your BS detector should be ringing. But remember that if your friend really could bias her coin towards tails, the results would look exactly the same– you’d just demand stronger evidence to believe it.

OK, now change the scenario.  Instead of 100 friends, you have 100 MLS teams.  Instead of flipping coins with a 50-50 chance of landing heads, teams take shots, each of which has a 26% chance of going in.  One more difference: instead of 10 flips each, the teams take different numbers of shots– between 100 and 200, say.  What would we see?

Both these hypothetical scenarios would follow the binomial distribution, which is just a basic statistical tool for describing these kinds of models.  What it lets us do is take a real-world event and say exactly how likely that event would be, if our simple model were accurate.

For example: take your friend with all the tails as our real-world event.  Our model is that each coin flip has a 50-50 chance of heads.  With our model, we can plug in 1 heads, 10 tries, p = 0.5 to a formula and find that there’s only a 1.07% chance of getting one or fewer heads in ten flips.  That’s pretty unlikely– does that mean your friend really can bias the coin?  Not at all.  Remember that this was the most extreme of all the trials.  You’d expect something with p = 0.01 to happen roughly once in a hundred tries.

This is a really roundabout way of getting back to the shots on goal data from last week.  Take DCU, with 4 goals in 28 shots.  If their true shooting percentage were 26.2%, they’d have a 10.7% chance of scoring four goals or fewer in 28 tries.  Again, that’s low, but the only reason we’re looking at DCU in the first place is because they’re the worst shooting team of the season.
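Both of those tail probabilities are easy to check yourself.  Here’s a minimal sketch in Python, using only the standard library; the quoted numbers (1.07% and 10.7%) fall out directly.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p): the chance of k or fewer
    successes in n independent tries, each with success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# One or fewer heads in ten fair flips:
print(binom_cdf(1, 10, 0.5))    # ~0.0107, i.e. about 1.07%

# Four or fewer goals in 28 shots at a 26.2% shooting percentage:
print(binom_cdf(4, 28, 0.262))  # ~0.107, i.e. about 10.7%
```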

For each MLS team since 2005, I calculated the probability of seeing that team’s goal count given a 26.2% shooting percentage; statisticians sometimes call this figure a “p-value.”  Here are the most extreme teams:

year  club  goals  sog  shot pct  p-value
2007  TFC      25  152     16.4%   0.0055
2008  LA       55  161     34.1%   0.0249
2005  RSL      30  157     19.1%   0.0456
2010  Hou      11   26     42.3%   0.0737
2005  Hou      53  164     32.3%   0.0762

(Note: if you’re playing along at home, you may notice that I used a two-tailed test here, whereas the examples above are based on a one-sided test.)

Given that we have 82 total seasons in our data set, it’s really hard to just look at the most extreme p-values and figure out if they’re fishy or not.  Fortunately you can construct a formula for aggregating these p-values, and even more fortunately a fellow named Lou Jost did so.  Using Jost’s formula we get an aggregate p-value of 0.843– i.e., there is an 84.3% chance of seeing a distribution like this, or more extreme, given our model.  In other words, in aggregate, MLS shooting percentages are consistent with our model.
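For the curious, the combining formula rests on a standard fact: if you multiply together k independent p-values and get a product P, the probability of seeing a product that small or smaller is P times a correction series.  Here’s a sketch of that calculation (my reading of Jost’s formula, so treat the attribution as an assumption):

```python
from math import exp, log

def jost_combined_p(p_values):
    """Aggregate k independent p-values: the probability that the
    product of k uniform(0,1) draws is <= the observed product P,
    namely P * sum_{j=0}^{k-1} (ln(1/P))**j / j!."""
    k = len(p_values)
    log_prod = sum(log(p) for p in p_values)  # ln(P), kept in log space
    x = -log_prod                             # ln(1/P) >= 0
    term, total = 1.0, 1.0                    # the j = 0 term of the series
    for j in range(1, k):
        term *= x / j                         # builds x**j / j! iteratively
        total += term
    return exp(log_prod) * total

# Sanity checks: a single p-value comes back unchanged,
# and two p-values of 0.1 combine to about 0.056.
print(jost_combined_p([0.5]))
print(jost_combined_p([0.1, 0.1]))
```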

What does this mean?  Do I really believe that every on-target shot in MLS has a 26.2% chance of going in?  No.  My claim is that the shot-to-shot variances are small and cancel out over the course of a season.  This model (like all models) is a simplification, but it’s close enough to reality that we can use it as a starting point.

For the soccer infovis hall of fame

Check out this fantastic World Cup visualization from Section Design:

The World Cup predicted from Section Design

I love this visualization– gorgeous and subtle.  But what of the model?

The population/GDP thing is basically a way to measure a country’s resources.  It’s got nothing to do with soccer per se; you could use a similar model to estimate the number of Olympic medals a country will win, for example.

But check the massive fudge factor!  The writeup says that the model uses “mostly economic data,” but the coefficient for “experience” dwarfs population and GDP.  What does that mean, exactly?  Consider Group E.  The Netherlands are slightly wealthier than Japan, but Japan has over seven times the population.  Yet the Oranje are projected to do much better, clearly because of “experience.”  Now that is consistent with what the typical soccer fan would expect in a match between those countries.  But compare that to Group D.  Germany have about 10x the population of Serbia and around 7x the per-capita GDP.  Yet Serbia are projected to win the group, which is the opposite of the conventional wisdom.  So clearly Serbia are a powerhouse in the “experience” department.

From the knockout round projections, we can deduce that Serbia is second only to Brazil in experience.  Argentina is way down near the bottom with 14% of Brazil’s experience.  I’m scratching my head trying to come up with a definition of “experience” that meets those criteria.

Finally, if you subtract out the population and GDP components from the knockout round projections, you only change the outcome of one match: England-Germany, which is a statistical tie either way.  Presenting these projections as “based on mostly economic data” is pretty misleading.

(HT: mikeyk)

MLS Week 8 preview

FC Dallas at Philadelphia Union. I find FC Dallas to be a sort of forgettable team, so I guess that’s something I have in common with most Texans.  So I was pretty surprised to see that they have the highest offense projection according to my crude “shots matter” model.  They should have a good chance to demonstrate that against Philly, who really aren’t much good at anything.

San Jose Earthquakes at New England Revolution. In late 2006 I was convinced that Bobby Convey and Benny Feilhaber were going to be indispensable players by South Africa.  In late 2009 I was looking pretty stupid.  Curiously, Feilhaber has played his way back into the USMNT picture, and Convey has come back to life a bit recently, to the point where you can kinda-sorta remember that he was once a starter on an EPL team.  There is no moral to this story.  BTW, Convey-to-Ryan Johnson against the Red Bulls last week was the most beautiful goal I’ve seen in a while (around 2:20 in):

Colorado Rapids at DC United. Oh man.  I don’t want to think about DCU any more at this point.  Instead, let’s just reminisce about happier times:

Toronto FC at Los Angeles Galaxy. LA are scoring at an insane pace, and the “shots on goal” model projects them to slow down a lot.  They certainly look like a fabulous offense, though, and they seem like just the kind of team to make me look stupid.  Landon Donovan is as good as advertised at this point.  On the other hand, Edson Buddle is older than I am and has never been better than solid until now.  He can’t possibly keep up his 34-goal pace, right?  (Just to save you the trouble of looking it up, Roy Lassiter has the record with 27.)

Chivas USA at Columbus Crew. This “previewing every match” thing turns out to be pretty tough.  My cunning plan was to stockpile data so I could comb through it and bring subtle analysis.  But I’ve barely gotten started crunching the numbers.  So … C-bus look pretty good so far, but they’ve also played the fewest matches in the league.  I got nothing here.

When should you bench a player in foul trouble?

(In basketball, that is.)  You shouldn’t, says Jonathan Weinstein at Leisure of the Theory Class.

This is actually pretty obvious, and benching your starters fails the opposing coach test.  Namely: will the opposing coach be happy or sad when it happens?  The fun part of the article is coming up with situations where the conventional wisdom makes sense.  Here’s my best attempt: can we imagine a situation where minutes at the end of the game are especially valuable?

Suppose two teams are separated by a modest gap in average skill, so that team A would be expected to outscore team B by (let’s say) 5 points over 100 possessions.  In this case, the underdogs would benefit from high variance in their level of play: if they play far enough above their average for long enough, they might pull off the upset.  And there’s no downside risk if they underperform– they’ll just lose by more.  By the same reasoning, team A would want to minimize their variance.

Now suppose it’s late in the game and the underdogs are holding a lead.  This lead is large enough that team B is now favored to win.  (Team A is still expected to outscore B slightly over the rest of the game, but there’s not enough time left for A to make up the gap.)  In this case, team B would maximize their chances to win by minimizing their variance.  So a low-variance player is now especially valuable.  And team A has the opposite problem.

In other words: you always want to maximize the average skill on the floor, but the optimal amount of variance changes with game situation.  So you may want to reserve some low- or high-variance players for the end of the game, if you can do so without drastically changing your average skill.
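To put rough numbers on the underdog’s side of this, here’s a sketch using a normal approximation.  The 5-point gap comes from the setup above; the standard deviations are made-up illustrations, not estimates from real games.

```python
from math import erfc, sqrt

def upset_probability(skill_gap, sd):
    """P(underdog outscores the favorite) when the final margin is
    modeled as Normal(-skill_gap, sd) from the underdog's point of view.
    Uses the identity P(Z > a) = erfc(a / sqrt(2)) / 2."""
    return 0.5 * erfc((skill_gap / sd) / sqrt(2))

# Favorite is +5 expected points; the upset chance climbs toward 50%
# as the variance of the outcome grows:
for sd in (5, 10, 20):
    print(sd, round(upset_probability(5, sd), 3))
```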

P.S. In soccer, you could actually make a pretty legitimate case for substituting a player with an early yellow card, since you can’t replace the player when he’s sent off.  Yet I have never heard anyone advocate this strategy.  (Of course, you need the probability of getting a second yellow to be pretty high before this makes sense.)

(HT: Marginal Revolution)

Is it June 12 yet?

The Fink Tank’s annual highly scientific yet lightly documented list of the most valuable players in the Premier League is out.  Notice anything?

Gun-toting, head-banging, bald-headed, Seattle-born Marcus Hahnemann is the top rated keeper.  And third overall!

Better yet, the man from a trailer park in Nacogdoches, Mr. Clint Dempsey himself, ranks 21st.  He is not even a goalkeeper.

Hell yeah.  Don’t tread.

Do goals matter?

Last week I watched D.C. United’s frustrating match against the Red Bulls.  Early on DC had many excellent shots but couldn’t get a goal; then NYRB got two late goals to win.  I couldn’t get this week’s game against FC Dallas on TV, but word on the twitternets was that it was a similar story.  I wondered if DC’s offense could really be as dismal as it seemed, or if they had a run of bad luck with good shots.

The good news: no, DCU probably aren’t as bad as they seem.  The bad news: they probably are pretty bad.

Here’s why.  DC have converted a dismal 14% of shots on goal this season, by far the worst in the league.  See here:

2010 goals vs shots on goal

(All data from the official MLS site, and I use per-game stats everywhere to account for the changing season lengths.)

The line represents the long-term trend of 26.2% of shots on goal going in the net.  You can see that DC are far below the trend. Can this low rate be sustained?  I speculated that shots on goal are a better measure of a team’s offense than actual goals scored. The Fink Tank has often hinted that shots on goal are important, and DCU’s maddening performance against NYRB prompted me to investigate further.

Of course, goals are a much better measure of, you know, winning the match– but in the long term, better offenses score goals by taking more quality shots.  Whether any shot actually goes in is basically random.

I pulled data on goals and shots on goal from the last five MLS seasons. Take a look at this plot, showing the distribution of shot percentage by team:

MLS shot percentage by year

Notice anything?  Not only does the worst shooting percentage happen this year (yup, that’s DCU), but so does the third worst.  And the five best!  Either this season is seriously wacky, or (more likely) it’s simply the effect of small sample size.  For full seasons, shot percentages are pretty tightly clumped in the 20%–30% range, and we can expect this season to converge as it goes on.  Basically, MLS teams put roughly 1 in 4 shots on goal in the net.  Contrast the plot of shots on goal per game:

MLS shots on goal per game

First, the historical distribution is spread out evenly instead of clumped around the middle.  Furthermore, the 2010 season does have three outliers on the low end, but it looks like a better fit than the shot percentage graph.
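The small-sample-size point can be made precise: under a binomial model where every shot goes in with the same probability, the spread of observed shooting percentages shrinks like one over the square root of the number of shots.  A quick sketch (the shot counts are illustrative, not taken from the data):

```python
from math import sqrt

def shot_pct_sd(p, n_shots):
    """Standard deviation of the observed shooting percentage over
    n_shots, under a binomial model with true per-shot probability p."""
    return sqrt(p * (1 - p) / n_shots)

p = 0.262
print(shot_pct_sd(p, 30))   # a few games in: sd of roughly 8 points
print(shot_pct_sd(p, 160))  # a full season: sd of roughly 3.5 points
```

That threefold difference in spread is enough to explain why the partial 2010 season produces both the best and worst shooting percentages in the plot.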

We can also look at year-over-year correlations, a trick I copped from Football Outsiders. If shots on goal are a better indicator of a team’s offensive quality, then shots on goal this year should be a better predictor of goals next year than goals this year are.  And the data back this up.  The correlation of goals to goals next year is 0.165; the correlation of shots on goal to goals next year is 0.307.  Correlation is a pretty coarse tool, but this is a fairly solid indicator that shots on goal are a more stable measure of offensive quality.

If we wanted a more rigorous analysis, we could bust out the binomial distribution.  And in fact I did so, but the details are longish so I’ll save it for a future post.

For the moment, let’s suppose that all teams will converge on the magic 26.2% shot percentage.  What would the rest of the season look like?  We can project the number of goals this season using a couple extremely simple models:

  • The “goals matter” model: assume each team will score the same number of goals per game for the rest of the season.
  • The “shots matter” model: assume each team will take the same number of shots on goal per game for the rest of the season and convert on 26.2% of them.  Ignore their current shot percentage.

Here are the results:

club    games  goals  sog  shots matter  goals matter   diff
Cbus        5      9   23          39.1          54.0  -14.9
LA          8     15   39          43.1          56.3  -13.2
RSL         7     12   31          38.7          51.4  -12.7
Hou         8     11   26          29.7          41.3  -11.5
TFC         7     11   29          36.0          47.1  -11.2
SJ          6     11   32          44.5          55.0  -10.5
NYRB        7      8   24          28.7          34.3   -5.6
Chivas      8     10   34          34.5          37.5   -3.0
Sea         8      8   28          28.2          30.0   -1.8
NE          8     10   36          35.9          37.5   -1.6
Colo        7      8   32          35.5          34.3    1.3
Phi         6      6   25          32.2          30.0    2.2
Chi         7      9   37          40.9          38.6    2.3
FCD         7      9   42          45.2          38.6    6.6
DCU         7      4   28          28.1          17.1   11.0
KC          6      6   35          42.7          30.0   12.7
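The two projections in the table can be reproduced in a few lines, assuming a 30-game season (the length that makes the table’s numbers work out):

```python
SEASON_GAMES = 30   # assumed regular-season length
TREND = 0.262       # long-term league shot percentage

def goals_matter(games, goals):
    """Project season goals by extending the current goals-per-game rate."""
    return goals / games * SEASON_GAMES

def shots_matter(games, goals, sog):
    """Keep the goals already scored, then assume the current
    shots-on-goal rate continues and 26.2% of future shots go in."""
    remaining = SEASON_GAMES - games
    return goals + remaining * (sog / games) * TREND

# DCU through 7 games: 4 goals on 28 shots on goal
print(round(goals_matter(7, 4), 1))      # the 17.1-goal pace from the table
print(round(shots_matter(7, 4, 28), 1))  # 28.1 under the shots model
```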

So DCU are on a 17-goal pace, which is so bad there’s nothing in my data set even close.  The worst offense of the past 5 years was TFC’s dismal 2007 season (25 goals).  But based on shots, they look more like a 28-goal team, which is, uh, still pretty bad.  But just regular bad, like Real Salt Lake 2005 or the Pink Cows from last year.  On the flip side, the Galaxy are on a 56-goal pace, which would match DCU’s 2007 Supporters’ Shield season (the best in my data set).  But shots indicate more like 43 goals, which is still quite solid, on par with Houston’s MLS Cup season.

So this model would suggest that Columbus, LA, Real Salt Lake, Houston, Toronto, and San Jose are all overperforming on offense and will probably slow down some.  DC and Kansas City are way below trend and are likely to improve.  In Kansas City’s case, they’re already at a reasonable 1 goal per game pace and could evolve into a pretty scary offense.

Of course, we can only talk about probabilities, not facts. My most confident prediction is that at least one of the above predictions will be wrong.  But I’ll follow this over the course of the season and we’ll see how the model fits.