Following up on my previous post, I thought it might be interesting to look at the strength of the opposing lineups that individual players faced, rather than looking at five-man units as a whole. The basic idea is the same. I used lineup data from BasketballValue, and for each player I calculated a weighted average of the season defensive ratings of the opposing lineups that they faced, weighted by the number of possessions they played against each opposing lineup. In the tables below I excluded players who were on the court for less than 1000 offensive possessions.
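In code, the per-player calculation described above is just a possession-weighted average. Here's a minimal sketch (the data layout is hypothetical, not BasketballValue's actual format):

```python
# Sketch of the strength-of-opposition calculation described above.
# Each record is one (possessions, opposing lineup's season DRtg)
# pairing for a given player; the field layout is made up.

def opponent_drtg(stints):
    """Possession-weighted average of opposing-lineup defensive ratings."""
    total_poss = sum(poss for poss, _ in stints)
    if total_poss == 0:
        return None
    return sum(poss * drtg for poss, drtg in stints) / total_poss

# e.g. 600 possessions against a 108.0 defense, 400 against a 112.0 defense
stints = [(600, 108.0), (400, 112.0)]
print(round(opponent_drtg(stints), 1))  # 109.6
```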
Players who faced the weakest defenses:
Player Team(s) Poss oppDRtg
------------------- ------- ---- -------
Linas Kleiza DEN 3943 110.8
Sasha Vujacic LAL 2482 110.5
Vladimir Radmanovic LAL 2981 110.3
J.R. Smith DEN 2994 109.9
Kelenna Azubuike GSW 3574 109.8
Carl Landry HOU 1341 109.7
Andris Biedrins GSW 4278 109.6
Carlos Boozer UTA 5584 109.5
Stephen Jackson GSW 5914 109.5
Al Harrington GSW 4528 109.5
Carlos Arroyo ORL 2425 109.5
Jordan Farmar LAL 3330 109.4
Baron Davis GSW 6635 109.4
Dikembe Mutombo HOU 1152 109.3
Andrei Kirilenko UTA 4386 109.3
Steve Nash PHX 5641 109.3
Shelden Williams ATL/SAC 1508 109.3
Maurice Evans LAL/ORL 3329 109.2
Deron Williams UTA 6002 109.0
Hedo Turkoglu ORL 5910 109.0
That’s a pretty interesting list. There are a lot of players from great offensive teams. Maybe this is saying that those offenses weren’t so much great as they were lucky - they had the good fortune of facing weaker defensive lineups than other teams faced. But I don’t think this conclusion is warranted. I can think of a few other theories to explain some of the entries on this list.
Recently, 82games put up some pages listing the top five-man lineups from this past regular season in terms of plus/minus and points scored and allowed per possession. You can find similar rankings on BasketballValue. I wanted to go a step further and adjust each lineup’s ranking based on the quality of the opposing lineups that it faced during the season.
To do this I started with lineup data from BasketballValue. To adjust each lineup's offensive rating, I calculated a weighted average of the season defensive ratings of all the opposing lineups it faced, weighting each opposing lineup's defensive rating by the number of possessions the two lineups played against each other. This gave me, for each lineup, its own offensive rating and its average opponents' defensive rating. Subtracting the second from the first yields an adjusted measure of the lineup's offensive production: a lineup with a good offensive rating that played against poor defensive lineups is adjusted downward, while a lineup with a poor offensive rating that played against good defensive lineups is adjusted upward.
The adjustments I made were only one level deep. In college football ranking systems you sometimes see multi-level strength-of-schedule adjustments that take into account a team's record, its opponents' records, and its opponents' opponents' records. The same thing could be done here - I'm adjusting each lineup's offensive rating for its opponents' defensive ratings, but I could first adjust those opponents' defensive ratings for their opponents' offensive ratings. Theoretically one could carry this out to infinite depth, and I think the results would ultimately be similar to what you'd get from a regression-based method like the one Dan Rosenbaum uses for his adjusted plus/minus. But I'm only going to do one level of adjusting, partly because it can be calculated quickly with some pivot tables in Excel, and partly because you just don't gain that much the deeper you go. Over the course of a season things tend to even out, and most lineups end up facing a similar mix of good and bad opposing lineups. The variance in opponents' defensive ratings is much smaller than the variance in lineup offensive ratings, and the variance in opponents' opponents' offensive ratings would be smaller still.
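To make the diminishing-returns intuition concrete, here's a toy, self-contained sketch - not the post's actual method, with made-up teams and margins - of an SRS-style iterative rating where each pass adds one more level of opponent adjustment. The corrections shrink with every level, so the ratings settle quickly:

```python
# Toy illustration (not the lineup adjustment itself): re-rate each
# team as its raw per-game margin plus the average rating of its
# opponents. Each loop iteration is one more level of adjustment.

schedule = {            # team -> list of (opponent, point margin)
    "A": [("B", +6), ("C", +2)],
    "B": [("A", -6), ("C", +4)],
    "C": [("A", -2), ("B", -4)],
}

ratings = {t: 0.0 for t in schedule}
for _ in range(50):                       # depth of adjustment
    new = {}
    for team, games in schedule.items():
        raw = sum(margin for _, margin in games) / len(games)
        sos = sum(ratings[opp] for opp, _ in games) / len(games)
        new[team] = raw + sos
    ratings = new

print({t: round(r, 2) for t, r in ratings.items()})
# → {'A': 2.67, 'B': -0.67, 'C': -2.0}
```

After a few iterations the per-level correction is halved each pass, which is the sense in which going deeper "doesn't gain that much."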
Below are the adjusted rankings for offensive rating, defensive rating, and point differential. I excluded lineups that played together for less than 200 offensive possessions (or 200 defensive possessions). “ORtg” is the lineup’s offensive rating (points per 100 possessions), “oppDRtg” is the weighted average of the defensive ratings of the opposing lineups faced. “offDiff” is the additional points scored per 100 possessions over what would be expected based on the quality of the defenses faced. “DRtg”, “oppORtg”, and “defDiff” are the defensive counterparts to those stats. “totDiff” is the sum of “offDiff” and “defDiff”, which represents the additional point differential per 100 possessions over what would be expected based on the quality of the offenses and defenses faced.
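As a minimal numeric sketch of how those columns combine (the numbers are made up, and I'm assuming defDiff is computed as oppORtg minus DRtg, so that positive values are good on both ends):

```python
# Hypothetical lineup: all per-100-possession figures are invented.
ortg, opp_drtg = 114.2, 109.0   # own offense vs. defenses faced
drtg, opp_ortg = 103.5, 107.8   # own defense vs. offenses faced

off_diff = ortg - opp_drtg      # extra points scored vs. expectation
def_diff = opp_ortg - drtg      # extra points prevented vs. expectation
tot_diff = off_diff + def_diff  # adjusted point differential

print(round(off_diff, 1), round(def_diff, 1), round(tot_diff, 1))
# → 5.2 4.3 9.5
```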
Best and Worst Offensive Lineups:
I’m working on a longer study, but for now here are a few links.
Last week, the New York Times had an article on using Chernoff faces to visualize data about baseball managers. The Arbitrarian followed that up with a post that used Chernoff faces to compare some star players in the NBA. Chernoff faces are a way to display data by mapping it onto simplified human faces - you can read more about them here and here. The main reason I’m linking to these is because this method of visualizing data was invented by my great-uncle, Herman Chernoff. He’s made a lot of contributions to the field of statistics in his career, but most of them (like this) aren’t as fun as the faces that he came up with thirty-five years ago.
Here’s a long Q and A with Bill James that’s well worth reading. This response in particular caught my eye:
Q: Generally, who should have a larger role in evaluating college and minor league players: scouts or stat guys?
A: Ninety-five percent scouts, five percent stats. The thing is that — with the exception of a very few players like Ryan Braun — college players are so far away from the major leagues that even the best of them will have to improve tremendously in order to survive as major league players — thus, the knowledge of who will improve is vastly more important than the knowledge of who is good. Stats can tell you who is good, but they’re almost 100 percent useless when it comes to who will improve.
In addition to that, college baseball is substantially different from pro baseball, because of the non-wooden bats and because of the scheduling of games. So … you have to pretty much let the scouts do that.
These issues seem to me to be important in basketball as well, and I think they are a good starting point for thinking about the statistical analysis of sports. Taking them in reverse order, here’s one way of framing James’ points: