### Offensive and Defensive Adjusted Plus/Minus

I mentioned in my last post on calculating adjusted plus/minus that the next thing I wanted to do was split it into offensive and defensive adjusted plus/minus. Lior and Cherokee_ACB had some good suggestions about how to do that in the comments, but first I wanted to see if I could replicate Dan Rosenbaum’s original methodology. This was a little tricky because Dan didn’t spell out his process in detail, but after some trial and error I think I’ve been able to duplicate what he did. As a result, I’m able to calculate 2007-08 player rankings for offensive and defensive adjusted plus/minus, metrics that have not been available publicly since Rosenbaum last presented them in 2005.

#### The Method

Here is all Dan originally said about his method of splitting adjusted plus/minus into offensive and defensive components:

“I also present offensive and defensive ratings that are based on the pure adjusted plus/minus rating plus an “efficiency” rating that measures how many points per possession are scored by both teams when a given player is one the floor. By combining these two measures, I create offensive and defensive ratings. However, given that I am using two imprecisely estimated ratings to arrive at these offensive ratings, I suspect these rating are measured with quite a bit of error.”

I think I’ve been able to piece together what he meant, and the methodology is actually very clever. Instead of doing one regression for offense and one for defense, or one large regression containing both, this method starts with the regression for total adjusted plus/minus and then combines its results with a second regression that tells what portion of each player’s value came from his offense versus his defense.

The first step is to run the adjusted plus/minus regression as described in my previous post. The dependent variable is MARGIN (point differential per 100 possessions) and the independent variables are all the players in the league (other than the omitted low-minute reference players). Players are coded 1 if they were on the court for the home team for that observation, -1 if they were on the court for the away team, and 0 if they were not on the court. After the regression is run the player coefficients are re-centered so that the league average adjusted plus/minus is zero.
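To make the setup concrete, here is a minimal sketch of that first regression in NumPy. All of the data below is randomly generated and hypothetical; a real design matrix would have one row per stint, typically weighted by possessions, and the re-centering would likely be minutes-weighted rather than a simple mean:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_players = 200, 12  # toy sizes; real data has thousands of stints

# Design matrix: +1 if the player was on the court for the home team,
# -1 if on the court for the away team, 0 if not on the court.
X = rng.choice([-1, 0, 1], size=(n_obs, n_players)).astype(float)

# Dependent variable: MARGIN, home minus away points per 100 possessions.
margin = rng.normal(0.0, 10.0, size=n_obs)

# Ordinary least squares fit of MARGIN on the player indicators.
coefs, *_ = np.linalg.lstsq(X, margin, rcond=None)

# Re-center so the league-average adjusted plus/minus is zero
# (a simple unweighted mean here, as a simplification).
apm = coefs - coefs.mean()
```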

The second regression is similar to the first, but with two big changes. Instead of using MARGIN as the dependent variable, this one uses what Dan called “Efficiency”, but which I will call “DiffOD”.

DiffOD = 100*(HomePts/HomePoss) + 100*(AwayPts/AwayPoss)

DiffOD measures the difference between a team’s offensive strength and defensive strength. If a lineup is good offensively (high PtsScored/Poss) and bad defensively (high PtsAllowed/Poss), it will have a high DiffOD. If a lineup is good on both offense and defense, its DiffOD will land closer to average (pulled up by the good offense but pulled down by the good defense; the same holds in reverse if the team is bad on both ends). If it is bad on offense and good on defense, its DiffOD will be very low (low PtsScored/Poss and low PtsAllowed/Poss). So DiffOD can be seen as offensive strength minus defensive strength.

The other change in this regression is that both the home team’s players and the away team’s players that were on the court are coded as 1, with all players not on the court coded as 0. This is because MARGIN and DiffOD combine across teams differently: in MARGIN, one lineup’s high adjusted plus/minus players are neutralized by an opposing lineup’s high adjusted plus/minus players, but with DiffOD, a lineup made up of good offense/bad defense players facing an opposing lineup of good offense/bad defense players leads to an even higher DiffOD (more points per possession by both teams).
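A small sketch of the second regression’s two changes, with made-up numbers (the `diff_od` helper and every figure here are hypothetical):

```python
import numpy as np

def diff_od(home_pts, home_poss, away_pts, away_poss):
    # Points scored per 100 possessions plus points allowed per 100.
    return 100 * home_pts / home_poss + 100 * away_pts / away_poss

# Good offense / bad defense: both rates are high, so DiffOD is high.
high = diff_od(115, 100, 112, 100)  # 227.0
# Bad offense / good defense: both rates are low, so DiffOD is low.
low = diff_od(95, 100, 92, 100)     # 187.0

# Coding change: EVERY player on the court is coded +1, whichever team
# he is on; players not on the court are 0.
rng = np.random.default_rng(1)
n_obs, n_players = 200, 12
X = rng.choice([0, 1], size=(n_obs, n_players)).astype(float)
y = rng.normal(200.0, 10.0, size=n_obs)  # toy DiffOD observations

coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
adj_diff_od = coefs - coefs.mean()  # re-center around the league average
```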

The coefficients from this regression (once they are re-centered around the league average) give each player’s adjusted DiffOD. A high DiffOD means that controlling for teammates and opponents, that player leads to his team being better offensively than they are defensively. A player with a low DiffOD leads to his team being worse offensively than they are defensively. So in theory, a high DiffOD player contributes more to his team through his offense than his defense. Another way of saying this is that most of his value comes from his offense, or that he’s better on offense than he is on defense.

So now we have each player’s adjusted plus/minus and their adjusted DiffOD. We know their overall contributions, and the proportion of those contributions that came from their offense versus their defense. Now we just have to figure out how to combine these two numbers to calculate their offensive adjusted plus/minus and defensive adjusted plus/minus. The way I figured out to do this (which I’m guessing is the way Dan did it) is actually pretty simple.

First I went back and looked at the lineup-level dependent variables, MARGIN and DiffOD:

MARGIN = 100*(HomePts/HomePoss) - 100*(AwayPts/AwayPoss)

DiffOD = 100*(HomePts/HomePoss) + 100*(AwayPts/AwayPoss)

We can look at these in terms of offensive and defensive efficiency:

MARGIN = ORtg - DRtg

DiffOD = ORtg + DRtg

MARGIN + DiffOD = 2*ORtg

ORtg = (MARGIN + DiffOD)/2

DiffOD - MARGIN = 2*DRtg

DRtg = (DiffOD - MARGIN)/2

Converting this back to player adjusted plus/minus and player adjusted DiffOD, we get the following formulas:

Offensive Adjusted Plus/Minus = (Adjusted Plus/Minus + DiffOD)/2

Defensive Adjusted Plus/Minus = (Adjusted Plus/Minus - DiffOD)/2

You’ll notice that adjusted plus/minus and DiffOD flip-flopped in the formula for defensive adjusted plus/minus compared to the relationship between MARGIN and DiffOD in the DRtg formula. This is because while a lineup that plays good defense will have a low DRtg, I wanted to present defensive adjusted plus/minus so that a player who is a good defender has a high (rather than low) value.

Using these formulas we can take each player’s adjusted plus/minus and use their DiffOD to split it into offensive adjusted plus/minus and defensive adjusted plus/minus.
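The final split is simple enough to verify in a couple of lines (the input values here are made up):

```python
def split_apm(apm, adj_diff_od):
    """Split total adjusted plus/minus into offensive and defensive parts.

    The sign on adj_diff_od is flipped for defense so that a good
    defender gets a HIGH (rather than low) defensive value.
    """
    off = (apm + adj_diff_od) / 2
    dfn = (apm - adj_diff_od) / 2
    return off, dfn

# A hypothetical +6.0 player whose DiffOD says he tilts offensive:
off, dfn = split_apm(6.0, 2.0)
# off = 4.0, dfn = 2.0; the two parts always sum back to the total.
```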

#### The Results

The first thing to note about the results is that they are noisy. Just as each estimate of adjusted plus/minus has a margin of error, so does each estimate of DiffOD. By combining these two estimates to get offensive and defensive adjusted plus/minus we are combining the magnitudes of these errors. Furthermore, the same players that have large standard errors for adjusted plus/minus have large standard errors for DiffOD. Thus in addition to there being a large degree of uncertainty regarding Dwight Howard’s overall contributions, there is also a large degree of uncertainty regarding how to divide up those contributions between offense and defense. These errors would be reduced by looking at a larger sample of data, which I hope to do in the future now that I’ve got the methodology down. The alternate methods suggested by Lior and Cherokee_ACB may also result in smaller standard errors.

All of these tables are for the 2007-08 season and include only players who played at least 388 minutes (the cutoff used by BasketballValue). The position designations are those used by Doug’s Stats (I haven’t altered any of them though some are definitely inaccurate). See here for a spreadsheet with the data for all players.

#### Best and Worst

**Offensive Adjusted Plus/Minus, Top and Bottom Ten:**

The top four are probably the same four that many people would rank as the four best offensive players in the league, in some order. I can’t explain Chris Quinn.

**Defensive Adjusted Plus/Minus, Top and Bottom Ten:**

Ronnie Price is almost definitely a fluke, considering his low minutes and the huge difference between his defensive rating and those of other point guards (which can be seen in the PG defense table below). The Hornets players have some interesting ratings, with West and Stojakovic (see the SF defense table below) looking great defensively while Paul looks awful.

#### Offensive Adjusted Plus/Minus by Position

**POINT GUARDS, sorted by Offensive Adjusted Plus/Minus:**

**SHOOTING GUARDS, sorted by Offensive Adjusted Plus/Minus:**

**SMALL FORWARDS, sorted by Offensive Adjusted Plus/Minus:**

**POWER FORWARDS, sorted by Offensive Adjusted Plus/Minus:**

**CENTERS, sorted by Offensive Adjusted Plus/Minus:**

#### Defensive Adjusted Plus/Minus by Position

**POINT GUARDS, sorted by Defensive Adjusted Plus/Minus:**

**SHOOTING GUARDS, sorted by Defensive Adjusted Plus/Minus:**

**SMALL FORWARDS, sorted by Defensive Adjusted Plus/Minus:**

**POWER FORWARDS, sorted by Defensive Adjusted Plus/Minus:**

**CENTERS, sorted by Defensive Adjusted Plus/Minus:**

#### Positional Averages

Here are the averages for offensive, defensive, and total adjusted plus/minus by position (weighted by possessions played):

| Pos | Off  | Def  | Tot  |
|-----|-----:|-----:|-----:|
| PG  |  0.8 | -2.0 | -1.2 |
| SG  |  0.5 | -0.8 | -0.3 |
| SF  |  0.9 |  0.2 |  1.0 |
| PF  | -0.8 |  1.0 |  0.3 |
| C   | -2.1 |  2.5 |  0.5 |

These orderings are similar to those reported by Dan Rosenbaum in 2005. The high offensive adjusted plus/minus average for small forwards may be an anomaly.

Outstanding contribution.

It is pretty close, but the edge goes to the frontcourt having more positive impact than the backcourt.

Despite talk, based largely on the offensive performance and press of the top PG names, that this is a little man’s / PG league now more than ever, by this measure PGs as a group are having the least positive net impact of any position across the league: second best on offense but last on defense.

Somewhat surprising that SF came out best.

Comment by Mountain — June 4, 2008

It would be interesting to see what the alternative method(s) produce. One better for certain types of players over the other? Would a blend of the values found by the 2 methods have smaller errors than either alone?

My impression is this general +/- approach does a better job of finding the top and bottom players than sorting the middle or finding the magnitude of impact of the best and worst.

Dan Rosenbaum’s net statistical +/- can be calculated and has been broken into offensive and defensive sides, and you’ve indicated a desire to calculate your own. Using either, along with your newly available “pure” offensive and defensive adjusted +/-, it will then be possible to split the offensive and defensive impacts into four boxes: those produced as the lead actor, and those “produced” as a member of a team / 5-man lineup, by subtracting the net individual statistical +/- part from the pure total for offense and defense.

I think the same approach developed here to find offensive / defensive impacts could be used to address each of the 4 factors, constructing something similar to DiffOD for FGs made, turnovers, FT made or frequency and rebounds on offensive and defensive side of action. What do you think?

If this were done, and net statistical +/- were broken down into 8 parts as well, then subtracting the statistical parts from the pure parts would give a 16-aspect portrait of a player’s individual and team impacts. That is something I’ve pointed to as a goal, now within reach of the public.

Look forward to the further advances you are leading.

Comment by Mountain — June 4, 2008

More fuel for the debate on whether or not Chris Paul is a good defender.

Comment by JTapp — June 4, 2008

(Assuming the positional averages are based on the position assigned to each player generally, and are not based on the sum of all like positional assignments in lineups across the league.)

How much are positional averages affected by the accuracy of positional assignment and how much by the use of non-traditional lineups (2 PGs together or no designated C or 2 SFs, etc.)? The two are related and there probably isn’t a definitive answer agreeable to all. But I wonder how well unconventional lineups perform on average, adjusted.

Looking at 05 Rosenbaum defensive study and this list, a few curious cases:

Mark Blount went from one of the worst starting defensive centers to one of the best? Is the change in rank related more to the change in team and team context, or to dramatically better defensive play? His “defensive rating” went the other direction (heavily because of the different teams involved).

Dampier went from one of best to one of worst? (Change in coach, pace, subs)

Jamison from one of the worst to one of the best.(Change in coach, pace)

Battier from 1st to just average? (Change in team, but not in pace or team defensive quality, and even the same sub, Wells, much of the time.) I can accept some slippage, but error may be overstating the change.

Banks from 2nd best to 11th worst.

Probably more cases of agreement than disagreement (I saw a fair number in agreement in my spot check), and there can be several reasons for disagreement of course. But worth a more thorough check.

Your intended study of team-change cases could help. Change of coach and pace may be other useful studies. Change of teammates, and especially of subs, of course has impact too.

Comment by Mountain — June 4, 2008

I have to think more about what the positional averages mean. What does it mean to say that on average, centers have a larger positive defensive impact than point guards? For one thing, does it make sense to say centers typically have a positive defensive impact? If most centers are good defenders, then what does “good” mean? What is it relative to? It can’t mean that centers would defend opposing point guards better than point guards would defend opposing centers.

Maybe the only thing the defensive positional averages are saying is that when bigger positions are subbed in for smaller positions, the defense improves. And the offensive positional averages are saying that when teams play small, they are better offensively.

Generally adjusted plus/minus has a lot of issues with players moving between positions.

Comment by Eli — June 5, 2008
