Recently I took a look at the diminishing returns of rebounds, assists, steals, and blocks. As you may or may not have noticed, one common type of statistic was missing: shooting. Today I’m going to fill in the blanks using the same approach as last time.
If you haven’t read the previous article, the premise is simple. For each lineup in the NBA last year that appeared in at least 400 plays, I project how they will do in each stat using the sum of their individual stats. For example, to predict a lineup’s offensive rebound rate, I simply add the offensive rebound rates of each of the five players in the lineup. I then compare this projection to the actual offensive rebounding rate of the lineup. These steps are followed for each lineup and for each statistic.
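The steps above can be sketched in a few lines of Python. This is a minimal illustration with made-up rates; the actual study pulls individual and lineup rates from play-by-play data:

```python
# Minimal sketch of the projection method, using made-up numbers.
# A lineup's projected offensive rebound rate is simply the sum of its
# five players' individual offensive rebound rates (in percent).

def project_lineup_rate(individual_rates):
    """Sum the five individual rates to get the lineup projection."""
    return sum(individual_rates)

# Hypothetical individual offensive rebound rates for one lineup:
orb_rates = [2.5, 4.0, 6.5, 9.0, 11.0]

projected = project_lineup_rate(orb_rates)  # 33.0
actual = 30.1  # what the lineup actually grabbed (also hypothetical)

# Diminishing returns show up when, across many lineups, the actual
# rates consistently fall short of the projections.
print(projected, actual)
```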
If there are diminishing returns (i.e., in a lineup of five good rebounders, each player ends up stealing a little bit from his teammates), the slope of the regression between the projected rates and the actual rates will be significantly lower than one. In other words, for each percentage point of rebounding rate a player has individually, he will only add a fraction of that to the lineup's total because some of his rebounds will be taken away from teammates.
If this still isn’t clear to you, be sure to check out the old article. Once you’ve done that, this article will make more sense.
Back to shooting. I’ve decided to take a look at the diminishing returns of eight aspects of shot selection/efficiency: three-point shooting percentage, three-point attempt percentage (the percentage of a player’s total attempts that are threes), close (dunks/layups) shooting percentage, close attempt percentage, midrange shooting percentage, midrange attempt percentage, free throw shooting percentage, and free throw attempt percentage.
To project a lineup’s percentage in one of those categories, I can’t simply add up the five individual percentages. For example, a lineup of five 30% three-point shooters is not going to shoot 150% from beyond the arc. Instead, I have to calculate a weighted average for the lineup. Therefore, each player’s three-point shooting percentage is weighted by the amount of threes he took. The same approach can be taken with attempt percentages.
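A minimal sketch of that weighted average, with made-up percentages and attempt counts:

```python
# Sketch of the weighted-average projection for shooting percentages.
# Each player's 3P% is weighted by his number of attempts, so a lineup
# of five 30% shooters projects to 30%, not 150%. Numbers are made up.

def weighted_pct(pcts, attempts):
    """Attempt-weighted average percentage for a lineup."""
    total = sum(attempts)
    return sum(p * a for p, a in zip(pcts, attempts)) / total

three_pcts = [40.0, 35.0, 30.0, 25.0, 20.0]  # each player's 3P%
three_att  = [300, 250, 150, 50, 10]         # each player's 3PA

print(round(weighted_pct(three_pcts, three_att), 1))  # 35.1
```

Note that the high-volume shooters dominate the projection, which is exactly what we want: the lineup's expected percentage reflects who actually takes the shots.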
For some statistics, such as free throw percentage, we shouldn’t expect to see any diminishing returns. After all, adding a great free throw shooter to a lineup shouldn’t make the other players in the lineup shoot worse from the foul line. However, with other stats (especially attempt percentages), diminishing returns seem more possible.
To start, let’s take a look at the diminishing returns of three-point shooting percentage:
Here we see the slope is just about 1. However, the standard error for this slope is 0.21, so the results are pretty inconclusive.
How about three-point attempt percentage?
Again the slope is just about 1. This time, though, the standard error is just 0.04. Therefore, we can say with pretty good certainty that there are no diminishing returns for three-point attempt percentage. In other words, adding a player who likes to shoot threes to your lineup is going to add a proportional amount of three-point attempts to your lineup total.
Up next we have close shooting percentage:
The slope is actually above 1 this time, although it's less than one standard error away from 1. There is definitely no clear evidence of diminishing returns for close shooting percentage. Adding an efficient player around the basket to your lineup will probably not make your other players less efficient around the basket.
Close attempt percentage:
The standard error for this slope is just 0.05, so we may be seeing slight diminishing returns, but not much.
Midrange shooting percentage:
The standard error for this one is pretty large (0.15), but again there are no real signs of diminishing returns.
Midrange attempt percentage:
These results are pretty similar to those of close attempt percentage. The slope is less than 1 and the standard error is pretty small. Again, though, the diminishing returns effect appears to be quite small.
Free throw percentage:
As I mentioned in the beginning of the article, we shouldn’t expect to see diminishing returns on free throw percentage, and we don’t.
Free throw attempt percentage:
Just like the rest of the stats we looked at, we don’t really see a hint of diminishing returns for free throw attempt percentage.
Unlike statistics such as rebounds, assists, steals, and blocks, shooting (in all of its forms) doesn’t seem to have the problem of diminishing returns. A player’s shooting efficiency will have a proportional impact on a lineup’s shooting efficiency, and his shooting tendencies will have a proportional impact on a lineup’s shooting tendencies. There are other ways to attack this question, though, and in the future I plan on doing just that.
One thing many people have wondered is whether or not there are diminishing returns for rebounds. Basically, what that would mean is that not all of a player’s rebounds would otherwise have been taken by the opponent; some would have been collected by teammates. Therefore, starting five league leaders in rebounds would probably be overkill because eventually they’d just steal them from each other. At some point, there are only so many rebounds a team can grab, and some are just bound to end up in the hands of the opponent.
This principle is very important to statisticians who wish to develop player ratings systems. These ratings often assign weights to different statistics (including offensive and defensive rebounds), so knowing that a defensive rebound collected by one player would most likely otherwise have been collected by a teammate makes that stat less “valuable” in terms of producing wins.
To test the effect of diminishing returns of rebounds, I decided to go through the play-by-play data (available at Basketball Geek) and compare each lineup’s projected rebounding rates (the sum of each player’s individual rebound rates for the season) to their actual rebounding rates (what percentage of rebounds that lineup grabbed while it was on the floor). After doing some research, I found out a very similar study was done by Eli Witus (formerly of CountTheBasket.com, currently of the Houston Rockets). Before you proceed with the rest of my article, you should read his. Although my method is slightly different, he provides a great explanation of why it’s useful to do the research this way and he also lists some advantages and disadvantages of this method.
Before I show you the results, I should explain the intricacies of my research and also some of the differences between Eli's study and mine. The individual rebound rates I used were the rates I calculated myself from the play-by-play data. Because both the individual rates and the lineup rates were calculated from the same data, there's less risk of error due to silly things such as differences in calculations or incomplete data. Also, to reduce the effects of small sample sizes from lineups that didn't receive a lot of minutes together, Eli chose to group lineups into bins based on their projected rebound rates. He then regressed each bin's projected rebound rate (a bin being a collection of different lineups with similar projected rebound rates) against its actual rebound rate.
When I was coming up with my idea, I chose to do things a little differently, although the purpose is the same. Instead of grouping the lineups into bins, I selected only the lineups that met a minimum qualification for plays. Only lineups that appeared in at least 400 plays were included in my study, which left me with a sample of 475 lineups. Like Eli, I then regressed the projected rebounding rates against the actual rebounding rates. One final difference between us is that his article was written in February of 2008, so I'm presuming he used data from the 2007-08 season; I'm using data from the 2008-09 season.
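Mechanically, the filter-and-regress step can be sketched in a few lines of Python. All lineup numbers below are fabricated for illustration; the real study regresses 475 qualifying lineups:

```python
# Sketch of the lineup filter and regression. Lineups with fewer than
# 400 plays are dropped, then actual rates are regressed on projected
# rates. All data here is fabricated.
import math

def ols_slope(xs, ys):
    """Ordinary least squares slope and its standard error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    stderr = math.sqrt(resid / (n - 2) / sxx)
    return slope, stderr

# (plays, projected rate, actual rate) for hypothetical lineups:
lineups = [(650, 28.0, 26.1), (410, 31.5, 28.9), (390, 25.0, 24.2),
           (1200, 33.0, 30.4), (800, 27.0, 25.5), (505, 30.0, 27.8)]

qualified = [(p, a) for plays, p, a in lineups if plays >= 400]
slope, se = ols_slope([p for p, _ in qualified], [a for _, a in qualified])
# A slope well below 1 (relative to its standard error) would suggest
# diminishing returns.
print(round(slope, 2), round(se, 2))
```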
Offensive Rebound Rate
The graph for Offensive Rebound Rate is below:
The key to understanding this graph is the slope of the line. Here, it is 0.7462 (close to the 0.77 Eli got). If there were no diminishing returns for offensive rebounds, the slope would be 1, meaning that for each additional rebound a player could offer to his lineup, he would actually add one rebound to the lineup's total. A slope less than one (such as in this case) means that each additional offensive rebound by the player adds only about 0.75 rebounds to the lineup's total, because some of those rebounds would have been grabbed by his teammates anyway. The slope here is still pretty high, though, indicating that the diminishing returns effect for offensive rebounds isn't too strong.
Defensive Rebound Rate
In his study, Eli found that the diminishing returns effect was much stronger for defensive rebounds. Can I replicate his results? Below is the graph for defensive rebounds:
Eli found a slope of 0.29. Mine was close, but slightly higher at 0.3331. Regardless of the minor difference, we both can come to the same conclusion: there is a much stronger diminishing returns effect at play with defensive rebounds than there is with offensive rebounds. While each offensive rebound adds 0.75 to the lineup’s total, each defensive rebound only adds 0.33, indicating that many defensive rebounds are taken away from teammates. Of course, individual cases can vary.
These results help explain why a lot of player rating systems make defensive rebounds “worth” less than offensive rebounds. Eli has a good explanation of it at the end of the article here. For example, in his PER system, John Hollinger assigns offensive rebounds a value more than double the value of defensive rebounds. This is partly due to the diminishing returns effect we found here today and originally in Eli’s work. As it turns out, my numbers indicate that offensive rebounds are in fact worth a little more than double the value of defensive boards. So hats off to Hollinger and his many contemporaries who have managed to weight rebounds appropriately.
I could stop here, but I’d like to take this research a little further and see what other insights we can come up with. First, I’d like to break down the data by location (home and away).
One thing to note is that the projected rebounding rates for the lineups are based on overall individual ratings, not just home games. If rebounding usually favored home teams, the projected lineup rebounding rates would usually underestimate the actual rates here. However, since this would presumably affect all lineups equally, we can still examine the effect of diminishing returns.
With that being said, how does the home data compare to the overall data? For offensive rebounds, the slope is flatter, indicating a stronger effect of diminishing returns. However, for defensive rebounds, the slope is slightly higher, indicating a lesser effect. The differences are minor, though.
We can also take a look at the away data:
As you would expect given what we now know about the home data, the effect of diminishing returns appears to be much weaker on the road for offensive rebounds. In fact, the slope is getting close to 1, indicating that there isn't much in the way of diminishing returns for this type of rebound. Intuitively, this makes sense. If teams rebound the ball better at home, there are fewer offensive rebound opportunities for the visiting team. Therefore, it is more likely that an offensive rebound by a visiting player would otherwise have been grabbed by the opponent rather than by one of his teammates, which in turn makes good offensive rebounders more valuable on the road. The same pattern doesn't hold for defensive rebounds, though. In both cases, the difference isn't gigantic, so we should be hesitant to draw any serious conclusions.
The one difference that is large and consistent is the difference in slopes between offensive and defensive rebounds, no matter the location. Confirming what Eli found in his original studies, this data says that the effect of diminishing returns is much stronger on defensive rebounds than it is on offensive ones. Therefore, offensive rebounding is a more “valuable” skill in terms of how you rate players, and some of the best player rating systems do take this into consideration.
So far, this whole article has been about the diminishing returns of rebounds. However, we can also use the same lineup-based approach to look at other statistics. Today I’ll also explore the diminishing returns of blocks, steals, and assists. Eli already used his method to take a crack at the usage vs. efficiency debate, and I recommend you read that article for some fascinating insight.
Block Rate, for a lineup, is defined as the percentage of shots by the opposing team that are blocked by one of the players in the lineup.
Blocks are an interesting statistic to examine. After all, there are only so many block opportunities around the basket and occasionally on the perimeter. When you also consider that teams often funnel players into the waiting arms of a dominant shot-blocker, it seems as though the diminishing returns effect for blocks should be relatively strong. That is, if you add a shot blocker who normally blocks 4% of the opposing team's shots to your lineup, you shouldn't expect to block nearly that many more shots as a team because of diminishing returns. To see if this is true, I used the same methodology that I did for rebounding and came up with this graph:
As it turns out, the slope is at 0.6015. This puts Block Rate somewhere in the middle between Offensive Rebounds and Defensive Rebounds. A lineup full of good shot blockers will almost certainly block more shots than a weaker lineup, but the difference may not be as much as you might think due to effects of diminishing returns.
Up next we have Steal Rate. For an individual, it is defined as the percentage of opponent possessions that end with the given player stealing the ball. For a lineup, then, it is the percentage of opponent possessions that end with a steal by anyone in that lineup. The graph for Steal Rate is below:
Here, we see the slope is nearly 1. This indicates that there is practically no diminishing returns effect on steals. If you add a player 2% better than average in terms of steals to your average lineup, you should expect to steal the ball almost 2% more than you currently do. Another way to put it is that usually, if a given player steals the ball, it’s not likely that someone else would have stolen the ball if he failed. Of course, like with every graph so far, the R^2 is still very low. This means that we can’t really predict how many steals a lineup will get simply by adding the Steal Rates of all of its players.
Finally, we have Assist Rate. For an individual, it is the percentage of his teammates' made field goals that he assisted on. For a lineup, it is the percentage of made field goals that were set up by an assist. The graph is below:
Of any graph presented on this page so far, this one has by far the lowest slope. Normally this would indicate that there is a huge diminishing returns effect for assists. However, I’m not sold on this explanation just yet for various reasons, so for now I will just present the data as is.
I discussed a number of different issues today, so I think it’s good to recap what I’ve presented. First, using a method similar to the one Eli Witus used at CountTheBasket.com, I found that there is a large diminishing returns effect for defensive rebounds that is significantly larger than the effect for offensive rebounds. This confirms the common belief that offensive rebounds are “worth” more than defensive ones. When we split the data into home and away, it appears that individual offensive rebounding skill is particularly important on the road, indicated by a very high slope on the graph. Finally, I took a look at the diminishing returns of a few other advanced statistics and found the strongest effect on assists and a weaker but still significant effect on blocks.
If you have suggestions or comments about my work, please e-mail me at email@example.com. And again, much credit must go to Eli Witus, who originally thought of these ideas well before I did.
To wrap up my series of articles on the impacts of players on their teammates’ three-point shooting, I thought I’d take a look at perhaps the most important aspect: can we use the available data to evaluate players and predict the future? Predicting the interactions of players is nearly impossible, but how close can we get to modeling certain aspects of these interactions?
I’m going to take a few different approaches today. The first approach is to see if we can predict a player’s impacts on his teammates’ three-point shooting based on other advanced statistics. For example, if we know an individual’s Player Efficiency Rating, can we estimate what kind of impact he’s having on his teammates? I ran a series of simple linear regressions between 12 different advanced stats and the impacts on three-point shooting. The results are displayed below:
The correlations can be interpreted as follows: if Player A scores one more point per 40 minutes than Player B, then Player A increases his teammates' three-point attempt percentage (how often they shoot threes) by 0.16% more than Player B. Because increases in three-point attempts and three-point percentage are generally good things, it makes sense that most of the correlations in the table are positive. Players who perform better in these advanced statistics have a more positive impact on their teammates, with the exception of Rebounding Rate. For the statistically inclined, all of these correlations are significant at the 0.01 level, with the exception of Rebounding Rate.
So far, so good. The numbers seem to agree with common sense: better players help their teammates more. But how much can these statistics really tell us? In other words, if we only know Chris Paul's Assist Rate, can we predict how much influence he has on his teammates' three-point shooting? To answer these questions, I turned to the R-squared values (http://en.wikipedia.org/wiki/R-squared) of each correlation. R-squared values range from 0 to 1, and they essentially tell us how much of an outcome (in this case, impact on three-point shooting) is explained by an independent variable (in this case, PER or Assist Rate or any of the other stats). The results are in the table below:
With the exception of Minutes Per Game (which may just be a reflection of overall ability), the R-squared values are all very low. In a hypothetical and easier world, they’d all be higher. Unfortunately this is the real world, and basketball is much too complicated for us to be able to predict complex player interactions based on a simple stat or two.
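For readers who want to reproduce this kind of number, R-squared falls straight out of a simple linear fit. A pure-Python sketch with made-up (stat, impact) pairs:

```python
# Sketch: computing R-squared for a simple linear regression.
# The (x, y) pairs below are fabricated for illustration.

def r_squared(xs, ys):
    """R-squared of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

xs = [10, 14, 18, 22, 26]        # e.g. points per 40 minutes
ys = [0.2, 1.1, 0.7, 2.0, 1.6]   # e.g. impact on teammates' 3PA%
print(round(r_squared(xs, ys), 2))  # 0.68
```

A value near 1 would mean the stat explains most of the variation in impact; the values in the table above are mostly far below that.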
Before I go any further, let’s recap what we know so far. There is a significant correlation between most advanced statistics and interaction effects on three-point shooting, so we know that these things aren’t random. But these stats explain only a very tiny part of the story, so we know that interactions are very complex.
The next step I took was to attempt to develop a model that would predict a player’s impacts on three-point shooting using a combination of the different statistics. After playing with the numbers, I was able to achieve an R-squared value of 0.26 for Impact 3PA and 0.36 for Impact 3PCT. These numbers were boosted to 0.45 and 0.51, respectively, if we took the other impact number into account (using Impact 3PCT in the Impact 3PA regression and vice versa). But this defeats the purpose of the study, so we’ll ignore those most recent numbers.
What does this next step tell us? Within the limits of linear regression, we can only explain about 26% or 36% of a player’s impact on his teammates’ three-point shooting using various available advanced stats. In the real world, those numbers aren’t horrific, but they’re far from being the keys we need to truly figure out the game of basketball.
Finally, let's switch gears and examine how consistent these interaction effects are from one year to the next. After all, if they are totally random, there should be no correlation. Using the numbers from 07-08 and 08-09, I ran regressions for Impact 3PA and Impact 3PCT. Both regressions resulted in statistically significant, positive correlations. That's good. But the R-squared values for the regressions were 0.05 and 0.1, respectively, which is not so good. In other words, knowing how a player affected his teammates' three-point shooting last year will only tell you a little bit about how he's going to do this year.
Another interesting thing to look at is the impacts for players who switched teams. If players have similar impacts no matter what team they're on, we may be on to something. When we limit the sample to these players, we again get statistically significant but not particularly informative results: both regressions are significant at the 0.02 level but produce R-squared values under 0.1.
No matter which way you slice it, you get the same idea: good players make the players around them shoot threes more often and more efficiently, but that’s all we can say for sure. We can quantify what’s already happened, but we can’t predict the future.
If you’re still reading now, you’re undoubtedly more interested in this stuff than the average fan, and you may have some suggestions for me. If so, send me an e-mail at firstname.lastname@example.org.
Just as I did for last season, I recently calculated the numbers for the 2007-08 season. You can find them here:
The following is part of a weekly series I will be doing at the Orlando Magic blog, Third Quarter Collapse.
Last week I took a look at how Dwight Howard impacts the effectiveness of close attempts for opponents. Specifically, we saw that opponents take it to the basket less often when Howard is in the game, but they are slightly more efficient in making those attempts and getting to the free throw line.
As promised, today I’m going to show which opposing players are the most affected by Howard. To do this, I again turned to the play-by-play data. First, I narrowed the list down to players who attempted at least 10 close shots (dunks or layups) against Howard last season. For each player, I calculated their field goal percentage in close shots and close attempt rate (amount of close attempts divided by total attempts), as well as their free throws per field goal attempts rate. All of these were calculated when they weren’t facing Dwight Howard. I then calculated the same three statistics when Howard was in the game, and compared the differences. The results are available in the spreadsheet below:
There are a couple of important things to note. First, we have the weighted averages, located at the bottom of the sheet. Unlike the last study, in which we had mixed results, Howard has a decidedly negative impact (a good thing for the Magic) in each category for these players. He has the strongest impact on their free throw rate, but he also reduces their close field goal percentage by a decent amount as well.
We still need to separate these results so that they’re showing the impact of just Dwight Howard and not the entire Orlando Magic team. Therefore, for each player, I looked at how their stats changed against the entire Magic team. These players increased their close attempt percentage by 0.2%, but saw their field goal percentage go down 2.6% and their FT/FGA go down by 3.3%. What additional benefit did Dwight Howard (or at least the lineups featuring Dwight Howard) provide? He lowered the close attempt percentage by an additional 1.2%, the close field goal percentage also by an additional 1.2%, and the free throw rate by an additional 4.3%.
I’m not exactly sure why the results are different from last week. The last study had the risk of being too broad, but this study has the risk of being too narrow and including small sample sizes.
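The per-player on/off comparison described above is mechanically simple. Here is a sketch with fabricated attempt counts and placeholder player names:

```python
# Sketch of the on/off split: for each opposing player, compare close
# FG% with Howard on the floor vs. off it, after a minimum-attempts
# filter. All counts below are fabricated.

def close_fg_pct(makes, attempts):
    """Close-shot field goal percentage; 0 if no attempts."""
    return 100.0 * makes / attempts if attempts else 0.0

# (close makes vs. Howard, close attempts vs. Howard,
#  close makes otherwise, close attempts otherwise)
players = {
    "Player A": (6, 14, 120, 190),
    "Player B": (9, 12, 95, 160),
}

for name, (m_on, a_on, m_off, a_off) in players.items():
    if a_on < 10:  # minimum-attempts filter from the study
        continue
    diff = close_fg_pct(m_on, a_on) - close_fg_pct(m_off, a_off)
    print(f"{name}: {diff:+.1f}% close FG% when facing Howard")
```

The same loop, run over close attempt rate and FT/FGA instead of FG%, produces the other two columns of the spreadsheet.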
With that being said, just for fun, let’s look at which players were “intimidated” the most by Dwight Howard last year. Among the players that qualified, LeBron James shied away from close attempts more than any other player when Howard was in the game (of course, LeBron still takes it to the hole more than most players). Dwyane Wade is not too far down the list at #5, so perhaps these former Olympic teammates of Howard showed some respect.
Despite challenging him less, those two players still were more efficient around the rim when Howard was in the game. The same cannot be said for Bobcats Boris Diaw and Gerald Wallace, who saw their field goal percentages plummet when they took the ball inside against Howard. Finally, let’s discuss free throws. A pair of big men, Anderson Varejao and Zach Randolph, saw their free throw rates decline the most when opposing Howard.
Who wasn’t intimidated by Howard? An unlikely pair, Rashad McCants and Russell Westbrook, appeared to love a challenge and attempted considerably more dunks and layups than normal when Howard was in the game. They didn’t experience much success, though. Marquis Daniels became much more efficient at the rim against Howard (as did James, T.J. Ford, and Wade), while Brandon Roy was the most adept at increasing his fouled rate versus Howard. For the record, Wade had the most total close attempts versus Howard during the season, followed by David Lee.
Next week I’ll take a look at something that’s been hotly debated: Hedo Turkoglu vs. Vince Carter.
In my last look at this subject, I calculated each player’s impact on his teammates’ three-point shooting tendencies (for a description of my methodology, check the article). We often believe certain players create open outside looks for others, and the numbers seem to confirm that.
There was one minor flaw in my calculations, though. Today I’m releasing an updated version of my numbers with some minor improvements. First, only players with 50 three-point attempts or more can have an impact on a teammate’s percentages. Secondly, to account for potentially large discrepancies between on-court and off-court attempts, the new weighting system takes into account the standard deviation of the difference between a player’s on-court and off-court numbers.
With the update, among players not named Aaron Gray (I really don’t know what his deal is), Dwyane Wade is the new leader in impact on teammates’ three-point attempt percentage. Jason Terry is now second, followed by LeBron James in third and Chris Paul in fourth.
You can find all of the numbers here:
There are a few more things I want to do with this data, because stats such as this one may be useful in telling us how players interact with each other. If we can accurately measure and predict player interactions statistically, we’ll know much more about the game of basketball.
Dwight Howard is a defensive presence in the middle, that’s for sure. His size and athleticism make him quite the deterrent for offensive players trying to take it to the rim. But how big of an impact is he actually having? Does he make opponents less efficient around the rim, or does he simply scare them from attempting close shots in the first place?
When I’m trying to answer questions such as these, the first thing I turn to is the play-by-play data (specifically, the play-by-play data available at Basketball Geek). I will use this data to determine three things:
With those three calculated differences, we can start to get an idea of how good Howard is at deterring shots around the basket. The data is presented in the graph below:
Here we see two negatives and one positive. When Howard is in the game, opponents get to the free throw line more often and, surprisingly, convert more of their close attempts. On the other hand, they tend to attempt fewer layups and dunks. This is a good thing for the Magic because of how effective those shots are.
For the sake of comparison, here are two other great interior defenders:
Both players have results opposite of Howard’s. Duncan and Garnett limit their opponents’ ability to get to the line and also make them less effective around the basket, but are challenged more often than Howard.
To be honest, I’m surprised that Howard looks as ordinary as he does according to the numbers. I expected him to have a dramatic impact on all categories, but that doesn’t appear to be the case. One thing to keep in mind is that these statistics don’t take into account the level of teammates, so if Howard is often paired with a weaker interior defender such as Rashard Lewis he may be underrated.
We can do much more with these numbers, though, and next time I’ll do that. Specifically, I’ll take a look at which players are affected the most by Howard’s presence.
One thing I should have done in my study was take teammates more into account. I took a look at the top subs of some of the players, which hinted at the issue, but I didn't delve into it further. A player's impact on his teammates' three-pointers could be deceiving if he tends to be on the floor with certain teammates frequently. For example, I pointed out that one of Dwyane Wade's top subs was Daequan Cook, a long-range bomber. This makes Wade's impact on team three-point shooting look smaller than it actually is. What about the players who often play next to long-range shooters? Dwight Howard plays a lot of minutes with Rashard Lewis, so it makes sense that the Magic shoot a lot of threes when Howard is on the court (but not necessarily because of Howard).
To get around this, you have to look at it on a player-by-player basis. Below I have tables for four of the players in the original study. Each table includes the superstar’s impact on the three-point attempt frequency and efficiency for every player on his team. At the bottom, I calculate a weighted average.
When we use this approach, the results tell us something different. Dwight Howard still has a large impact, but it’s no longer the biggest of the bunch. On average, Dwyane Wade increased each Heat player’s three-point attempt percentage by 11 percent. Even Steve Nash, who I originally said had no effect, creates an average increase of 4.1% for his teammates. Additionally, Nash makes his teammates shoot three-pointers more efficiently, raising their 3PT% by 4.3%.
This new data may change the specifics, but it further confirms the original idea: superstars create good three-point looks for their teammates.
A common approach for NBA teams is to surround their superstars and offensive playmakers with efficient shooters who are more than willing to hit the open three-pointer when given a chance. The playmakers draw much of the defense’s attention and command double and triple teams. Therefore, it is prudent to collect efficient shooters around them.
Do the numbers support this way of thinking? Do teammates of superstars take more threes when the superstar is on the floor? To answer these questions, I turned to the play-by-play data available at Basketball Geek. I picked 10 superstars and broke down their team’s three-point shooting data to see how often the team shoots three-pointers when the player is on the court versus how many it shoots when the player is off the court. The superstar’s own three-point shooting was not counted. To measure three-point frequency, I calculated the total number of three-point attempts divided by the total number of field goal attempts.
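The on/off frequency split described above can be sketched from event-level data. This toy example uses a fabricated event list; the real study runs over the full play-by-play logs:

```python
# Sketch of the on/off three-point frequency calculation. Each event is
# (shooter, is_three, star_on_court); the star's own attempts are
# excluded, per the methodology. The event list is fabricated.

events = [
    ("teammate", True,  True), ("teammate", False, True),
    ("star",     False, True), ("teammate", True,  True),
    ("teammate", False, False), ("teammate", False, False),
    ("teammate", True,  False), ("teammate", False, True),
]

def three_freq(events, on_court):
    """Teammates' 3PA as a percentage of their FGA, split by on/off."""
    shots = [e for e in events
             if e[0] != "star" and e[2] == on_court]
    threes = sum(1 for e in shots if e[1])
    return 100.0 * threes / len(shots)

print(round(three_freq(events, True), 1))   # teammates' 3PA% with star on
print(round(three_freq(events, False), 1))  # teammates' 3PA% with star off
```

The gap between the two printed numbers is exactly the on/off effect charted for each superstar.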
Let’s take a look at the chart:
As you can see, most of the players increase the three-point frequency of their teammates while they are on the court. The most extreme case is Dwight Howard, who increases the Magic's three-point attempt percentage from 32% when he's off the court to a staggering 43% when he's on the court. In other words, when Howard is on the floor, 43% of his teammates' field goal attempts are three-pointers. Big men in general tended to have the largest effects on three-point frequency, supporting the common strategy of having a dominant post player who demands double teams and can kick the ball out for open threes. The Magic, in particular, killed teams with this strategy.
Two of the three point guards I looked at had similar effects as the big men, while swingmen were a mixed bag.
Why do some superstars not increase the three-point frequency of their teammates? I had a hypothesis about this: when those particular players subbed out of the game, three-point shooters were replacing them. This, of course, would make the three-point frequency when those superstars were off the court deceptively high.
To test my hypothesis, I went through the play-by-play data for the four superstars who did not increase three-point frequency: Dwyane Wade, Kobe Bryant, Brandon Roy, and Steve Nash. Which players were substituting in for them? The following tables report the top subs for each player and those subs’ three-point attempt percentages:
Except for Steve Nash, it appears my hypothesis was correct. Dwyane Wade and Kobe Bryant, in particular, had subs who can be considered three-point specialists. If these superstars had subs who were equally fond of two-pointers, I imagine their impact on team three-point frequency would be similar to that of the other superstars I looked at.
So what’s the deal with Nash? His subs weren’t particularly in love with three-pointers, yet Nash did not increase team three-point frequency. I think two things could be behind this. For one, Nash is not a typical superstar who overwhelms his opponents with his physical abilities. He may draw the attention of defenders, but he’s certainly not demanding many double teams. Second, Nash may just be really good at setting up open looks at the basket, as I imagine Amare Stoudemire and Shawn Marion can attest. In the end, if one player doesn’t fit the pattern of the rest of the superstars, I’m not surprised it’s Steve Nash.
It’s nice when the statistics confirm the common dogma among NBA teams. Teams collect three-point shooters to surround their superstars, and the strategy appears to be valid: when the superstar is off the court, those same shooters don’t shoot nearly as many threes. The results of this study should serve as further encouragement for NBA front offices to keep acquiring efficient three-point shooting role players when they have a superstar on the roster.
If you’ve been reading Basketball-Statistics.com over the last few weeks, you know that I’ve been examining the individual shot selection of a number of the game’s superstars using the play-by-play data at BasketballValue. So far, I’ve covered LeBron James, Dwyane Wade, and Kobe Bryant. How different will the results be for a big man such as Dwight Howard? Let’s take a look:
There are two changes from last time: three-pointers are not included (for obvious reasons), and as we will see later, I included the efficiency of trips to the free throw line to go along with the efficiencies of shots from the field.
With Howard, we can see that he generally favors dunks/layups (not a bad choice). The frequency of those attempts dips a bit in the fourth quarter but otherwise stays about the same. Midrange/post shots (presumably mostly post) start out as frequent as close attempts but decrease considerably as the game goes on. Where are those shots going? To the free throw line. In the first quarter, Howard ends up at the free throw line on only 23% of his possessions, but by the fourth quarter that number has nearly doubled; in fact, that is how he does most of his damage late in the game. Are these last two trends smart decisions by Howard, or do they work in the opposing team’s favor? Let’s take a look at the efficiencies of the three shot types:
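The quarter-by-quarter breakdown behind these charts can be sketched as a simple tally. The possession log and the shot-type labels (“close”, “midrange”, “ft_trip”) are hypothetical stand-ins for whatever the real play-by-play encodes.

```python
from collections import defaultdict

def shot_mix_by_quarter(possessions):
    """possessions: list of (quarter, shot_type) pairs for one player,
    where shot_type is a hypothetical label such as 'close', 'midrange',
    or 'ft_trip'. Returns {quarter: {shot_type: share}} -- each type's
    share of that quarter's possessions."""
    counts = defaultdict(lambda: defaultdict(int))
    for quarter, shot_type in possessions:
        counts[quarter][shot_type] += 1
    return {q: {t: n / sum(types.values()) for t, n in types.items()}
            for q, types in counts.items()}

# Hypothetical possession log, purely for illustration:
log = [(1, "close"), (1, "midrange"), (1, "midrange"),
       (4, "ft_trip"), (4, "close")]
mix = shot_mix_by_quarter(log)
# mix[1]["midrange"] is 2/3; mix[4]["ft_trip"] is 0.5
```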
First of all, his shooting percentages do not change much as the game goes on. There are slight changes in each shot type, but none of them appear significant. However, this graph does confirm one thing: Howard is much more efficient getting to the line (despite his poor free throw percentage) than he is taking shots from the midrange/post. The latter is an area in which he continues to improve, but, as with most players, it is still his least efficient option by far.
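Why a trip to the line beats a midrange attempt even for a poor free throw shooter comes down to expected points. Here is one illustrative way to compute it; the article does not spell out its exact efficiency formula, and the percentages below are hypothetical, Howard-like numbers.

```python
def expected_points(close_pct, mid_pct, ft_pct, fts_per_trip=2):
    """Expected points per close attempt, per midrange attempt, and per
    trip to the line (assuming two free throws per trip). This is one
    illustrative definition, not the article's exact method."""
    return {"close": 2 * close_pct,
            "midrange": 2 * mid_pct,
            "ft_trip": fts_per_trip * ft_pct}

# Hypothetical inputs: 65% on close shots, 40% midrange, 59% from the line.
pts = expected_points(0.65, 0.40, 0.59)
# close -> 1.30, midrange -> 0.80, ft_trip -> 1.18: even at 59% from the
# line, a trip to the stripe is worth far more than a midrange attempt.
```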
Up next, I’ll take a look at another superstar big man and see how he stacks up to Howard.