Well AI, DJ and Rubio are all elite defenders at their positions. Wins achieved also benefit players who log a lot of minutes: the more you play, the more you help your team win. Iggy is integral to the Warriors defense. This was made evident when the Warriors struggled on D with him injured. Rubio is one of the reasons the T-Wolves are not putrid defensively, but Rubio can't make up for the Wolves' horrid rim protection. He can only help at the perimeter. DJ is averaging a double-double and is leading the league in rebounds per game and FG%. Not exactly a scrub. 3rd in blocks too, I believe. Overall, I am not saying any of those guys are top-20 players, but this stat really does make people see how unappreciated some players can be. Stats like PER barely measure defense and overvalue high-volume scorers.
The original goal of xRAPM was to predict future NBA results (scores) based on scores from previous seasons and/or contests. In fact, many NBA statisticians still believe that xRAPM is the best predictor of future outcomes (scores) when specific lineups are set and known beforehand. The excerpt below shows the xRAPM formula as developed by Jeremias Engelmann, the guru who created this metric. You can find the entire backstory here: http://apbr.org/metrics/viewtopic.php?f=2&t=8025&start=30#p13830 xRAPM = 0.85*(0.65*last_years_RAPM + 0.35*last_years_box_rating) The first term uses the previous season's regularized adjusted plus/minus (RAPM) as the basis. The second term is not technically based on adjusted plus/minus at all; rather, it is calculated from the previous season's box score, which is computed as a weighted average of twenty different individual player statistics as shown below. Finally, the two terms are added together and multiplied by 0.85. The 0.85 is a constant that Jeremias found through computation and iteration, as it was shown to best reproduce the pure data, where "pure data" is a comprehensive collection of actual NBA results (scores) from previous seasons. The box-score term is a weighted average of the twenty variables shown below and is scaled to a player's height. By the way, "exp" (below) = experience. I copied and pasted Jeremias' explanation (quoted) below (also taken from the website shown above): "Here are the weights I found for offense and defense.
Everything's scaled to the influence of height on offense.

Stat     Offense          Defense
height    1                0.8931422196
exp      -0.065287672      0.0537292936
GS       -0.0185391812     0.1017509227
MP       -0.04221383       0.1703430321
FGM       1.0019454696    -0.3614711184
FGA      -0.7713214889    -0.0802363173
FG%       0.2848825566    -0.0243612852
3FG       0.2288358653    -0.0252052757
3FGA      0.0349326238    -0.0660744428
3-%       0.3220324436     0.1719594885
FT        0.3366949904     0.1357251166
FTA       0.0736131376     0.0098560925
FT%      -0.0252195806    -0.8179984551
OREB      0.296998827     -0.0366778245
DREB     -0.1021371556     0.4586301605
ASS       0.4790862015    -0.0081967213
ST        0.0581352101     0.3665636712
BLO       0.0205847853     0.2253025491
TO       -0.4038566074    -0.0454896575
FOULS    -0.2794180757     0.016693846

I divided the boxscore totals of each player first by (team pace adjusted) minutes. League average is subtracted from a player's per-minute totals, and the result is weighed by the listed weight. So if a player had more than the average amount of statistic X, which also had a positive weight, that's good for his rating. Having less than the average amount of a statistic with a positive weight is bad for his rating, and having less than the average amount of a statistic with a negative weight is good for his rating. When I use this new BoxScore metric to build a better prior-informed RAPM, tests seem to suggest that I should use 0.85*(0.65*last_years_RAPM+0.35*last_years_box_rating) as a prior to compute next season's RAPM (which then again gets combined with the BoxScore rating to form a prior for the next season, etc)." J.E. (Jeremias Engelmann)
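Engelmann's two-step recipe (per-minute, league-adjusted box totals weighted and summed, then blended with last year's RAPM as a prior) can be sketched like this. The function names and the sample per-minute numbers are my own illustration, not from his post; only the weights and the 0.85/0.65/0.35 constants come from the quote.

```python
# Sketch of Engelmann's box rating and xRAPM prior as described above.
# The weights and constants are from his post; everything else here
# (function names, sample stat lines) is an illustrative assumption.

# A small subset of the published offensive weights:
OFF_WEIGHTS = {"FGM": 1.0019454696, "FGA": -0.7713214889, "TO": -0.4038566074}

def box_rating(per_min, league_avg, weights):
    """Weighted sum of a player's deviation from league-average per-minute totals."""
    return sum(w * (per_min[s] - league_avg[s]) for s, w in weights.items())

def xrapm_prior(last_rapm, last_box):
    """Prior for next season's RAPM: 0.85*(0.65*RAPM + 0.35*box rating)."""
    return 0.85 * (0.65 * last_rapm + 0.35 * last_box)

# Hypothetical per-(pace-adjusted)-minute lines for a volume shooter vs league:
player = {"FGM": 0.20, "FGA": 0.45, "TO": 0.06}
league = {"FGM": 0.15, "FGA": 0.35, "TO": 0.05}
rating = box_rating(player, league, OFF_WEIGHTS)  # extra FGA/TO outweigh the makes
prior = xrapm_prior(2.0, rating)
```

Note how the recursion works in practice: each season's computed RAPM feeds back into the prior for the next season, so the box-score component never fully washes out.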
Jordan leads the league in rebounding by 1.1 per game, almost 9% more per game than the #2 guy. He's 3rd in blocks, 1st in FG%, gets an awful lot of easy buckets, and averages a double-double.
And when he's in the game, opposing offenses tend to score a lot more efficiently because he has bad defensive awareness. I no longer have League Pass, so maybe he has improved this year. Terrence Jones is in the top 20 in FG% and blocks and has the same defensive awareness problems. Jordan is just a bigger version of Jones who can rebound.
One thing I never liked when analyzing Rondo is that his numbers were inflated by playing next to Garnett, Pierce and Ray Allen. That, and he intentionally padded his stats by playing garbage minutes (especially when chasing the assist record). I could rack up a ton of assists too if I was throwing the ball to Ray Allen. Also, I've felt for some time that his defense was a bit over-rated. Not that he isn't a good on-ball defender; he is, and he's excellent at patrolling the passing lanes. It's just that I felt like his front line erased a lot of the high-risk defensive plays that he could never get away with on many other teams. https://www.youtube.com/watch?v=wGZkwGw7rfo I guess we'll see as the season progresses.
I do agree that xRAPM has a very good track record in terms of predicting team scores (probably one of the best out there). I also think it's not a very good tool for assessing players on an individual level. One major issue I've had is that it's not positionally weighted. It tends to overemphasize defensive rebounding and player size when taken in an individual context. While this works when applied to a team as a unit, I believe it undermines its effectiveness in judging individual player value. And the weightings for age and experience are pretty much black-box stuff; I've yet to see how he derives them. While it may work "on the whole", whether that is biases cancelling each other out or the biases actually being correct is up for debate. For example, according to the weightings it's better to just let the shot clock run out and eat a turnover than to take a contested jumper. That said, I think xRAPM is very good if used selectively, i.e., to predict future 5-on-5 lineup performance, but I wouldn't try to use it to judge individual player value across roles, let alone across rosters. For example, this year's xRAPM has James Harden rated below Chris Anderson, Nick Collison, Robin Lopez and Channing Frye. http://stats-for-the-nba.appspot.com/ratings/xRAPM.html xRAPM has its place, but definitely not within the realm of player-vs-player comparison.
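The turnover-vs-contested-jumper oddity can be checked directly against the offensive weights Engelmann posted upthread: a missed field-goal attempt carries the full FGA penalty with no FGM credit, while a turnover carries only the TO penalty. A quick back-of-the-envelope check (the 30% make-rate scenario is my own hypothetical, and this ignores the smaller FG% and three-point terms):

```python
# Offensive weights from Engelmann's post (FGA, FGM, TO only; the smaller
# FG% and 3-pt terms are ignored in this back-of-the-envelope check).
FGA_W, FGM_W, TO_W = -0.7713214889, 1.0019454696, -0.4038566074

def shot_impact(p):
    """Expected rating impact of one jumper that goes in with probability p."""
    return FGA_W + p * FGM_W

turnover_impact = TO_W            # about -0.40
contested_30 = shot_impact(0.30)  # about -0.47, i.e. worse than the turnover

# Make rate at which a shot breaks even with a turnover:
break_even = (TO_W - FGA_W) / FGM_W  # about 0.367
```

So by these weights, any shot expected to go in less than roughly 37% of the time rates worse than simply committing a turnover, which is exactly the oddity being described.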
PER is still the best stat to judge individual player ability. RPM can reveal whether the player fits the system better.
I wouldn't go that far. PER is okay as a quick eye test for measuring offensive contribution, but it really doesn't factor in defense, and it has a formulaic flaw that rewards volume shooters regardless of efficiency. For example, if Lin wanted to raise his PER, all he would need to do is take 10 more shots. The more shots you take, the more your PER goes up, regardless of whether the shots are good shots or not, as long as your shooting percentage isn't downright awful. Yes, your PER goes up more for efficient than for inefficient shooting, but the break-even point on shot efficiency is so low that you'll look great if you just chuck all night (a la JR Smith) and worse if you take few shots at high efficiency. Now, if you take PER and use it as part of a broad spectrum of metrics, then you can point to them as a whole and say something that might be meaningful. Using PER alone to compare any two players is a dodgy practice at best.
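The low break-even criticism can be illustrated with a toy linear-weights model. This is NOT the actual PER formula (which is far more involved); the +2.0/-0.87 weights are hypothetical numbers chosen so the break-even lands near the commonly cited ~30% mark for two-point shots:

```python
# Toy linear-weights model (not the real PER formula) illustrating why a
# low break-even point rewards raw shot volume.
MAKE_VALUE = 2.0   # points credited for a made two-pointer
MISS_COST = 0.87   # hypothetical miss penalty; break-even = 0.87/2.87, about 30%

def extra_shot_value(fga, fg_pct):
    """Net rating change from taking `fga` additional shots at `fg_pct`."""
    makes = fga * fg_pct
    misses = fga - makes
    return makes * MAKE_VALUE - misses * MISS_COST

# A 35% chucker still gains by firing ten more shots:
chucker_gain = extra_shot_value(10, 0.35)  # positive despite poor efficiency
```

Any rating whose break-even sits well below league-average efficiency will behave this way: as long as you clear roughly 30%, more shots means a higher number, even when those shots hurt your team.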
But JR Smith is individually a talented player. The fact that he takes so many shots, and that his team allows him to take so many shots, reflects this. I think this is what haoafu meant. Individual ability does not necessarily translate to helping your team win, however. It's true that one could artificially boost their PER by chucking up a lot of shots, and this is a popular criticism from the "Wins Produced" crowd. But this argument only goes so far. In practice, players will not unconsciously chuck up 30 shots a game while shooting 35% from the field. Even if a player literally had no self-awareness about how poorly he's playing, his coach and teammates would not allow it unless they had abandoned the goal of winning games.
Doesn't have to be "all star", just inflated. PER doesn't really break down at the all-star level; those players tend to be inherently somewhat efficient. Where it breaks down is with mid-level and lower-level players, where efficiency plus volume is a much rarer commodity.
As has become obvious, crowdsourcing wins. The Vegas line beats every other method almost every year. If you are wagering money, chances are you have researched the topic; you have a personal stake. Put thousands of those together ... I have been paying much closer attention to the Vegas line these last few months.
Beverley's Defensive RPM: 1.42 Lin's Defensive RPM: 0.41 So no, Lin fans, they are not equal defenders. I think Knicks fans will rejoice to see that Lin is only two spots ahead of Raymond Felton overall :grin: Howard and Harden not in the top 20? This is why I consider both players to be all-stars but not superstars. If we find a way to get Love next year, he will immediately be the best player on the roster.
Keep in mind that Kyle Lowry is also listed with a DRPM of -0.10... so are we accepting that Beverley > Lin > Lowry defensively? Or are we going with my view, which is, "Let's see the formula and data assumptions please." :grin:
xRAPM and now RPM are not transparent stats. There is no way to understand them; we can only check their accuracy against historical data. But what if the historical data is used to tune the occasionally released xRAPM and RPM results? Then you have not a stat that can predict the future, but a stat that reflects the past. That is something that sticks in my craw.
This. Very much. The lack of transparency really bugs me, as it eliminates any chance of peer review. Given how easy (and tempting) it is to reverse-engineer a calculation instead of doing the actual correlation work, and given that properly weighted correlations are more likely to produce unexpected outcomes, it makes me question all the base assumptions. For example, what if your correlations all point to LeBron as NOT the best player? Then you take tremendous flak for not meeting the "smell" test. If the formula were public, you could attack the assumptions and try to find flaws in the reasoning and weightings. As it stands now, it's way too black-box for my taste. It's like, "I did a bunch of this and this, and this is what I got. And look, it matched what already happened so it must be good! And no, I can't show you the wizard behind the curtain." Until I see more transparency, I just can't have any faith in these stats when it comes to using them in a discussion, especially given the inherent problems with using plus/minus as your starting point.
Whichever stat fits the agenda that Lin sucks (of course we ignore Lowry, because that only makes sense). Just like Lin's stats against the top 9 teams (of course we completely skip the other starters, because all of them are horrendous).