It really seems to be that way. I'm hoping the majority of MVP voters are the silent ones who will actually take more than raw numbers into consideration.
People forget that the MVP doesn't always reward the player with the best year; many times it makes up for that in the future. If the MVP were always awarded to the best overall player, then Jordan would have handed the trophy over to Shaq, who then would have given it to LeBron. Curry was voted MVP two years ago because his Warriors came out of nowhere. Then last year they had 73 wins and he led the league in scoring. LeBron has won multiple times in the past, the same as Jordan, but when you have a consistent team like that, those players usually get voted in when they are the clear-cut favorite.

Leonard most likely won't win it because his Spurs will be just as good in the years to come, giving him many more opportunities to duplicate his numbers. Westbrook's team is not good enough to warrant him winning it this year, as the voters would like to see him do this again with a better team around him. If in a few years Westbrook had 29 ppg, 11 apg and 8 rpg while winning 55+ games, then he would most likely win the MVP that year as a reward for the season he is having now.

If this were Harden's first year of scoring like this, then I'd say he'd be in the Westbrook and Leonard category. But he arguably should have won it two years ago (as the voters had no idea GS would continue being a juggernaut). Because of that, Harden will win it this year.
I don't consider it double-counting because I also divided the wins by the team's expected win total. If I didn't do that, and I just multiplied wins and the seeding together (as another poster had suggested), I would agree with you.

You can look at it this way:

MVP rating = X * Y
X = percentage of the team's success the player is responsible for
Y = number capturing the team's success

One way I could define X and Y would be this:

X = Individual_Win_shares / Team_wins
Y = Team_wins

Hence, the MVP rating would just end up being Individual_Win_shares. That's a legitimate way to assess a player's worthiness for MVP. I just happen to think that "team success" in the regular season is better captured by a seeding score rather than total wins.

Now, how we define that seeding score is another topic of debate. Many people object to the fact that I started off basing it just on conference seeding. In a follow-up post, I also looked at a seeding based on overall rank, and then at combining the two. As I wrote earlier, I'm not wedded to one approach over the other. That's not really the point, for me.
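The X * Y decomposition above can be sketched in a few lines of code. The win share, win total, and seed score numbers here are hypothetical, purely for illustration:

```python
# Sketch of the MVP rating decomposition: rating = X * Y, where X is the
# share of team success the player is responsible for and Y captures team
# success. All numbers below are hypothetical.
def mvp_rating(x, y):
    return x * y

win_shares = 12.0   # hypothetical individual win shares
team_wins = 55      # hypothetical team win total
seed_score = 15     # e.g. 17 - conference_seed for a #2 seed

# Version 1: Y = team wins. The win total cancels out, so the rating
# collapses to raw individual win shares.
v1 = mvp_rating(win_shares / team_wins, team_wins)
assert abs(v1 - win_shares) < 1e-9

# Version 2: Y = seed score. Team success now enters ordinally, so the
# rating no longer reduces to win shares alone.
v2 = mvp_rating(win_shares / team_wins, seed_score)
print(round(v1, 2), round(v2, 2))
```

The point of version 2 is simply that dividing by team wins and then multiplying by a seed score is not double-counting: the wins only cancel when Y is itself the win total.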
It isn't a more precise measurement. It's just different. I will concede that it is a judgment call to prefer conference rank over overall rank. But it is no less a judgment call to prefer overall rank over conference rank, so I'm not sure that argument leads anywhere. I myself don't prefer one over the other. I stated the benefits of looking at conference seeding, since that's what people were objecting to. I recognize the benefits of looking at overall rank as well. In fact, I looked at each, and then also looked at the result when combining the two, in my follow-up post: http://bbs.clutchfans.net/index.php?goto/post&id=10976335#post-10976335

Again, it depends on how we judge team success. You seem to think that overall win/loss ranking is all-important. It is important, sure, if a team can get to the Finals, but until then it is less relevant than playoff seeding in the conference.

Not sure what this demonstrates. The second stat looks to essentially be:

16 * Team_wins / 82 * Individual_wins / Expected_Team_wins

It should be roughly proportional to Individual_wins. That's fine, I guess, but what does this have to do with the point you were making?
The formula I used was the same one you used, replacing seeding with win %. Also, by using 17 as the base, your formula is using the total number of playoff teams as a base for conference-specific seeding. It should have used a base of 9. The point was that using win % rather than seeding significantly alters the result, which it did. Clearly, using playoff seeding has a much different outcome than using win %. Yours has LeBron way ahead, and the win % formula had Westbrook way ahead.
True. I assigned a score of 9 through 16 depending on the team's conference seed. I could have done something else. My purpose in doing it this way was to uniformly credit each team based on their seed. I considered this appropriate, since how many more wins you have compared to the team below you in the standings really doesn't matter at regular season's end -- only the order matters.

How you actually distribute the credit comes down to how much you want to skew the rating to favor higher-seeded teams. A 1 through 8 distribution very heavily skews the rating to higher-seeded teams -- far more so than you would get by just using win totals. Let's consider typical win totals for playoff teams in a conference, assuming an even distribution:

Code:
seed  wins  %credit
   1    60     100%
   2    57      95%
   3    54      90%
   4    51      85%
   5    48      80%
   6    45      75%
   7    42      70%
   8    39      65%

The last column is how much credit a player would get relative to a player on the top-seeded team. Compare that to the results you get with my distribution:

Code:
seed  score  %credit
   1     16     100%
   2     15      94%
   3     14      88%
   4     13      81%
   5     12      75%
   6     11      69%
   7     10      63%
   8      9      56%

And now with your proposed distribution:

Code:
seed  score  %credit
   1      8     100%
   2      7      88%
   3      6      75%
   4      5      63%
   5      4      50%
   6      3      38%
   7      2      25%
   8      1      13%

My approach is closer to the results one would get by just using win totals, though I skew credit somewhat more towards the top teams.

A more interesting way of doing this could be to compute how advantaged a team is based on where they fall in the playoff bracket. Put all teams on an even playing field in terms of likelihood to advance in each series, except for the HCA they may or may not enjoy. Let's suppose HCA is worth +x% chance of winning the series. One can then compute a probability of winning the championship depending on where they fall in the bracket and their likelihood to have HCA in the Finals. We could base our "seed_score" on this probability.
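For what it's worth, the three %credit columns in this post can be reproduced with a short sketch (the win totals are the same illustrative 60-down-to-39 spread assumed above):

```python
# Relative credit per seed under the three distributions discussed above:
# raw win totals (assuming an even 60-to-39 spread), my 9-16 seed scores,
# and the proposed 1-8 seed scores.
wins = {seed: 63 - 3 * seed for seed in range(1, 9)}    # 60, 57, ..., 39
score_9_16 = {seed: 17 - seed for seed in range(1, 9)}  # 16 down to 9
score_1_8 = {seed: 9 - seed for seed in range(1, 9)}    # 8 down to 1

def credit(scores):
    """Percent credit relative to the top seed, rounding halves up."""
    top = scores[1]
    return {seed: int(100 * value / top + 0.5) for seed, value in scores.items()}

for label, scores in [("wins", wins), ("9-16", score_9_16), ("1-8", score_1_8)]:
    print(label, credit(scores))
```

Comparing the three results shows the ordering claimed above: the 9-16 scores track the win-total credits fairly closely, while the 1-8 scores drop off much faster toward the bottom seeds.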
I tried the approach proposed above. Based on per-conference seeding and overall seeding, I computed the probability that each team would win the championship, accounting only for the advantage due to HCA in each series and not the strength of the teams as per regular season wins. I assumed that, in a given game, the home team has a 60% chance of winning. That means, for a 7-game series, the team with HCA has approximately a 53% chance of winning the series (again, leaving aside team strength).

From that, I derive the following "seed scores" -- just take the probability of winning the championship, multiplied by a factor of 160. I chose 160 because if each series outcome were 50/50 instead of 53/47, it would result in each team getting a "seed score" of 10.

Here is the result, using the standings as of today. The P column is the team's actual chances of winning the championship (remember, all teams have equal strength), and the %credit column has the same meaning as in my previous post:

Code:
team  score    P     %credit
CLE:  12.2   7.6%   (100.0%)
BOS:  11.6   7.2%    (94.8%)
WAS:  10.8   6.8%    (88.6%)
TOR:  10.3   6.4%    (84.2%)
ATL:   8.8   5.5%    (71.6%)
IND:   8.5   5.3%    (69.4%)
DET:   8.0   5.0%    (65.1%)
MIA:   7.7   4.8%    (63.2%)

team  score    P     %credit
GSW:  12.9   8.1%   (100.0%)
SAS:  12.5   7.8%    (96.5%)
HOU:  11.6   7.3%    (90.1%)
UTA:  11.0   6.9%    (85.4%)
LAC:   9.3   5.8%    (72.2%)
OKC:   8.9   5.5%    (68.7%)
MEM:   8.3   5.2%    (64.5%)
DEN:   7.6   4.8%    (58.9%)

That's pretty close to the %credit that followed from my initial, simple formulation of seed scores, so I feel somewhat validated in that choice. Even though CLE is a #1 seed, they end up slightly behind SAS in seed score, since they would be on the road against them if they meet in the Finals. Being a #1 vs. a #3 seed still gives CLE an advantage over us in overall chances of winning the championship, however.
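The ~53% series figure can be checked with a brute-force sketch, assuming the standard 2-2-1-1-1 format (games 1, 2, 5 and 7 hosted by the team with HCA) and the 60% per-game home win probability used above:

```python
from itertools import product

P_HOME = 0.6               # chance the home team wins any single game
HOME_GAMES = {1, 2, 5, 7}  # games hosted by the team with HCA (2-2-1-1-1)

def series_win_prob(p_home=P_HOME):
    """Chance the team with HCA wins a best-of-7, teams otherwise even.

    Winning at least 4 of all 7 games has the same probability as
    clinching a series that stops early, so we can simply enumerate
    all 2**7 full-length outcomes.
    """
    total = 0.0
    for outcome in product([0, 1], repeat=7):  # 1 = HCA team wins the game
        if sum(outcome) < 4:
            continue
        p = 1.0
        for game, won in enumerate(outcome, start=1):
            p_win = p_home if game in HOME_GAMES else 1 - p_home
            p *= p_win if won else 1 - p_win
        total += p
    return total

p = series_win_prob()
print(round(p, 4))  # roughly 0.53, matching the figure quoted above
# A team's seed score is then 160 * P(win championship); with 50/50
# series (p_home = 0.5) every team's chance is 1/16, giving a score of 10.
```

With p_home = 0.5 the function returns exactly 0.5, which is the sanity check behind the factor of 160.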