This is the Reference Site of the Quest, featuring all reference guides in one place where they are easy to access.

Sunday, September 5, 2010

User Guide for the Real Player Rating Calculators on Quest for the Ring Toolbox, September 2010

Welcome. Exactly how good basketball players are no longer has to be a mystery! The Quest for the Ring Toolbox is the only place we know of on the Internet where anyone can find out almost exactly how good basketball players are. The Toolbox lets you rate players by entering game or season performance measurements. The most important rating it calculates is the Real Player Rating (RPR). That rating and the three others you can calculate on the Toolbox are explained in detail in the User Guide for Real Player Ratings on the Quest Reference page. That Guide, like almost all of our User Guides, is periodically updated and only the latest version is kept, so look for the latest version there.

The Toolbox is ahead of its time in many ways. As of 2010, most document sites on the Internet, including Google Documents, do not allow interactive spreadsheets to be placed on web pages, but we found a site that does provide that capability. Because that source is tricky to use, and for other reasons, it took a while to get all of this perfected; for a while there was an error on the Toolbox that we were blissfully unaware of. As of summer 2010, however, we are sure we have finally achieved full spreadsheet interactivity on web pages and that the Toolbox is working perfectly.

Most of what you can do with any Excel file you can do in the calculator embedded at the Quest for the Ring Toolbox site. Most to the point, you can quickly calculate player ratings right on the Quest Toolbox web page.

As of January 2010 there were two almost identical calculators on the Toolbox: one included the Hidden Defending Adjustment (HDA) and the other did not. As of September 2010, having two was judged more confusing than it was worth, so now there is just one Real Player Rating (RPR) calculator on the Toolbox.

Real Player Ratings without HDAs are relatively crude (but still valuable); the HDA is what makes RPRs state of the art and world class. The HDA requires a large separate section of its own in this User Guide, which comes after the next section's basic instructions for using the calculator on the Toolbox.

========== BASIC TOOLBOX CALCULATOR INSTRUCTIONS ==========

SIMPLY REFRESH THE PAGE TO START OVER
Sometimes with Excel, the mouse will "do something" unintended and foul up a cell, as if you made a mistake even though you really didn't; in other words, you may occasionally lose control over what the worksheet is doing. If you make a mistake, or hit a malfunction, and you cannot reverse it any other way using Excel, simply refresh the entire Toolbox page with your browser and start over.

Using Excel at a high level is beyond the scope of this Guide. But even if you know nothing about Excel, you should still be able to calculate Real Player Ratings and the associated measures using the Toolbox page; you definitely do not need to know much of anything about Excel to do so.

On the other hand, if you are well versed in Excel, you can make changes, because the spreadsheet is fully (or almost fully) interactive. For example, you can change the formula used for calculating Real Player Ratings to one you think is more appropriate.

HOW TO USE THE REAL PLAYER RATING CALCULATOR
IMPORTANT FIRST STEP: Before you start entering points, rebounds, and so on, you must click "click to edit" at the very top of the calculator (which is a spreadsheet). The spreadsheet will not be interactive until you click this.

You need the items shown on the calculator to find out what the Real Player Rating is for one or more players for a single game or for multiple games. Specifically, you need:

-Minutes
-Points
-3-Point Shots Made
-3-Point Shots Attempted
-2-Point Shots Made
-2-Point Shots Attempted
-Free Throws Made
-Free Throws Attempted
-Offensive Rebounds
-Defensive Rebounds
-Assists
-Steals
-Blocks
-Turnovers
-Personal Fouls
-Hidden Defending Adjustment (HDA)

The last item, HDA, is very recommended but not required. How to enter Hidden Defending is explained in great detail shortly. If you are skipping the HDA, then simply leave the cells for it blank.

Simply enter all of the above items, in any order you wish, in the cells. When calculating RPR for multiple games, enter the combined totals for all games for each item. When calculating RPR for a single game, enter the counts for that single game for each item and for each player.
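As a side note, if you keep per-game stat lines somewhere else, the combined totals for a multi-game rating are simple sums per item. Here is a minimal sketch (in Python, outside the Toolbox; the stat key names are made up for illustration) of that tallying:

```python
# Hypothetical helper (not part of the Toolbox itself): sum per-game stat
# lines into the combined totals you would type into the calculator when
# rating multiple games. Stat key names here are made up for illustration.
from collections import Counter

def combine_games(games):
    """Sum a list of per-game stat dicts into one totals dict."""
    totals = Counter()
    for game in games:
        totals.update(game)  # Counter.update adds the counts key by key
    return dict(totals)

games = [
    {"minutes": 36, "points": 22, "dreb": 5, "ast": 7},
    {"minutes": 31, "points": 18, "dreb": 8, "ast": 4},
]
print(combine_games(games))  # {'minutes': 67, 'points': 40, 'dreb': 13, 'ast': 11}
```

The resulting totals are what you would enter in the player's column of the calculator.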

If you make a mistake in any of the item cells, simply click the cell, press Delete, and enter the correct or revised data.

Type the first name initial and the last name of the player(s) you are rating just above where you enter the counts, where it says "Name of Player >>>>>". Very long names will not entirely fit in the cell but presumably you will know who it is from just most of the name.

Below where you enter the items, you see the performance measures, starting with Real Player Rating itself. Stay clear of this area with your mouse, do not click any of these cells, and definitely DO NOT enter anything into any of the cells corresponding to these performance measures. These cells are formatted to show you the ratings based on the items you enter above them; the whole point of this tool is that it calculates these things for you from the counts of the basic basketball actions entered. If you enter anything in any of the four performance measure rows, the spreadsheet will no longer calculate that item in that cell, and you might then have to refresh the Toolbox and start all over. At the least, you will have “lost” that column.

When all the items above have been entered for all players the following will be automatically calculated for you:

-Real Player Rating
-Real Player Production
-Offensive Sub Rating
-Defensive Sub Rating

Complete explanations of these four ratings are at the User Guide for Real Player Ratings on the Quest for the Ring Reference Page.

The calculator on Toolbox is set up to allow for as many as twenty players to be calculated at a time.

High level evaluation of ratings requires knowledge and experience. See the evaluation section in this Guide, which is one of the later sections below, and you may also want to see the evaluation section of the overall User Guide to Real Player Ratings.

YOU CAN USE THE CALCULATOR FOR ANY TIME FRAME YOU NEED
Provided you have the correct statistics, you can look at a player's performance for an individual game, for his or her entire career, or for anything in between, such as a season.

YOU CAN USE THE CALCULATOR TO COMPARE TEAMS
You can also use the tool to rate and compare entire teams, simply by using the combined measures for all the players. Suppose you have two teams in a League that were considered extremely close, and they play in the Championship, and the Championship is decided in overtime. In such a case you might not be convinced that the team that won the Championship was really the better team. To investigate, you could compare the team RPRs of the two teams to try to get at which was really and truly the better team.

One interesting idea for Team RPR is to use combined team RPR (the sum of the player RPRs) to compare the same team from one year to another, which would go a long way towards answering a question that everyone asks all the time but that often no one ever has a very good answer for: which team was better: last year's or this year's?
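The team comparison described above can be sketched in a few lines. A minimal example (Python; the RPR numbers are made-up illustrations, not real ratings):

```python
# Sketch (all numbers are made-up examples): compare two teams by
# combined team RPR, i.e. the sum of the individual player RPRs.
def team_rpr(player_rprs):
    """Combined team RPR: the sum of the players' RPRs."""
    return sum(player_rprs)

champion = [0.910, 0.780, 0.720, 0.655, 0.610]   # starters' RPRs (made up)
runner_up = [0.960, 0.800, 0.700, 0.640, 0.580]  # made up as well

if team_rpr(champion) > team_rpr(runner_up):
    print("The champion also rates as the better team.")
else:
    print("The runner-up rates at least as well; the title game was that close.")
```

The same comparison works for the "last year's team versus this year's team" question: compute the combined team RPR for each season's roster and compare.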

CUSTOMIZED RATING
What if you have a formula you want to use instead of ours? If you know Excel well you can simply change the formula in the interactive spreadsheet. Or, you can request a customized calculator by emailing thequestforthering1 at Gmail.

========== THE HIDDEN DEFENDING ADJUSTMENT ==========
The following instructions are for how you supply a Hidden Defending Adjustment (HDA) so that you will have an overall rating very close to perfect. If you are opting to skip the HDA, though, you can simply leave the cell(s) where the HDA is supposed to go blank.

The HDA basically covers what is left out of the everyday scorekeeper counts of points, assists, blocks, and so on. Unfortunately, what scorekeepers leave out is very important. Scorekeepers cannot possibly calculate the HDA during a game, so you cannot blame this situation on them, on those who manage them, on the League commissioner, or on anyone else.

For its regular NBA coverage, Quest for the Ring (QFTR) uses a multi-step, statistically valid process to fairly and competitively rate NBA players on their “hidden defending”: all actions NOT recorded by scorekeepers that succeed at preventing scores by the opponent. Here are many of the things the HDA measures:

--effective man to man defending
--effective rotation / switching on defense, especially off screens and picks
--effective pick and roll defense
--effective defensive recognition
--quickness of defensive reaction
--energy and hustle on defense
--effective taking of charges (causing a driving offensive player to be called for an offensive foul)
--effective hustling after loose balls
--effective calling of time-outs, for example, to avoid a jump ball being called

These things would be counted by scorekeepers if that were possible. Not only can these things not be counted individually and exactly, but there is in general no way to know exactly how many shots a defender has changed from scores into misses. You can, however, find out indirectly and relatively, and we have a way to do that.

In this Guide we are giving you many but not all details about HDA. See the HDA section of the User Guide to Real Player Ratings for full details about the HDA and about RPRs.

TO USE OR NOT TO USE THE HDA, THAT WAS AND IS THE QUESTION
Back in 2008 and 2009, the accepted doctrine was that HDAs would be used only when more than 300 minutes of playing time data was available. This is because HDA uses the basic sampling theory of statistics and a 300 minute sample is the minimum needed for high statistical validity.

For more about exactly how HDA is calculated, see the full Real Player Rating User Guide on the Quest for the Ring Reference page.

Since 300 minutes obviously covers multiple games, the conception was that HDA would be associated with and also be mandatory for partial or full season RPR calculations. Therefore, RPR with HDA included could not be calculated for full teams until roughly mid January because it would take until then before all of the main reserves had played 300 minutes or more. RPR for single games (and technically for whenever less than 300 minutes of playing time data is available) would be without HDA. RPR without HDA is generally called Base RPR.

The problem is that Base RPR, while valuable, is not extremely valuable. The HDA on average constitutes about one fifth of a player’s RPR; for defensive specialists, it can constitute as much as two fifths (40 percent) of the RPR. The HDA is so important that leaving it out makes reporting RPRs for playoff games limited in value. In general, without the HDA included, Real Player Ratings are not a complete and totally accurate representation of basketball players.

QFTR strives to make every single Report we do very or extremely high value, so in the spring of 2010 we decided to somehow bring the HDA into RPR calculations for single NBA playoff games. This is not yet accepted procedure for regular season single games; for those, Base RPR is still the by-the-book way. For regular season games we will probably supplement Base RPR with a separate reporting of players’ HDAs from the prior or possibly the current season.

But for NBA playoff games the "HDA doctrine" was modified as of Spring 2010. It was decided that HDA would be included in RPR calculations for single playoff games.

But how did we do it, given that by our own admission the HDA cannot be validly calculated for a single game? (In fact, not only can it not be validly calculated, but calculating it at all for a single game apparently requires a large investment of time at a little-known advanced basketball site, and we are not totally sure it can be done at all.)

We had to compromise, so we did. For NBA playoff games, we decided to use HDAs from the full regular season just prior, which are of course statistically valid for that season.

In most cases, the value of a player’s defending in the playoffs will be close to the value of his defending in the regular season. But not in all cases, so unfortunately a player’s RPR for a playoff game will sometimes be either too high or too low. Some players do not defend quite as well in the playoffs as they did in the regular season, and some defend a little better. Worse still (worse because the magnitude of this problem will often be greater than the first one), in a particular game a player might defend much worse, or much better, than he did on average in the regular season.

So in summary there are two problems with transferring regular season HDAs to playoff game RPRs. The first problem is that players will sometimes in general and overall be better or worse in the playoffs compared with the regular season. The other problem is that in individual games players will sometimes be much better or much worse defensively than they were on average in the regular season. Therefore, including HDAs in playoff RPRs is controversial.

However, not including the HDA at all is worse than including it knowing that in some cases it is inaccurate. If you don’t include it at all, then obviously all the good defensive players come up looking worse (less valuable) than they are, and vice versa. Also, the HDA is only about one fifth of the average player’s overall RPR, so even if it is wrong for a particular game, the overall RPR will not be wildly inaccurate. In general it would be very rare for the RPR to be distorted up or down by more than .100; for comparison, the average RPR is about .700.

YOU ALMOST CERTAINLY CAN NOT DO HDA THE WAY WE DO
That extended excerpt from the full User Guide for Real Player Ratings was provided mainly to impress on you the importance of the HDA. Exactly how we validly calculate HDA for NBA teams is explained in that full Guide.

Unfortunately, the method we use for the NBA most likely cannot be used by you, because the data needed is probably not available to you: the points scored by opponents while the player is on the court, over at least 300 minutes of playing time. There is no known place to find this data for any League other than the NBA, and even for the NBA we are lucky to have it; the needed data is only available from 2004-05 on.

Even that data is not enough, because you would also have to translate it into a valid HDA. QFTR uses several sophisticated Excel worksheets containing numerous formulae to do this. This is very high technology and is not yet completely explained in total detail even in the full Guide. The bottom line: you need HDAs to make your Toolbox calculations high value, but it is completely unrealistic to think that you can calculate HDAs the way we do it for the NBA.

But does that mean you should get out the white flag? No it does not. Just as QFTR compromised a little when it started including HDA in single playoff games, we are going to instruct you to compromise a little statistical validity so that you can have high statistical value. We are instructing you to not let the perfect be the enemy of the good.

HOW TO CORRECTLY ESTIMATE HIDDEN DEFENDING ADJUSTMENTS
First let’s look at the actual final product of the QFTR HDA and eventually we will end up giving you exact instructions on how you can include HDAs in your calculations.

The Quest for the Ring Hidden Defending Rating has a scale running from 0 to .330. The ratings more or less follow a “bell curve” statistically. The vast majority of NBA players have ratings between .030 and .290. Only about the top 1% of all defenders have hidden defending ratings higher than .300, and only about the bottom 1% have ratings lower than .020. At least 95% (19 out of 20) of basketball players have hidden defending ratings between about .040 and .275. The average hidden defending rating is about .140, which is about 20% of the average overall RPR of about .700.

In order to incorporate hidden defending into Real Player Ratings (and into defensive sub ratings) you should use your knowledge of how well the player stops scores using hidden defending actions, which include the following:

--effective man to man defending
--effective rotation / switching on defense, especially off screens and picks
--effective pick and roll defense
--effective defensive recognition
--quickness of defensive reaction
--energy and hustle on defense
--effective taking of charges (causing a driving offensive player to be called for an offensive foul)
--effective hustling after loose balls
--effective calling of time-outs, for example, to avoid a jump ball being called

You need to make the most reasonable statistical estimate you can even though you lack hard data. So you simply look at any player you are rating and ask yourself: how good is that player, compared with other players, at the above (and perhaps a small number of other related) actions that prevent the other team from scoring points it otherwise would have scored?

Notice I said “compared with other players”. This is very important. You are making relative statistical estimations. In order to give any player an HDA which is a good estimate, you need to be aware of how good that player is compared with as many other players as possible.

In fact, the best way to do this (at least the first time you do it) is to estimate HDAs for many players simultaneously and then bring those HDAs to the calculator. When you do this you want to keep changing your HDA estimations until they all “fit together,” until in other words they make as much sense as possible and seem to be as close to perfect as possible.

After you have experience you will not necessarily have to do it this way; once you instinctively know the scale and once you are extremely familiar with how players stack up defensively, you can instantly rate one player alone without doing a lot of HDA estimations and corrections beforehand.

THINGS YOU MUST NOT CONSIDER WHEN YOU DO YOUR HIDDEN DEFENDING ESTIMATES
This is very, very important. When correctly estimating HDA you MUST avoid bringing in things that are not part of HDA.

Be very careful not to simply rate a player’s defensive or overall style: this is a relatively common mistake that many basketball fans and sometimes coaches make. Managers, though, seldom consider a player’s style when deciding on acquisitions and contracts and that is one of the reasons they are managers.

For about the same reason, be careful not to consider a player’s personality when you estimate his hidden defending. Remember, styles and personalities are completely irrelevant: the only thing ultimately relevant is whether and to what extent what the player does on defense prevents what would have been scores from being scores.

You also must NOT include tracked defensive actions in your estimations:

--Defensive Rebounds
--Steals
--Blocks
--Personal Fouls

You must DISREGARD all of these while estimating hidden defending. It is crucial that these things not be thought of or considered in your estimates, because these things are already included in the calculator (outside of HDA). Be warned that there are some players who get a lot of the above but are actually not very good hidden defenders and vice versa: there are some players who don’t make many defensive rebounds, steals, or blocks but are actually very good as far as hidden defending is concerned. There is some correlation between HDA and those four items, but less than you think, and for some players there is virtually no correlation at all.

To emphasize, when you estimate how good a player's hidden defending is, do not be biased either for or against players who make a lot of defensive rebounds, blocks, and/or steals.

In fact, players who make a large number of defensive rebounds and blocks often have lower hidden defending ratings than do "defensive specialists" who do not make a truly large number of defensive rebounds and blocks. This makes sense insofar as it is not automatic, or all that easy, for players to be extremely good at rebounding and blocking and at, for example, man to man defending at the same time. To some extent, defending is an either/or proposition: great defenders can be great rebounders and blockers, or alternatively they can be great man to man defenders, defensive recognizers, and rotators. Only a small number of great defenders are great at both tracked and hidden defending.

There can be any number of combinations. For example, there will also be players who are average in rebounding and a little above average in man to man defending. It's just that it would be rare for a player to be an outstanding rebounder, blocker, and man to man defender all at the same time.

And obviously, you should avoid bias for or against good offensive players, or for or against bad offensive players. Quite honestly, how good or bad a specific player is on offense has almost nothing to do with how good or bad that player is on defense, although broadly speaking, across the whole universe of players, there is a limited degree of correlation.

NOW THAT YOU UNDERSTAND EXACTLY WHAT YOU ARE DOING IN THEORY, THIS IS HOW TO PROCEED
What you want is your best estimate of the combined effect of the quantity and the quality of the player’s hidden defending actions. Both the quantity and the quality must be considered, not just one or the other. The best defenders use high quality hidden defending most of the time. Some defenders who are just “ok” are capable of high quality hidden defending but are too lazy (or whatever) to show it very often; other “ok” defenders try hard most of the time but simply do not at this time have the skills needed for high quality hidden defending. The higher the quality of the defending, the more often it will turn what would have been scores into stops.

The most important thing, of course, is to be objective and fair, which is really saying about the same thing with two different words. To sum this up in one sentence, you have to judge how good a player is, relative to other players, in terms of the quantity and the quality of his hidden defending.

Once you have in your head how good the player is relative to all other players, use the following to give that player a hidden defending rating. The percentage shown on each of the following lines is how the player stacks up compared to all other players with respect to hidden defending:

HIDDEN DEFENDING ESTIMATION SCALE
1% > better than 99% of other players: about .320
2% > better than 98% of other players: about .310
5% > better than 95% of other players: about .295
10% > better than 90% of other players: about .275
20% > better than 80% of other players: about .250
30% > better than 70% of other players: about .220
40% > better than 60% of other players: about .180
50% > better than 50% of other players: about .140
60% > better than 40% of other players: about .110
70% > better than 30% of other players: about .085
80% > better than 20% of other players: about .065
90% > better than 10% of other players: about .045
95% > better than 5% of other players: about .030
98% > better than 2% of other players: about .020
99% > better than 1% of other players: about .010

If you are estimating more than one player, when you are done, if you have not already done so (as recommended above) review all your estimates by making sure that your players correctly rank according to who really is better and who is worse with respect to hidden defending.
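The lookup above is mechanical enough to sketch in code. A minimal example (Python; the helper is hypothetical, and it assumes the sub-.100 entries of the scale are .085, .065, .045, .030, .020, and .010, consistent with the scale's .330 maximum and .140 average):

```python
# Sketch (names hypothetical): map a player's relative standing, expressed
# as the fraction of other players he is better than at hidden defending,
# onto the estimation scale above.
SCALE = [  # (better-than fraction, HDA estimate), best defenders first
    (0.99, 0.320), (0.98, 0.310), (0.95, 0.295), (0.90, 0.275),
    (0.80, 0.250), (0.70, 0.220), (0.60, 0.180), (0.50, 0.140),
    (0.40, 0.110), (0.30, 0.085), (0.20, 0.065), (0.10, 0.045),
    (0.05, 0.030), (0.02, 0.020), (0.01, 0.010),
]

def estimate_hda(better_than):
    """Return the scale value for the highest line the player clears."""
    for threshold, hda in SCALE:
        if better_than >= threshold:
            return hda
    return 0.010  # in the bottom 1 percent of defenders

print(estimate_hda(0.72))  # clears the "better than 70%" line: 0.22
```

In practice you would run this for every player you are rating and then adjust the estimates until they all "fit together," as recommended above.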

VERY HIGH, VERY LOW, AND VERY AVERAGE RATINGS
Theoretically, a player who never changes any shots from makes to misses could have a hidden defending rating as low as .000. But even most of the bad defensive players, in terms of "made them miss" defending via untracked actions, will generally have hidden defending ratings between about .040 and .060. Players exactly in the middle will have ratings between .130 and .150. And the best defensive players in terms of hidden defending will generally have ratings between .250 and .280, although the absolute best such player in your League can theoretically deserve a rating of up to an absolute maximum of .330.

========== EVALUATION OF CALCULATED RATINGS ==========
The following evaluation scales are, as of 2010, the same ones used for the high level professional players of the NBA. Since the players in your League might not be as great, you may want to adjust the scales (unless you want to compare your players relative to NBA players). You will need to compare and contrast many players at the level you are looking at in order to come up with a completely valid evaluation scale customized to that level. To make things easier, when you construct your own scale you can keep the descriptions and change only the numbers. Of course, you will probably be lowering the numbers (thus making it easier for players to reach categories).

Every Quest for the Ring Evaluation Scale uses terms that the vast majority of basketball fans, coaches, and managers understand as important descriptions of just how valuable players are to the team and also as explaining the usual role of players.

At one time there was just one QFTR evaluation scale but now there are more than a dozen. In giving you the following scales, we will keep things as simple as possible without sacrificing high value and quality.

EVALUATING A SINGLE GAME OR A SMALL NUMBER OF GAMES
Use the following scale if:
--You are using HDA
--You want to rate players in general, without regard to position
--You are rating a single game or more than a game but less than 300 minutes of playing time

Perfect Player for all Practical Purposes / Major Historic Super Star: 1.200 and more
Historic Super Star: 1.080 to 1.199
Super Star: 0.960 to 1.079
A Star Player / A well above normal starter: 0.860 to 0.959
Very Good Player / A solid starter: 0.780 to 0.859
Major Role Player / Good enough to start: 0.700 to 0.779
Good Role Player / Often a good 6th man, can possibly start: 0.620 to 0.699
Satisfactory Role Player / Generally should not start: 0.540 to 0.619
Marginal Role Player / Should not start except in an emergency: 0.460 to 0.539
Poor Player / Should never start: 0.380 to 0.459
Very Poor Player: 0.300 to 0.379
Extremely Poor Player: 0.299 and less
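Reading a rating off a scale like the one above is a simple top-down threshold walk. A minimal sketch (Python; the helper is hypothetical and the labels are abbreviated) using the with-HDA single-game scale:

```python
# Sketch (labels abbreviated, helper hypothetical): classify a single-game
# RPR (with HDA) by walking the evaluation scale above from the top down.
THRESHOLDS = [  # (minimum RPR, label)
    (1.200, "Perfect Player / Major Historic Super Star"),
    (1.080, "Historic Super Star"),
    (0.960, "Super Star"),
    (0.860, "Star Player"),
    (0.780, "Very Good Player"),
    (0.700, "Major Role Player"),
    (0.620, "Good Role Player"),
    (0.540, "Satisfactory Role Player"),
    (0.460, "Marginal Role Player"),
    (0.380, "Poor Player"),
    (0.300, "Very Poor Player"),
]

def evaluate(rpr):
    """Return the first label whose minimum the rating reaches."""
    for minimum, label in THRESHOLDS:
        if rpr >= minimum:
            return label
    return "Extremely Poor Player"

print(evaluate(0.735))  # Major Role Player
```

The scales below work the same way; only the threshold numbers change.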

Use the following scale if:
--You are NOT using HDA even though it is very recommended
--You want to rate players in general, without regard to position
--You are rating a single game or more than a game but less than 300 minutes of playing time

Perfect Player for all Practical Purposes / Major Historic Super Star: 1.060 and more
Historic Super Star: 0.940 to 1.059
Super Star: 0.820 to 0.939
A Star Player / A well above normal starter: 0.720 to 0.819
Very Good Player / A solid starter: 0.640 to 0.719
Major Role Player / Good enough to start: 0.560 to 0.639
Good Role Player / Often a good 6th man, can possibly start: 0.480 to 0.559
Satisfactory Role Player / Generally should not start: 0.400 to 0.479
Marginal Role Player / Should not start except in an emergency: 0.320 to 0.399
Poor Player / Should never start: 0.240 to 0.319
Very Poor Player: 0.160 to 0.239
Extremely Poor Player: 0.159 and less

EVALUATING A SEASON OR AT LEAST MANY GAMES
Use the following scale if:
--You are using HDA
--You want to rate players in general, without regard to position
--You are rating at least 300 minutes of playing time up to an entire season. But if you are rating a player for more than a season (for two seasons or for a career for example) then do not use this scale, there is a better one below to use.

Perfect Player for all Practical Purposes / Major Historic Super Star: 1.100 and more
Historic Super Star: 1.000 to 1.099
Super Star: 0.900 to 0.999
A Star Player / A well above normal starter: 0.820 to 0.899
Very Good Player / A solid starter: 0.760 to 0.819
Major Role Player / Good enough to start: 0.700 to 0.759
Good Role Player / Often a good 6th man, can possibly start: 0.640 to 0.699
Satisfactory Role Player / Generally should not start: 0.580 to 0.639
Marginal Role Player / Should not start except in an emergency: 0.520 to 0.579
Poor Player / Should never start: 0.460 to 0.519
Very Poor Player: 0.400 to 0.459
Extremely Poor Player: 0.399 and less

Use the following scale if:
--You are NOT using HDA even though it is very recommended
--You want to rate players in general, without regard to position
--You are rating at least 300 minutes of playing time up to an entire season. But if you are rating a player for more than a season (for two seasons or for a career for example) then do not use this scale, there is a better one below to use.

Perfect Player for all Practical Purposes / Major Historic Super Star: 0.960 and more
Historic Super Star: 0.860 to 0.959
Super Star: 0.760 to 0.859
A Star Player / A well above normal starter: 0.680 to 0.759
Very Good Player / A solid starter: 0.620 to 0.679
Major Role Player / Good enough to start: 0.560 to 0.619
Good Role Player / Often a good 6th man, can possibly start: 0.500 to 0.559
Satisfactory Role Player / Generally should not start: 0.440 to 0.499
Marginal Role Player / Should not start except in an emergency: 0.380 to 0.439
Poor Player / Should never start: 0.320 to 0.379
Very Poor Player: 0.260 to 0.319
Extremely Poor Player: 0.259 and less

EVALUATING MULTIPLE SEASONS AND CAREERS
Use the following scale if:
--You are using HDA
--You want to rate players in general, without regard to position
--You are rating more than a season (generally two or more seasons, up to and including a career).
--Note, HDA is considered mandatory for multiple season and career evaluations; therefore, there is no scale shown here for HDA not being used.

Perfect Player for all Practical Purposes / Major Historic Super Star: 1.000 and more
Historic Super Star: 0.925 to 0.999
Super Star: 0.860 to 0.924
A Star Player / A well above normal starter: 0.800 to 0.859
Very Good Player / A solid starter: 0.750 to 0.799
Major Role Player / Good enough to start: 0.700 to 0.749
Good Role Player / Often a good 6th man, can possibly start: 0.650 to 0.699
Satisfactory Role Player / Generally should not start: 0.600 to 0.649
Marginal Role Player / Should not start except in an emergency: 0.550 to 0.599
Poor Player / Should never start: 0.500 to 0.549
Very Poor Player: 0.450 to 0.499
Extremely Poor Player: 0.449 and less

ADJUSTING FOR POSITIONS: HOW TO “WASH OUT” POSITION BIASES WHEN EVALUATING PLAYERS
Not all positions are created equal. These are the average ratings by position among all NBA players who play 300 minutes or more. There are very few small forwards and shooting guards who are superstars. Most (but definitely not all) superstars are players who can play point guard, power forward, or center.

Point Guard .750
Shooting Guard .640
Small Forward .640
Power Forward .720
Center .750
All Positions / All Players (NBA Overall Average) .700

As you can see, point guards and centers on average have RPRs about .050 higher than the NBA average. Power forwards average out to about .020 higher than the NBA average. Shooting guards and small forwards average out to about .060 below the NBA average.

What if you want to evaluate players after taking out the position advantages and disadvantages shown just above? What if you want to, in other words, compare all players at all positions on a completely even plane? When you do this, you will be adjusting reality a little for the cause of getting a direct, fair comparison of all players.

If you want to rate your players after removing any advantage or disadvantage they get from their position, you could adjust the scales above by the difference between the average for the position and the overall NBA average. Quest for the Ring of course has these position-specific evaluation scales.

But you don’t need them; you can accomplish the same thing by adjusting the ratings you calculated themselves, and then using the same scales above with your adjusted ratings. To do it this way, add or subtract the following from your players’ ratings:

Point Guard: Subtract .050; for example, a .750 becomes a .700
Shooting Guard: Add .060; for example, a .640 becomes a .700
Small Forward: Add .060; for example, a .640 becomes a .700
Power Forward: Subtract .020; for example, a .720 becomes a .700
Center: Subtract .050; for example, a .750 becomes a .700

Now you can in effect compare all of your players without regard to position. For example, now you can fairly compare a shooting guard with a center.
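The position adjustments can be applied mechanically. Here is a minimal sketch; the function and dictionary names are ours, but the adjustment values are exactly the differences from the .700 overall average listed above.

```python
# Adjustments that wash out the average positional bias (from the
# position averages above versus the .700 NBA overall average).
POSITION_ADJUSTMENT = {
    "PG": -0.050,  # point guards average .750
    "SG": +0.060,  # shooting guards average .640
    "SF": +0.060,  # small forwards average .640
    "PF": -0.020,  # power forwards average .720
    "C":  -0.050,  # centers average .750
}

def position_neutral_rpr(rpr, position):
    """Return the RPR with the positional bias washed out."""
    return round(rpr + POSITION_ADJUSTMENT[position], 3)

# An exactly average player at any position lands on .700:
print(position_neutral_rpr(0.750, "PG"))  # 0.7
print(position_neutral_rpr(0.640, "SG"))  # 0.7
```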

SAVING DATA TO YOUR OWN COMPUTER
You can save your data (your ratings) all you wish but the calculator is copyrighted and it is illegal to place a copy of the calculator on any website.

CAUTIONS ABOUT REAL PLAYER RATINGS
See the User Guide for Real Player Ratings for more detailed information about how to evaluate the ratings, and also for cautions about using the Ratings. The latest Guide will be found on the page that the above link leads to.

As the main User Guide will inform you, although Real Player Ratings are very valid and extremely valuable, there are nevertheless reasons why they are not absolutely perfect and why they can not be the absolute final word on basketball players. See the cautions section of the User Guide for complete details on this subject.

Sunday, May 23, 2010

User Guide for Real Player Rating Reports, May 2010

REAL PLAYER RATINGS BY TEAM USER GUIDE
SECTIONS UPDATED WITH THIS UPDATE
--Strategically Using RPR (Most of the Evaluation scales were slightly improved.)
--Defensive and Offensive Sub Ratings Section (The procedure for determining accurate and unbiased Hidden Defending Ratings has been extensively improved.)

SECTIONS
This guide has the following main sections, with sub sections as highlighted within each section.

Introduction Section
Cautions Section
Strategically Using RPR Section
Mechanics of Real Player Ratings and Real Player Production Section
Defensive and Offensive Sub Ratings Section
Summary of Primary Formulas Section

========== INTRODUCTION SECTION ==========

INTRODUCTION TO THE CONCEPT OF REAL PLAYER RATINGS
The Real Player Rating (RPR) is a very carefully constructed all inclusive performance measure. Most things of value that a basketball player can do are carefully recorded by official NBA scorekeepers who sit right along the edge of the court, mid-court, and who are trained to observe and record everything that happens in a game.

Since these days all of these counts are immediately entered into continually updated public databases online, such as at ESPN, it is possible to combine everything, in real time, into an overall performance measure for each player that is intended to evaluate how valuable each player is toward winning games. This is what the RPR does.

Real Player Rating or RPR is everything tracked by scorekeepers that a player does, good and bad, added and subtracted (with negative things such as turnovers and missed shots being subtracted). Very carefully calibrated factors, or weights, are applied to the different elements.

The calibration, as you would expect, is done to reflect the different value toward winning games that different actions on the court have. These factors are subject to very small annual adjustments as knowledge about how games are won and lost is fine tuned.

Then, all of the good and bad combined together is divided by minutes, yielding RPR, which is really the rate per unit of time of the good minus the rate per unit of time of the bad. This is what we need to determine the overall quality or value of the player toward the objective of winning basketball games.
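The per-minute structure just described can be written in one line. This is only a sketch of the shape of the calculation, not the actual formula; the names and sample numbers are ours, and the weighted totals would come from the calibrated factors discussed later in this guide.

```python
def rpr(weighted_good, weighted_bad, minutes):
    # RPR is a rate: net weighted production per minute of playing time.
    return round((weighted_good - weighted_bad) / minutes, 3)

# A player producing a weighted 30.0 of good and 8.0 of bad in
# 31.25 minutes (hypothetical numbers) rates at 0.704 per minute:
print(rpr(30.0, 8.0, 31.25))  # 0.704
```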

QUALITY (RPR) AND QUANTITY (RPP) SUMMARIZED
RPR reports show for each player the RPR (Real Player Rating), which tells you how good a player was (all the good things minus all the bad things) out on the court per unit of time. The RPP (Real Player Production) report tells you how much in total (the sum of the good things minus the sum of the bad things) a player did out on the court, without regard to playing time.

Many and maybe most sports watchers, and an unknown but probably disturbingly large number of sports managers, make the mistake of exaggerating the importance of quantity while overlooking quality to some extent. These reports allow you to expand your horizons. They put quantity and quality side by side, which is extremely valuable, because both are roughly equally important in explaining accurately why and how the team is playing the way it is.

SIMPLICITY, RELIABILITY, TRANSPARENCY, AND FOCUS ONLY ON "WINNING POWER"
Like everything statistical we do at Quest, we have kept this process as simple and reliable as possible, while at the same time spending as much time as necessary on design, quality control and performance evaluation. Unlike some other practitioners, we avoid what you might call layered complexity, which leads to formulas which can not be understood without studying them and which high traffic sites will not show on any of their web pages for fear that the public will rebel against the statistic. At Quest, we think that our rating systems can be understood and evaluated by most high school graduates, and we keep everything out in the open through User Guides such as this one.

Basketball statistical gurus frequently forget that no matter how intricate their formulas are, they are very heavily manipulating process items such as assists and rebounds while most likely spending very little time on how these things fit together to produce wins and losses. We think that they are making the mistake, whether or not they are aware of it, of injecting value adjustments regarding how they think the game should be played and about which playing styles are better than others.

By contrast, the primary objective of the relative simplicity (a small number of formulas, to be more precise) of the Quest RPR is to avoid all value judgments about how the game should be played and how players should play. We don't care about the styles, only about the results. The RPR is concerned first and in fact exclusively with the impact each player has on the potential for winning games.

Quest thinks it makes more sense to minimize the manipulation of process items, and to focus much more on coming up with the best possible estimation of how the process items impact points for and points against in games, which in turn of course determines wins. Whereas other "advanced statistics" might give you more depth and flavor regarding how a particular player plays (his style) the Quest RPR is a way for the reader to, in a very quick and easy way, determine what the overall value of the player is with respect to producing wins or losses.

In other words, the foundation of RPR is and will always be measurement of a player's power to help win basketball games, whereas the foundation of other, more complicated statistics may include preferences about how the game should be played and about the style of players, with winning power measured less accurately as a result of those focuses.

IMPORTANCE OF PER UNIT OF TIME
Because it is per time, RPR is immediately in the running to be the best possible measure of the net quality of a basketball player, or simply "how good" the player is (on average) for each minute of playing time. All per game statistics are inferior to any reasonably good per unit of playing time measure. For example, points per minute (or per 40 minutes or any number of minutes) is a much better thing to look at than points per game.

REAL PLAYER RATING REPORTS CAN BE FOR THE WHOLE NBA, FOR A TEAM, FOR A GAME, OR FOR A CAREER
With a Real Player Ratings Report for the entire NBA, you can see very rapidly who the best players in the NBA have been during the course of the season.

With a Real Player Ratings Report for a Team for the Regular Season, you can see very rapidly who the best players on the team have been during the course of the season. You can use this information to investigate the possibility that the coach is not perfect. Well, we know that no coach is perfect. So really, with the benefit of 20/20 hindsight, we can investigate and determine what mistakes the coach has apparently made with regard to rotations and playing times. Furthermore, by using the Ratings, basketball knowledge, a little creativity, and logical deduction, we can also investigate and perhaps determine whether the coach has made incorrect decisions regarding which strategies and plays are best for his team's offense and defense.

Real Player Ratings for games are a major part of Reports called Ultimate Game Breakdowns.

Real Player Ratings for a player's career, year by year and in total, are obviously very valuable looks at how the player changed over the years. Of course, most players get better from how they started in their rookie years.

[End of Introduction Section.]

============ CAUTIONS SECTION ============

To be completely honest and clear, although it is the best possible overall real life measure, RPR is still not a perfect or absolute, "final word" measure on any player. In general, you must remember that all performance measures including this one for the NBA are relative rather than absolute measures. The ratings are relative to the team context. Players do not exist in a vacuum, especially in basketball.

Several specific cautions will now be described.

RPRs ARE RELATIVE TO TEAMS, AND ARE SUBJECT TO THE CROWDING OUT EFFECT
Because basketball is a team game, and more so than most other sports, players who are on really good teams might have their own performances "crowded out" to some extent by teammates who are just as good, and especially by teammates who are even better. So, paradoxically, players at all rating levels who are on better teams will generally have slightly lower ratings than they would have on a not-as-good team. Conversely, players at all rating levels who are on bad teams will have slightly higher ratings than they would have on a better team. Numerically, a player on the best NBA team could easily have an RPR that is 20% less than what it would be if he were a player on the worst NBA team.

Always remember this important point, which we restate for emphasis. If a good player is on a good team where there are a number of players as good as and even better than he is, then his RPR will likely be lower than it would be if he were on a not-as-good team.

Position in the team context can impact RPR as well. If a good player plays a certain position for which his team has an even better player, then it's probable that the better player will crowd out the lesser player to one extent or another, so that the lesser player's RPR will be lower than what it would be if he were the best player at the position on the team. Conversely, the best player at a position on a bad team can have a RPR which is higher than what it would be on many other teams.

ACTUAL RPR DIFFERENCES BETWEEN TEAMS ARE GREATER THAN APPARENT DIFFERENCES
An important implication of crowding out and relativity is that the average RPR among the best five, six, or seven players of the best teams will in most cases understate the real "potential RPR" of those players, where potential RPR is RPR with the least possible crowding out. In other words, the potential RPRs of players on the best teams are higher than their actual RPRs. Conversely, the long-run, true potential RPRs of the apparently better players on bad teams are actually lower than their actual RPRs.

This plays out at the team level in a very important way. Always remember this: the actual underlying gap in the real quality of the players between good and bad teams is greater than the actual RPRs are indicating. The true RPR differential between the best and the worst NBA team could easily be 20-30% greater than the apparent differential. In other words, team RPR averages understate real quality differences between teams.

PLAYERS NEED THE BALL FOR HIGHER RPRs
Players need not only playing time but possession of the ball in order to produce many of the things that count in the ratings. So if, for whatever reason, a player does not get the ball as often as he would on a different team, or with a different coach, or with whatever other circumstances you can dream of, then his RPR will be lower than what it could or would be.

DO NOT FORGET WHAT THE RATINGS YOU ARE LOOKING AT ARE MEASURING
Many ratings that you see on Quest are only for the current season. It has recently been discovered that players' ratings often change up or down by 10% from year to year even on the same team, and changes of about 15% up or down from one year to the next are not unusual. Moreover, over the course of a player's entire career, RPR ratings by year can and often do vary by 50% or even more when you compare the highest year or two to the lowest year or two. Although there are a fair number of exceptions, many NBA players have much lower RPRs in their first year or two in the NBA than they will eventually average out to.

INJURIES AND RECOVERIES FROM INJURIES
Players often play with minor injuries. They also often start playing again before they are 100% recovered from an injury. They sometimes even postpone surgery that has become necessary due to injury until the off-season, and play with some type of impairment in the meantime. In all of these situations, RPR will be lower than it would be were the player not dealing with any injury.

MAGNITUDE OF THE ADJUSTMENT FOR HIDDEN DEFENDING
Those who think defense in basketball is much more important than offense will consider the magnitude of the defensive adjustment to be inadequate. They will contend that defensive specialists who are poor offensive players should have a higher rating.

While we realized that we needed to adjust the ratings for defending not tracked by NBA scorekeepers, and while we put in a huge effort to come up with a valid adjustment system, we continue to believe that players who are great defensive specialists but poor or undeveloped offensive players should in most cases rank no higher than the Major Role Player/Good Enough to Start level, which is the level just below the Solid Starter level. In a few relatively rare cases, defensive specialists who have decent offensive games will be ranked as Solid Starters.

None of this is to say that having a "defensive specialist" is a disqualifier to winning the Quest. It is merely a caution that coaches often make the mistake of giving them too much playing time.

AVOID BEING CONFUSED BETWEEN RPR AND RPP AND DO NOT MINIMIZE THE IMPORTANCE OF RPP
Do not forget that RPR is a per time measure. RPP and not RPR measures total impact of a player. RPR measures how valuable a player has been toward winning basketball games, per unit of time.

Do not make the mistake of ignoring the importance of RPP, now improved to TRPP. Players with the highest TRPP are showing they have the stamina, knowledge, and trust of the coaching staff to be able to get all the playing time needed to produce that. So even if their RPRs are a little lower than you might expect, players with the highest TRPPs should still be considered as extremely important and valuable players.

Having said that, one of the most important objectives for any top Coach must be to make sure that his highest RPR players are also found at or close to the top of the TRPP list.

THE CLASSIFICATION SCHEME IS RELATIVE TOO
The classification scheme, like the ratings, is relative. A role player on a really good team might be a solid starter on a bad team. A star on a bad team might be just a major role player on a really good team. And so on and so forth. A player is a star, a role player, or whatever only in the contexts of the particular season and the particular team involved. If he was on a different team, or if it was a different year, his classification could easily be different.

So to conclude the Cautions section of this guide, don't think of RPR as the ultimate gospel or bible on how good players are. But do think of it as an extremely accurate and reliable summary of how good the players actually have been in real life in the specific time (season or playoffs) and place (team) involved.

[end of cautions section]

===========STRATEGICALLY USING RPR SECTION============

RELATIVITY ADJUSTMENT FOR PROJECTED RPR FOR PLAYERS CHANGING TEAMS
When you are trying to judge how good a player might be if he were on another team, you need to, due to the relativity factor discussed previously, adjust the expected RPR upwards if the player is moving to a lower quality team and to adjust the expected RPR downwards if the player is moving to a higher quality team. The absolute maximum such adjustment necessary is believed to be about 20%, with that full amount applied only when the player is moving from one of the very worst one or two teams to one of the very best one or two teams, or vice versa.

For players changing teams, RPR changes in the 5-15% range will be much more common than changes of about 20%, simply because most teams are neither among the very worst nor among the very best teams.

On top of RPR changes due to different teams, remember that RPRs often change by 10-15% from one year to the next regardless of team. The combined RPR change for a player changing teams could therefore be as much as about 35%. This would be true if a player in the same year was intrinsically 15% better, and he moved from one of the very best teams to one of the very worst teams.

GREAT VARIABILITY OF PLAYER RPRs FOR INDIVIDUAL GAMES
Fewer breakdowns of individual game ratings will closely track the overall roster average than you might think. This is because one of the interesting things about basketball that makes it different from most other sports is that "how good" a player is varies radically from game to game. The best players sometimes have terrible games where they do almost nothing, while players who normally do not do much can once in a while have outstanding games, at least when measured per minute on the court. If you only looked at actual production, and never at a reserve player's Real Player Rating, you would hardly notice any of his unusually outstanding games, since players who normally do not do much will normally not get much playing time.

INTERACTIONS BETWEEN PLAYING TIMES, RPRs, TRPPs, AND THE NEEDS OF TEAMS
There are certain things that only certain players can do very well, and if those things are crucial for the team, then those players will have to play more minutes than they might otherwise play. The extra minutes might tend to reduce the player's Real Player Rating, while his total production will rise with the additional minutes. So to fairly and completely evaluate any player, you must always look at both the Real Player Rating (RPR) and the Real Player Production (RPP).

Furthermore, it is strongly suspected that, in order to compete in the playoffs, a team must have as many players of as high a quality (RPR) as possible, while at the same time having at least one or two players whose actual production is among the highest in the NBA regardless of exactly how high the RPRs happen to be. (All high RPP players will be relatively high RPR players; some will be higher than others.) Specifically for example, LeBron James' actual massive amount of production is most likely just as important to the Cleveland Cavaliers as is his RPR or, in other words, as is his rate of production. Similarly, Kobe Bryant's quantity is probably at least as important to the Lakers as is his quality.

By contrast, teams such as the Denver Nuggets, who have instructed a possible huge producer, Carmelo Anthony, to "not worry about scoring," may have made a fatal mistake relative to the playoffs, because teams with no extremely high rate producers may be generally doomed to lose quickly in the playoffs even if they have an unusually large number of high quality players as shown by RPR. This is because extremely high RPP players can by themselves "dominate a game" to some extent, meaning they can possibly win the game for their team by themselves, without the complications that come into play when you need to coordinate several high RPR but ultimately limited RPP players.

Players who over the course of a season appear to rank higher in RPR (quality) but lower in RPP (quantity) may not be getting enough playing time. Players who over the course of a season appear to rank lower in RPR (quality) but higher in RPP (quantity) may be getting too much playing time. But as alluded to earlier, you must not automatically conclude this, because some skills are needed out on the court most of the time, yet may be available from only a small number of players on the roster. Such players may have to get more playing time due to that critical skill being in short supply, even if their overall quality does not seem to justify all of that playing time.

A relatively common reason for unusual playing time will be players who are either truly outstanding defenders (who get extra playing time) or truly bad defenders (who get their playing time reduced).

Another common reason for extra playing time will be if a team has a point guard who has many more turnovers than the average point guard. Because the point guard is so important, a good coach has to play his best playmaking guard at the position for a full set of minutes every game, almost regardless of how many turnovers that player makes. If you bench your designated point guard due to "too many turnovers," it may end up rather like cutting off your foot because you have a bad case of athlete's foot!

EVALUATION OF REAL PLAYER RATINGS

EVALUATION SCALE FOR REGULAR SEASONS
--Meaningful regular season ratings with high statistical validity are not possible until about Jan. 20 of each year.
--The following scale assumes that the Hidden Defending Adjustments have been correctly done and included

Major Historic Super Star / "Perfect Player" 1.100 and more
Historic Super Star 1.000 1.099
Super Star 0.900 0.999
A Star Player / A Well Above Normal Starter 0.820 0.899
Very Good Player / A Solid Starter 0.760 0.819
Major Role Player / Good Enough to Start 0.700 0.759
Good Role Player / Often a Good 6th Man 0.640 0.699
Satisfactory Role Player 0.580 0.639
Marginal Role Player 0.520 0.579
Poor Player 0.460 0.519
Very Poor Player 0.400 0.459
Extremely Poor Player and less 0.399
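The scale just above translates directly into a lookup. This sketch uses our own names; the tier labels and lower bounds are copied from the regular-season scale, with anything below .400 falling into the bottom tier.

```python
# Lower bound of each tier in the regular-season scale (HDA included),
# from best to worst; a rating below every bound is "Extremely Poor."
REGULAR_SEASON_SCALE = [
    (1.100, 'Major Historic Super Star / "Perfect Player"'),
    (1.000, "Historic Super Star"),
    (0.900, "Super Star"),
    (0.820, "A Star Player / A Well Above Normal Starter"),
    (0.760, "Very Good Player / A Solid Starter"),
    (0.700, "Major Role Player / Good Enough to Start"),
    (0.640, "Good Role Player / Often a Good 6th Man"),
    (0.580, "Satisfactory Role Player"),
    (0.520, "Marginal Role Player"),
    (0.460, "Poor Player"),
    (0.400, "Very Poor Player"),
]

def classify(rpr):
    """Return the scale label for a regular-season RPR."""
    for lower_bound, label in REGULAR_SEASON_SCALE:
        if rpr >= lower_bound:
            return label
    return "Extremely Poor Player"

print(classify(0.705))  # Major Role Player / Good Enough to Start
```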

SHOULD PLAYERS WITH LOW RATINGS BE PLAYING IN THE PLAYOFFS?
For the two teams that play in the Championship, players rated below about .560 are almost always a drag on the Championship run. However, such players sometimes get playing time based largely on factors outside of RPR, but valued by coaches and other players, such as:

--Great energy, effort, and hustle
--Toughness, such as diving after loose balls and taking charges
--Leadership and/or knowledge, especially in the case of veterans.
--Perceived potential for future improvement in terms of real basketball production, especially in the case of young players

But keep in mind also that the value of these qualities is often overestimated, particularly with respect to playoff games. In general, we see that players below .560 often get too much playing time in playoff games.

Many playoff teams are forced to play players with ratings below .560, especially shooting guards and small forwards, simply because they would not otherwise have anyone at the position, or because they would not have at least eight players available to play if they didn't play any players with ratings below .560. The fewer players below .560 a team has to play, the better. One of the worst playoff mistakes a coach can make is to play a player whose rating is lower than .560 for more minutes at a position than another player at that position whose rating is above .560.

The advice regarding players rated even lower is simple and clear. Players rated below .500 should not be playing at all in the playoffs (except in garbage time) for teams that are serious about winning the Quest for the Ring. Coaches who play players with ratings lower than .500 in the playoffs when any player was available at the position whose rating was higher than .500 should in most cases be fired.

EVALUATION SCALE FOR SINGLE GAMES
There are two scales for a single game: one for when no hidden defending adjustment is included, and one for when the hidden defending adjustment for a playoff game (new as of June 2010) is included.

EVALUATION SCALE FOR BASIC REAL PLAYER RATINGS FOR A SINGLE GAME WITH NO HIDDEN DEFENDING ADJUSTMENT
Major Historic Super Star / "Perfect Player" 1.060 and more
Historic Super Star 0.940 1.059
Super Star 0.820 0.939
A Star Player / A Well Above Normal Starter 0.720 0.819
Very Good Player / A Solid Starter 0.640 0.719
Major Role Player / Good Enough to Start 0.560 0.639
Good Role Player / Often a Good 6th Man 0.480 0.559
Satisfactory Role Player 0.400 0.479
Marginal Role Player 0.320 0.399
Poor Player 0.240 0.319
Very Poor Player 0.160 0.239
Extremely Poor Player and less 0.159

EVALUATION SCALE FOR REAL PLAYER RATINGS FOR A SINGLE GAME WITH THE SINGLE GAME HIDDEN DEFENDING ADJUSTMENT INCLUDED
Major Historic Super Star / "Perfect Player" 1.200 and more
Historic Super Star 1.080 1.199
Super Star 0.960 1.079
A Star Player / A Well Above Normal Starter 0.860 0.959
Very Good Player / A Solid Starter 0.780 0.859
Major Role Player / Good Enough to Start 0.700 0.779
Good Role Player / Often a Good 6th Man 0.620 0.699
Satisfactory Role Player 0.540 0.619
Marginal Role Player 0.460 0.539
Poor Player 0.380 0.459
Very Poor Player 0.300 0.379
Extremely Poor Player and less 0.299

EVALUATION SCALE FOR A CAREER (OF A PLAYER)
Remember that many players have lower ratings in their first one to three years than they will have ultimately. Remember also that players in their last season or two before they retire will generally have lower ratings than their career average.

All Career Real Player Ratings require a Hidden Defending Adjustment (HDA). For active players, the adjustment will be the average of the two HDA adjustments from the most recent two seasons. For retired players, the adjustment will be the average adjustment for the third to last and the fourth to last season (in other words, the final two years of the players' career are skipped and the two years prior to those years are considered).
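The career HDA rule above can be sketched as a small function. The list format, names, and sample numbers are ours; the guide specifies only the averaging rule itself.

```python
def career_hda(season_hdas, retired):
    """Career Hidden Defending Adjustment from per-season HDA values
    listed in chronological order (earliest season first)."""
    if retired:
        # Skip the final two seasons; average the two seasons before them.
        relevant = season_hdas[-4:-2]
    else:
        # Active player: average the two most recent seasons.
        relevant = season_hdas[-2:]
    return round(sum(relevant) / len(relevant), 3)

seasons = [0.10, 0.12, 0.14, 0.16]  # hypothetical per-season HDAs
print(career_hda(seasons, retired=False))  # 0.15 (last two seasons)
print(career_hda(seasons, retired=True))   # 0.11 (3rd- and 4th-to-last)
```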

Perfect for all Practical Purposes / Major Historic Super Star 1.000 and more
Historic Super Star 0.940 0.999
Super Star 0.870 0.939
A Star Player / A Well Above Normal Starter 0.800 0.869
A Very Good Player: A Solid Starter 0.750 0.799
Major Role Player / Good Enough to Start 0.700 0.749
Good Role Player / Often a Good 6th Man 0.650 0.699
Satisfactory Role Player 0.600 0.649
Marginal Role Player 0.540 0.599
Poor Player 0.480 0.539
Very Poor Player 0.420 0.479
Extremely Poor Player and less 0.419

NOTE ABOUT LOW CAREER RATINGS
Players rated below about .600 in their careers often get playing time based largely on factors outside of RPR, but valued by coaches and other players, such as:

--Great energy and hustle
--Toughness, such as diving after loose balls and taking charges
--Leadership and/or knowledge, especially in the case of veterans
--Perceived potential for future improvement in terms of real basketball production, especially in the case of young players
--See also the User Guide section called "Cautions"

[End of the Strategic Use of Ratings Section]

==========MECHANICS OF REAL PLAYER RATINGS AND REAL PLAYER PRODUCTION==========

MINIMUM PLAYING TIME RULES
As explained further in the adjustment for defending section of this Guide, only players who have played at least 300 minutes can have a hidden defending rating and an overall RPR given to them. Due to the minimum sample size requirement for the adjustment for hidden defending, regular season ratings for NBA players can not be meaningfully done until at least mid January. Generally, we need at least 3 players to have played 1,500 minutes or more before we can or will rate that team's players.

REAL PLAYER PRODUCTION
Of course, looking at actual production (everything positive added together and everything negative subtracted out) is something that is extremely important too. The total production (everything good and everything bad combined together) is simply called Real Player Production or RPP.

BASIC VERSUS TOTAL REAL PLAYER PRODUCTION
Basic RPP does not include any estimate of how much value from hidden defending was done by the player. Starting from June 2009, there is an estimate made for the value of hidden defending of each player, calculated from the following formula:

Hidden Defending Production = Total Scored Defensive Production * (Hidden Defending Rating / Total Scored Defensive Rating)

The validity of this adjustment is somewhat less than the high validity of the defending adjustments for RPR in general. Therefore, the user is advised to not go overboard in using the results.

Then of course Total Real Player Production is Basic Real Player Production plus Hidden Defending Production. Note: At this time, RPP still refers to basic RPP, and so TRPP is the adjusted version.
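The Hidden Defending Production formula and the TRPP total can be sketched directly from the definitions above. Only the formula itself comes from the guide; the function and variable names, and the sample numbers, are ours.

```python
def hidden_defending_production(scored_def_production,
                                hidden_def_rating,
                                scored_def_rating):
    # HDP scales the scored defensive production by the ratio of the
    # hidden defending rating to the scored defensive rating.
    return scored_def_production * (hidden_def_rating / scored_def_rating)

def total_rpp(basic_rpp, hdp):
    # TRPP = basic Real Player Production plus the hidden defending estimate.
    return basic_rpp + hdp

# Hypothetical season: 400 scored defensive production, a hidden
# defending rating of .125 against a scored defensive rating of .250.
hdp = hidden_defending_production(400.0, 0.125, 0.250)
print(hdp)                     # 200.0
print(total_rpp(1200.0, hdp))  # 1400.0
```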

SOURCE OF TRACKED BASKETBALL COUNTS
The sources for the raw counts of scores, rebounds, steals, turnovers, and so forth are ESPN.com and NBA.com. Other sites used as important data sources are Basketball-reference.com, Knickerblogger.net, and USAToday.com.

NOTES ON SOME OF THE TECHNOLOGIES USED
Microsoft Excel is extensively used to accurately produce RPR reports. Hundreds of Internet sites have been used to one extent or another in the development and in the continuing production of RPR and related reports. A very small number of sites, however, are relied on for the raw data, especially ESPN.com and NBA.com.

THE BASIC FORMULA
For 2009-10, the RPR formula has been very carefully and accurately tweaked again and is set to be as follows:

POSITIVE FACTORS
Points 1.00 (at par)
Number of 3-Pt FGs Made 1.00
Number of 2-Pt FGs Made 0.40
Number of FTs Made 0 (no "bonus" for a made free throw; just the point itself goes into RPR)

Assists 2.15

Offensive Rebounds 1.43
Defensive Rebounds 1.31
Blocks 1.80
Steals 2.30

NEGATIVE FACTORS
3-Pt FGs Missed -1.00
2-Pt FGs Missed -1.03
FTs Missed -1.3256

Turnovers -1.95
Personal Fouls -1.00
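Applied to a box-score line, these weights produce a basic (pre-HDA) rating as follows. The weights are the ones listed above; the stat line, the names, and the rounding are ours.

```python
# 2009-10 weights as listed above (points at par; made-shot bonuses;
# zero bonus for made free throws; negatives for misses, turnovers, fouls).
WEIGHTS = {
    "points": 1.00, "fg3_made": 1.00, "fg2_made": 0.40, "ft_made": 0.0,
    "assists": 2.15, "oreb": 1.43, "dreb": 1.31, "blocks": 1.80,
    "steals": 2.30, "fg3_missed": -1.00, "fg2_missed": -1.03,
    "ft_missed": -1.3256, "turnovers": -1.95, "fouls": -1.00,
}

def basic_rpr(stats, minutes):
    # Everything good and bad, weighted and summed, divided by minutes.
    total = sum(WEIGHTS[item] * count for item, count in stats.items())
    return round(total / minutes, 3)

# A hypothetical 24-point, 8-rebound, 5-assist game in 38 minutes:
stat_line = {
    "points": 24, "fg3_made": 2, "fg2_made": 7, "ft_made": 4,
    "assists": 5, "oreb": 2, "dreb": 6, "blocks": 1, "steals": 2,
    "fg3_missed": 3, "fg2_missed": 6, "ft_missed": 1,
    "turnovers": 3, "fouls": 2,
}
print(basic_rpr(stat_line, minutes=38))  # 1.008
```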

ACTUAL COMBINED AWARD OR PENALTY BY TYPE OF SHOT
3-Pointer Made 4.00
2-Pointer Made 2.40
Free Throw Made 1.00
3-Pointer Missed -1.00
2-Pointer Missed -1.03
Free Throw Missed -1.3256

ZERO POINTS: PERCENTAGES BELOW WHICH THERE IS A NEGATIVE NET RESULT
3-Pointer 0 score % 0.200
2-Pointer 0 score % 0.300
1-Pointer 0 score % 0.570

This means that if a player has a lower percentage than any of the three above, then his RPR would be lower rather than higher as a result of his shooting that type of shot.

ASSISTS VERSUS TURNOVERS ZERO POINT
Assist/Turnover Ratio That Yields 0 Net Points: 0.908

Assist/turnover ratios greater than .908 are positive with respect to RPR. This also means that any player who has an assist/turnover ratio of less than .908 is losing RPR rating when assists and turnovers are considered. He would have to either increase assisting or reduce turnovers to turn the combined effect from assists and turnovers positive.
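The zero points above follow directly from the weights. Here is a quick check using only the award and penalty values listed in this guide; the tiny difference from the published .908 ratio appears to be rounding.

```python
# Verify the break-even ("zero point") percentages implied by the RPR weights.
# A shot type nets zero when p * award - (1 - p) * penalty = 0,
# which solves to p = penalty / (award + penalty).

def break_even(award, penalty):
    return penalty / (award + penalty)

print(break_even(4.00, 1.00))    # 3-pointers: 0.200
print(break_even(2.40, 1.03))    # 2-pointers: ~0.300
print(break_even(1.00, 1.3256))  # free throws: ~0.570

# Assists vs turnovers: 2.15 * assists = 1.95 * turnovers at the zero point,
# so the break-even assist/turnover ratio is 1.95 / 2.15 (~0.907).
print(1.95 / 2.15)
```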

HIDDEN DEFENDING RATINGS
A quality of defending rating of between 0 and .324 is added to base, or unadjusted, RPR. In most cases, the hidden defending rating is between .050 and .230. See the Hidden Defending Adjustment to Real Player Ratings subsection that follows just below for a very detailed explanation of how we determine player hidden defending ratings and how we combine them with base RPR.

[End of Mechanics of Real Player Ratings and Real Player Production Section.]

======== DEFENSIVE AND OFFENSIVE SUB RATINGS ======================

DEFENDING SUB RATINGS
DEFENSIVE AND OFFENSIVE SPECIALISTS
Defensive specialists will have a much higher percentage of their overall RPR determined by the defensive sub rating than the League average of 45%. At the extreme, defensive specialists who are power forwards or centers will have defensive sub ratings that constitute as much as about 75% of their overall ratings. Due to the team nature of basketball, it is not an automatic disqualifier for winning the Quest to have a player unbalanced in this way, provided that the unbalanced player is truly outstanding defensively.

Offensive specialists will have a much higher percentage of their overall RPR determined by the offensive sub rating than the League average of 55%. At the extreme, offensive specialists who are guards will have offensive sub ratings that constitute as much as about 85% of their overall ratings. Due to the team nature of basketball, it is not an automatic disqualifier for winning the Quest to have a player unbalanced in this way, provided that the unbalanced player is truly outstanding offensively.

THE HIDDEN DEFENDING ADJUSTMENT (HDA) TO REAL PLAYER RATINGS
The hidden defending adjustment is on average 20.25% of overall RPR. Players will range widely though: as little as virtually 0% and as much as about 45% of a player's full RPR will be the hidden defending component.

Obviously, some valuable things that basketball players do are never counted by scorekeepers. Many of these uncounted things are defensive, insofar as they prevent scores, or reduce the scoring opportunities of the opponent. These things would include chasing down loose balls, taking charges, and good or great man to man defending. Man to man defending that is good enough to prevent what would have been a score from actually being a score is the most common and important basketball action which can not be and is not tracked by NBA scorekeepers.

Man to man defending however, although the most important, is not by any means the only defensive element that can not be tracked or scored. Broadly, what is missed or hidden is all the things that the player does to make the possessions of the opposing teams worthless other than what is already counted, which would be rebounds, steals, blocks, and personal fouls. These untracked or hidden actions would include:

SOME BASKETBALL FACTORS ESTIMATED BY THE HIDDEN DEFENDING ADJUSTMENT TO RPR
--effective man to man defending
--effective rotation / switching on defense, especially off screens and picks
--effective pick and roll defense
--effective defensive recognition
--quickness of defensive reaction
--energy and hustle on defense
--effective taking of charges (causing a driving offensive player to be called for an offensive foul)
--effective hustling after loose balls
--effective calling of time-outs, for example, to avoid a jump ball being called

These things would be counted by scorekeepers if it were possible. But, for example, there is no way to know exactly how many shots a good (or any kind of) defender has changed from being a score to a miss.

Quest for the Ring has developed a statistically valid way to accurately estimate the untracked or hidden aspects of defending. This is described in complete detail in the latter sections of this Guide.

HDA IS AN UPGRADE TO DEFENSIVE EFFICIENCY RATINGS OF PLAYERS SEEN ON OTHER SITES
There are a small number of sites that show you each player's "defensive efficiency," which is the number of points allowed per 100 possessions. This sounds nice, but it actually is not all that valuable. The Hidden Defending Adjustment of RPR is an upgrade over this.

Probably the most important improvement is that in HDA, players' defending is standardized for team defending. With the defensive efficiency on certain other sites, players who are on good defensive teams have elevated ratings simply because they are on those teams. But obviously, many of the players on a good defensive team are producing that good defense, not just any one of them. The Hidden Defensive Adjustment corrects for this quality of team defense bias, which enables players on different teams to be fairly compared with respect to hidden defending.

THE HIDDEN DEFENDING ADJUSTMENT EXPLAINED
It took almost two years of hoping, searching, planning, and then developing, but finally the basic breakthrough was achieved in the objective of correctly evaluating defending. Now that the breakthrough has come, I am more certain than ever that RPR is the best overall rating system in existence, and that it is now roughly as good as it will ever or can ever be.

HDA is a statistically valid way to rate the hidden defending of players, that is, what they do to prevent scores other than rebounding, blocks, steals, and fouls, which were always included in RPR. This would include man to man defending, zone defending, pick and roll defending, defensive recognition, and defensive rotation.

Although the technique used had to be indirect and subject to a very small amount of statistical error, it validly awards the better defenders with bigger RPR bonuses. It has been validated by comparing results obtained with the player defensive efficiency ratings shown on three different "advanced basketball statistics" web sites. HDA results were shown to be highly correlated with those efficiency ratings.

Where there are small differences, HDA is better, because of the correction for team defense bias, because HDA uses simple, bedrock statistical theory rather than involved formulas resting on assumptions, and for other lesser reasons.

USE OF BASIC STATISTICAL SAMPLING THEORY
What we are doing is using an indirect and inexact, yet accurate and statistically valid, way to discover who the better defenders are. No two players are out on the court for all the exact same minutes. For any one player, what the other players on the court do defensively while he is out there is a very large factor determining his points allowed per minute. But over many, many hundreds of minutes, what the individual player himself does, or does not do, defensively will eventually show up in that particular player's points allowed per minute statistic.

In other words, what any individual player does defensively has to sooner or later differentiate his points allowed per minute from that of other players. As the number of minutes rises above 500, then 1,000, and then, for many players, above 2,000 and even 3,000 for a regular season, what a particular player does or does not do defensively becomes more and more exactly shown by the points allowed per minute number. This is very basic statistical sampling theory in operation, an easy to understand bedrock theory of statistics.

Due to the necessity of a large sample of minutes, we will not do defending estimates for any player who has played for fewer than 300 minutes. Quality of defending estimates will be slightly less accurate for players who have only played between 301 and about 600 minutes than they will be for players who have played for more than 600 minutes. We believe that the estimates are going to be extremely accurate for all players who have played 750 minutes or more. The idea is relatively simple: as the number of hundreds of minutes played goes up, the accuracy of this system improves, to the point where it gives you the same information you would have if you knew exactly how many possessions of the other team each player ruined with his defending.

For your information, after adjustments for pace, all players allow between 1.87 and 2.26 points per minute; most allow between 1.96 and 2.17. The overall NBA average is about 2.06 points per minute allowed.

A REMINDER: NOTHING IS HIDDEN HERE
Unlike most "advanced statistics" that are published on the internet or in print, we give you all the details about how we do ours, so that you can evaluate the evaluations, so to speak.

HOW TO REVEAL HIDDEN DEFENDING IN FOUR STEPS
STEP ONE: CALCULATION OF RAW POINTS GIVEN UP PER MINUTE

Where do we start to discover what is hidden? We keep it as simple and yet as accurate as possible. We use the most official and therefore presumably the most reliable data as the building blocks for rating the defense of NBA players. We start with the player minutes and points scored by the other team while the player was on the court that are shown in the plus/minus statistical section at NBA.com.

There are no value judgments made regarding a player's defending style or, for that matter, regarding a team's defending style. We don't care about style. Using points allowed per minute is looking at results, nothing more and nothing less.

STEP TWO: THE PACE ADJUSTMENT
After simply dividing points allowed by minutes on the court, we adjust (or standardize, or correct) that rate for the relative pace of the team. Pace is the average number of possessions per game. The faster the pace, the greater the number of possessions per game. Relative pace is average League pace divided by the team's pace. For your information, the average League pace in 2009-10 was 92.7 possessions per game. Fast paced teams will have pace adjustments of slightly less than 1 and slow pace teams will have pace adjustments of slightly greater than 1.

Then we simply multiply each player's raw points allowed per minute played by his team's pace adjustment.

It would be grossly unfair to compare the rate of points allowed by a player on a fast paced team to a player on a slow paced team. The player on the fast paced team automatically gives up more points per minute because there are more possessions in a fast paced team's games and, therefore, more points scored by the opponents. In other words, players who are on teams with faster paces give up more points per minute through no fault of their own.
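Steps one and two can be sketched as follows. The player and team numbers here are hypothetical; only the 92.7 league-average pace is a figure quoted in this guide.

```python
# Steps one and two: raw points allowed per minute, then the pace adjustment.
# Relative pace = League average pace / team pace (per the definition above).

LEAGUE_PACE = 92.7  # 2009-10 League average possessions per game

def pace_adjusted_ppm(points_allowed_on_court, minutes_played, team_pace):
    raw_ppm = points_allowed_on_court / minutes_played  # step one
    relative_pace = LEAGUE_PACE / team_pace             # step two
    return raw_ppm * relative_pace

# Same hypothetical raw numbers, different team paces:
fast = pace_adjusted_ppm(4200, 2000, 98.0)  # fast team: adjustment < 1
slow = pace_adjusted_ppm(4200, 2000, 88.0)  # slow team: adjustment > 1
print(round(fast, 3), round(slow, 3))
```

The fast-paced team's player gets his rate shrunk and the slow-paced team's player gets his inflated, offsetting the extra or missing possessions.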

Similarly, players who are on teams with less efficient defenses give up more points per minute, regardless of how well they defend, everything else held constant. You can not fairly compare players on two or more teams with different paces and different team defense qualities unless you standardize, or in other words control for those differences for all NBA players. The correction for pace has just been described. The correction for team defensive efficiency turned out to be a big problem that was not largely solved until May 2010. See below for how the correction is made for team defensive efficiency.

STEPS THREE AND FOUR: CONVERSION OF PACE ADJUSTED POINTS GIVEN UP PER MINUTE TO A SCALE APPROPRIATE FOR REAL PLAYER RATINGS WHILE SIMULTANEOUSLY CORRECTING FOR TEAM DEFENSE BIAS
We need to translate the pace-adjusted points allowed per minute into numerical terms that are the most useful with respect to RPR. We also need to correct, or standardize, as well as we can for very different team defense qualities. Before we describe how we accomplish both of these objectives at the same time, let's backtrack a little for a brief history...

A BRIEF HISTORY OF THE HIDDEN DEFENDING ADJUSTMENT
Beginning in January 2009 the Hidden Defending Adjustment (HDA) was included in Real Player Ratings after extensive research and development. However, the early versions of HDA did not correctly or completely solve the problem of making ratings comparable among players on different teams, so they were replaced in May 2010 by a version that appears to be accurate enough to be the permanent version, subject in the future to only relatively minor tweaking.

HDA was apparently the first ever effort to rate the defensive efforts of players that are hidden, because they are not scored or tracked by scorekeepers, unless you watch all of a player's games. The basic problem, of course, is that much of what players do defensively can not be and is not tracked by scorekeepers and box scores.

As of June 8, 2009, the mechanics of the HDA were slightly changed to increase accuracy.

As of November 14, 2009, the HDA was upgraded, on the average, from about 40% of the overall defensive sub rating of players to about 45%. Furthermore, as of November 14, 2009, the overall defensive sub rating was recalibrated so that it would now be about 45% of the overall RPR, versus about 42% in 2008-09. The offensive sub rating was recalibrated so that it would now on average constitute about 55% of the overall RPR, versus about 58% in 2008-09.

MAY 2010 REFORMULATION OF THE HIDDEN DEFENDING ADJUSTMENT
A fairly major problem was discovered in March 2010: the HDA was substantially (though not overwhelmingly) biased against players playing for the best defensive teams, and it was similarly biased in favor of players playing for the worst defensive teams. The problem was that the method at that time was to standardize for both pace and relative team defensive efficiency by blunt multiplication of the raw points allowed per minute by those factors for teams. Team defensive efficiency is the number of points surrendered per 100 possessions, and relative team defensive efficiency is League average defensive efficiency divided by a team's defensive efficiency.

What we used to do is multiply the relative defensive efficiency by the raw points scored by opponents when a player is on the court (which is the raw starting point for HDA). The objective of standardizing or correcting for team defense was to prevent poor defenders on good defensive teams from getting a too high rating and to prevent good defenders on bad defensive teams from getting a too low rating.

When you do this full standardization for relative team defensive efficiency, however, it turns out that you "overshoot the mark": you unfairly and excessively shrink the HDAs of the better defenders on the better defending teams, and, vice versa, you unfairly and excessively magnify the HDAs of the lesser defenders on the poor defending teams. So in solving one set of problems we created another set of problems.

The solution was to modify the use of the (relative) team defensive efficiencies and to split the difference between the biases. In other words we are compromising between not adjusting for team defense at all and over adjusting. Very small biases remain for which there is no solution:

--The best defenders on the best defensive teams have slightly lower HDAs than they deserve.

--The worst defenders on the best defensive teams have slightly higher HDAs than they deserve.

--The worst defenders on the worst defensive teams have slightly higher HDAs than they deserve.

--The best defenders on the worst defensive teams have slightly lower HDAs than they deserve.

The HDA as redesigned in Spring 2010 is considered rock solid because the biases that remain are very small, at the very most .025 in terms of Real Player Rating. In the vast majority of cases, the remaining bias is less than .010 in terms of RPR. For example, a player who has a RPR of .720 might really deserve only a .710 or as much as a .730 if a perfect HDA was possible.

Instead of using the relative team defensive efficiencies "in full force" by directly multiplying the raw points per minute by the relative defensive efficiencies, we created a huge evaluation grid (chart) with team relative defensive efficiency on one axis, with Hidden Defending Ratings on the other axis, and with points per minute scored by opponents when the player is on the court (adjusted for team pace) arrayed throughout the interior of the chart. By doing this we can compromise between too much standardization for relative team defensive efficiency and no standardization at all.

In effect we grade every player's hidden defending "on the curve". The better a player's team is defensively, the lower the points per minute the opponents score while the player is on the court for any given Hidden Defending Rating. For example, let's say a player gives up 2.01 points per minute (adjusted for team pace) while he is on the court. If that player is on one of the best defensive teams, the chart shows that his Hidden Defending Rating shall be .174. But if that player is on one of the worst defensive teams, the chart shows that his Hidden Defending Rating shall be .258. The player deserves and gets a higher HDA for the same points given up per minute if he is on a lousy defensive team.

But as previously noted the overshooting the mark problem is avoided through the use of the chart as opposed to bluntly multiplying by team relative defensive efficiencies.

Remember that the chart simultaneously achieves two objectives. First, the very small differences in different players' points allowed per minute are translated into numerical terms that correlate to the role that HDA needs to play within overall RPR. Second, we adjust for the differences between teams' defenses without over adjusting, and we compromise as described above.
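A minimal sketch of the grid idea follows. The actual grid is not published; the only real anchor points are the two quoted examples (2.01 pace-adjusted points per minute maps to a .174 rating on one of the best defensive teams and .258 on one of the worst). Everything else here, including the linear interpolation and the slope, is a purely hypothetical stand-in.

```python
# Hypothetical sketch of the HDA evaluation grid: Hidden Defending Rating
# as a function of pace-adjusted points allowed per minute, graded "on the
# curve" by team defensive quality. Only the 2.01 -> .174 / .258 anchor
# points come from this guide; the curves and slope are invented.

def hidden_defending_rating(adj_ppm, team_def_rank_pct):
    """team_def_rank_pct: 0.0 = best defensive team, 1.0 = worst."""
    slope = 0.8  # hypothetical: fewer points allowed -> higher rating
    best_curve = max(0.0, 0.174 + (2.01 - adj_ppm) * slope)
    worst_curve = max(0.0, 0.258 + (2.01 - adj_ppm) * slope)
    # Interpolate between the best-team curve and the worst-team curve.
    return best_curve + (worst_curve - best_curve) * team_def_rank_pct

print(hidden_defending_rating(2.01, 0.0))  # quoted example: 0.174
print(hidden_defending_rating(2.01, 1.0))  # quoted example: 0.258
```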

STEP FIVE: CALCULATION OF REAL PLAYER RATING (ADJUSTED FOR HIDDEN DEFENDING)
The final step is to simply add the hidden defending rating to the Base RPR to yield RPR (Real Player Rating).

GUARD AND FORWARD OUTLIER RULES ARE REPEALED
With the May 2010 revamping, outlier rules are unnecessary and are repealed.

USE OF HIDDEN DEFENDING RATING
We now have added in a reasonably good estimate of the value of actions of players that are not even kept track of by scorekeepers! Technically, you could call the final result "Adjusted RPR," but we are trying to avoid that terminology because of how important we think it is to include the hidden defending in the performance measure.

SIZE OF THE DEFENDING ADJUSTMENTS
Base regular season RPRs for most NBA players range between .400 and 1.000. The total range of possible defending adjustments to the base RPRs is from 0 to .325. In most cases, however, the adjustment will be between .075 and .250.

THE DEFENDING SUB RATING: PUTTING THE HIDDEN AND THE UNHIDDEN TOGETHER
Aside from the Hidden Defending Rating we can find out how well each player does in terms of unhidden or scored defending, can't we? Of course we can.

Unhidden, or tracked, defending is defensive rebounding plus steals plus blocks minus personal fouls, calibrated according to the usual RPR factors. If we extract the combination of those four out of the same counts that underlie the RPR as a whole, and use the usual factors, we get what we are going to call the Scored Defending Production. This could also be thought of as Tracked Defending Production if you prefer. Then if we divide this by minutes, we have a Scored (or Tracked) Defending Rating.

Finally, if we combine Hidden Defending Rating (HDR) with Scored Defending Rating (SDR) we can have an Overall Defending Rating (ODR).

Obviously, the HDR scaling is designed to coordinate correctly with both SDR and with RPR as a whole, reflecting the latest understanding of how basketball games are won and lost. The HDR constitutes about 45% of ODR while SDR constitutes the other 55%. In other words, the value of hidden defending is perceived to be about 45% of the overall value of defending, while the value of scored (unhidden) defending is perceived to be about 55% of the overall value of defending.
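Putting the pieces of this subsection together in code: this is an illustrative sketch in which the factor weights come from the formula table earlier in this guide, but the player counts and HDR value are hypothetical.

```python
# Scored Defending Rating (SDR) and Overall Defending Rating (ODR),
# per the definitions above. RPR weight factors: defensive rebounds 1.31,
# steals 2.30, blocks 1.80, personal fouls -1.00.

def scored_defending_rating(def_rebounds, steals, blocks, fouls, minutes):
    production = (def_rebounds * 1.31 + steals * 2.30
                  + blocks * 1.80 - fouls * 1.00)
    return production / minutes

def overall_defending_rating(hdr, sdr):
    """ODR = HDR + SDR; HDR averages ~45% of ODR, SDR ~55%."""
    return hdr + sdr

# Hypothetical season counts for one big man:
sdr = scored_defending_rating(420, 80, 95, 180, 2400)
odr = overall_defending_rating(0.180, sdr)
print(round(sdr, 3), round(odr, 3))
```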

There appear to be many coaches and not a few hardcore basketball fans who believe that hidden defending is actually more important than scored defending, but I am never going to agree with that. I think that although hidden defending is important, and plausibly almost as important as tracked defending, it can not be more important than that. Hidden defending is like quicksand, in that a substantial minority of basketball people tend to get carried away estimating its importance, and then become more and more trapped by their error in how they look at basketball or, if they are coaching, in how they coach their team.

FORWARDS AND CENTERS WILL GENERALLY HAVE SUBSTANTIALLY HIGHER DEFENDING RATINGS
Due to having primary responsibility for defense of the paint and for rebounding, centers and forwards are going to inevitably have higher defensive ratings than will guards. Along with much greater opportunity for rebounds and blocks, centers and forwards also have more opportunity for such hidden defending actions as good man to man defending and correct rotations than do guards. Guards out on the perimeter generally should not and do not man to man defend as closely as do interior defenders, due to the well known guideline that it is quite foolish to foul a jump shooter outside of the paint.

THE OFFENSIVE SUB RATING
The Offensive Sub Rating is all tracked actions other than the defensive ones (defensive rebounding, steals, blocks, and personal fouls) combined together using the RPR weights, divided by minutes. In other words, it is Total Offensive Production divided by minutes. For the list of all tracked actions and the weight factors assigned to each, see the section titled "The Basic Formula" above.

THE BEST GUARDS WILL HAVE THE HIGHEST OFFENSIVE SUB RATINGS
The very best guards in basketball are ones who, although they are not afraid to drive to the hoop from time to time, are able to make outside shots at a good rate. Also, guards in general, and especially point guards, are usually primarily responsible for making assists. These two are among the several reasons why the better guards in pro basketball will have the highest offensive sub ratings in the League.

On the other hand, some of the most valuable players in the NBA are centers and forwards who are great defenders and efficient inside scorers at the same time. Even more unusual and probably for that reason more valuable is a forward who is (a) a great inside defender (b) a great inside scorer and (c) someone who can hit jump shots, perhaps even including threes, from outside the paint. Lamar Odom is an example of this kind of extremely valuable player.

Some of these big men will have offensive sub ratings that exceed those of the lesser skilled shooting guards and even those of some of the less skilled point guards.

[End of Defensive and Offensive Sub Ratings Section.]

======== SUMMARY OF PRIMARY FORMULAS SECTION =================
Real Player Rating or RPR = (All tracked or scored actions weighted according to best available analysis of importance / minutes) + Hidden Defending Rating

Real Player Production or RPP = Total Offensive Production + Total Defensive Production. (All tracked or scored actions weighted according to best available analysis of importance.)

Offensive Sub Rating = Total Scored or Tracked Offensive Production / Minutes

Defensive Sub Rating = (Total Scored or Tracked Defensive Production / Minutes) + Hidden Defending Rating

Step One for Hidden Defending Adjustment:
Points Scored by Opponents While Player was on the Court / Minutes Played by Player

Step Two for Hidden Defending Adjustment:
Result of Step One * Relative Pace Adjustment (Team's Pace Relative to League Average)
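These formulas can be collected into one consistency-checking sketch (all inputs hypothetical; "production" stands for the usual weighted sum of tracked actions):

```python
# The primary formulas above in one place. RPR should equal the sum of
# the offensive and defensive sub ratings, since the hidden defending
# rating is carried inside the defensive sub rating.

def offensive_sub_rating(off_production, minutes):
    return off_production / minutes

def defensive_sub_rating(def_production, minutes, hidden_def_rating):
    return def_production / minutes + hidden_def_rating

def real_player_rating(off_production, def_production, minutes,
                       hidden_def_rating):
    return (off_production + def_production) / minutes + hidden_def_rating

# Hypothetical season totals:
rpr = real_player_rating(1100.0, 720.0, 2400.0, 0.180)
subs = (offensive_sub_rating(1100.0, 2400.0)
        + defensive_sub_rating(720.0, 2400.0, 0.180))
print(round(rpr, 3), round(subs, 3))  # the two should match
```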

Thursday, May 6, 2010

[Historical and Non-Current] User Guide for Real Coach Ratings, May 2010

IMPORTANT NOTICE: THIS IS A NON CURRENT, LEGACY USER GUIDE THAT WILL EVENTUALLY BE DELETED. AN UPDATED AND CURRENT GUIDE IS LOCATED HERE.




INTRODUCTION
I am proud and pleased to present what is probably the world's first serious effort to accurately rate and rank all of the current NBA head coaches. The first edition of these annual ratings was published in October 2008. The second edition was published (slightly late) in early December, 2009.

Why should the coaches hide behind a black curtain? Concerning coaches, there is virtually a total lack of the kind of statistical comparing and contrasting that goes on with players 24/7. I for one think it is way overdue that coaches be fairly and systematically compared and contrasted.

I can pretty much guarantee you that no one has ever, even with the capabilities created by the Internet age, put in as much effort and thought as I have into fairly comparing NBA coaches with widely different lengths of time spent in professional head coaching. And this system CAN be used in other Leagues, other countries, and on other planets. If there are any other basketball planets, that is!

For convenience, this Guide is divided into main sections and subsections. The main sections are:

--Mechanics of Basic Real Coach Ratings
--Usage of Basic Real Coach Ratings
--Mechanics of Advanced Coach Ratings
--Usage of Advanced Coach Ratings
--Cautions Regarding Basic and Advanced Real Coach Ratings

Within each section subsections are in caps as shown.

======== MECHANICS OF BASIC REAL COACH RATINGS =========

POSITIVE FACTORS THAT AFFECT REAL COACH RATINGS
1. Number of Regular Season Games Coached: The Experience Factor:
One point is given for each regular season game coached up to 600 games, which is almost 7 1/2 seasons worth of games. If a Coach has not learned just about everything he needs to by this point, he most likely never will, so the award for experience is sharply reduced for all games coached beyond 600. 0.25 points (1/4 of a point) are given for games 601 through 1,000. Nothing at all is given for any games coached beyond 1,000 games. If a coach has not learned everything he can learn after 1,000 games, he is never going to learn it.

What about rookie and near rookie coaches? Just because they have never coached in the NBA, should their experience rating be zero? No, I don't believe so. They either have substantial coaching experience in other Leagues, or they were extremely talented and/or intelligent players, or both, or else they would not have been hired to be a head Coach in the NBA. So any coach who has coached for fewer than 200 NBA games is given exactly 200 points for experience. So rookie coaches start out with Real Coach Ratings of 200 and they go up or down from there.

2. Number of Playoff Season Games Coached: the Playoff Experience Factor:
Four points are awarded for every playoff game coached (regardless of result). The limit is going to be 300 such games. Probably no one will ever reach the limit except for Phil Jackson. He exactly reached 300 playoff games coached after he won his 10th Ring in June 2009. So Jackson will fail to get any more playoff experience points when he coaches more playoff games in 2010. Certainly by June of 2009, Phil Jackson already knew as much as he will ever know about winning NBA playoff games.

3. Number of Games Coached With Current Team:
This is a supplementary experience score which most benefits coaches who have gone the longest without being fired by their current teams. The points given are 0.25 (1/4 of a point) for all games coached with the team the Coach is currently working for.

One side of the coin is that the coach must be doing what the organization wants to avoid being fired, and he can't be a total failure basketball wise, so starting with those things he deserves credit in proportion to how long he has kept his post. The other side of the coin is that the more experience a Coach has with a particular team, the more valuable he is to that franchise, because he knows everybody and everything concerned with the franchise better and better with each passing year. Generally speaking, the more successive games a Coach has coached with the same team, the more effectively and efficiently he can help the team squeeze out wins that would otherwise be losses.

Jerry Sloan, who coming in to 2009-10 had coached a mind boggling 1,668 games for the Utah Jazz, is the ultimate example of a Coach who due to his many years with the same team is going to be more effective and efficient than he would be if he had just switched to a different team. Due partly to this factor, do not be surprised if the Jazz become a losing team shortly after Sloan finally retires.

Another name for this factor might be "franchise specific experience." For 2009-10 the Washington Wizards hired a new head Coach, Flip Saunders, who has a lot of prior experience with other teams and has a relatively high rating. But he is brand new to the Wizards, so be careful not to expect miracles or even to assume that his coaching is going to be as good from the get go as it has been in the past. Look instead for the Wizards to get a little better as the season goes along and in the coming years if Saunders remains the coach, because Saunders needs time to merge his skills and abilities with the specific factors involved in making the Wizards a winning team.

4. Regular Season Wins
Four points are assigned per regular season win.

5. Playoff Wins:
20 points are assigned per playoff win. Very slightly more than half the teams make the playoffs in the current NBA: 16 out of 30 teams. Theoretically, unless he is stuck with a truly lousy roster, any good coach can win a lot of regular season games and get his team into the playoffs. Plus, any coach at all, including a bad one, can squeak a very good or great team into the playoffs. For a good coach, merely getting into the playoffs is really not much of an accomplishment at all.

Many, many owners, managers, and fans do not seem to understand this, but the only thing that really matters with regard to coaching is what happens in the playoffs. Only the truly good coaches can win in the playoffs. The playoffs are where the wheat is separated from the chaff. In the NBA, the regular season is quite honestly nothing more than the preseason for the "playoff season," which is the season that really matters when all is said and done.

Playoff games are generally more intense in all respects: individual players' efforts, team play as a whole, and coaching efforts are all ramped up.

For all of these reasons, it is necessary to factor playoff games as being worth five times as much as regular season games. So for both wins and losses, playoff games count five times as much as regular season games do.

6. Championships
30 points are added for each winning Championship appearance. (That is 30 points regardless of how many games the Championship consisted of.) Since Championships average about 6 games, this is roughly equivalent to adding five experience points for each Championship game coached in the winning effort. Counting the four points every coach gets for experience for every playoff game, the total experience points for each Championship game (where the Championship is won) is approximately nine.

12 points are added for each Championship appearance where the Coach lost in the Championship. (That is 12 points regardless of how many games the Championship consisted of.) Counting the four points every coach gets for experience for every playoff game, the total experience points for each Championship game (losing effort) is approximately six.

NEGATIVE FACTORS THAT AFFECT REAL COACH RATINGS
1. Regular Season Losses:
5.75 points are charged for each regular season loss.

2. Playoff Losses:
28.75 points are charged for each playoff loss.

Now there will be some who leap out of their seats and say "this guy is off his rocker" when they see that the penalty for losing a playoff game is 28.75 points while the award for winning a regular season game is four points. I can assure you, ye of little faith, that I know exactly what I am doing and that this is either precisely correct or possibly the playoff loss penalty should be even greater. I have already explained why playoff games must be valued at least five times the valuation put on regular season games. A regular season loss is 5.75 points, and 5.75 times 5 is 28.75.
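As a quick sanity check on the numbers above, here is a tiny sketch (Python, with illustrative variable names) confirming that both the playoff win award and the playoff loss penalty are exactly five times their regular season counterparts:

```python
# Per the text: playoff games count five times as much as regular season games.
REGULAR_SEASON_WIN = 4.0
REGULAR_SEASON_LOSS = -5.75
PLAYOFF_MULTIPLIER = 5

print(REGULAR_SEASON_WIN * PLAYOFF_MULTIPLIER)   # 20.0   (playoff win award)
print(REGULAR_SEASON_LOSS * PLAYOFF_MULTIPLIER)  # -28.75 (playoff loss penalty)
```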

Now consider the true underlying net positive and negative scores for the four types of games and results, which you get by combining the experience award and the winning or losing number:

TRUE NET SCORES COMBINING EXPERIENCE AND WIN / LOSS SCORES TOGETHER
Regular Season Win True Net Score: 5 Points: 4 points for the win and 1 point for the experience. But it is 4.25 points for coaches (for new games) who have between 600 and 1,000 games coached since they get only .25 points for experience. And it is just 4 points for coaches (for new games) with more than 1,000 games coached since they don't get any further points for experience.

Regular Season Loss True Net Score: Minus 4.75 Points: minus 5.75 points for the loss plus 1 point for the experience. But it is minus 5.5 points (for new games) for coaches who have between 600 and 1,000 games coached since they get only .25 point for experience. And it is minus 5.75 points (for new games) for coaches with more than 1,000 games coached since they don't get any further points for experience.

Can you see what I think is the genius of this system? The more experienced coaches get experience points that obviously are not available to less experienced coaches. To partially or in some cases completely offset what would otherwise be an unfair advantage in the rating system, the more experienced coaches are expected to do somewhat better in winning and losing in order to achieve a net positive from their winning and losing toward their ratings. This is a primary mechanism used here that tends to even the playing field between coaches of widely differing amounts of experience, without being unfair to any type of coach. This whole project would have been largely a waste of time if I didn't have a good and fair way of varying the treatment of coaches with radically different amounts of experience.

Now here are the true net scores for playoff games:

Playoff Win True Net Score: 24 points: 20 for the win and 4 for the experience.

Playoff Loss True Net Score: Minus 24.75 points: minus 28.75 for the loss plus 4 for the experience.
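The true net scores above can be collected into one small function. This is just a restatement of the figures in code, assuming the experience tiers described in this guide: 1 point per regular season game under 600 career games coached, 0.25 points between 600 and 1,000, none above 1,000, and a flat 4 points per playoff game. The function names are illustrative.

```python
def experience_points(games_coached, playoff=False):
    # Experience award for one additional game at this career stage.
    if playoff:
        return 4.0
    if games_coached < 600:
        return 1.0
    if games_coached < 1000:
        return 0.25
    return 0.0

def true_net_score(games_coached, won, playoff=False):
    # Win/loss points plus the experience points for that game.
    if playoff:
        result = 20.0 if won else -28.75
    else:
        result = 4.0 if won else -5.75
    return result + experience_points(games_coached, playoff)

print(true_net_score(300, won=True))              # 5.0   (new coach, reg. win)
print(true_net_score(800, won=False))             # -5.5  (mid-career, reg. loss)
print(true_net_score(50, won=True, playoff=True)) # 24.0  (playoff win)
```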

PLAYOFF COACH SUB RATING
Mechanically, the playoff sub rating is simply the rating you get when you factor in only the playoffs-related factors. In the spreadsheet of the report, the Playoff Sub Ratings are just below the overall Ratings.

Two of the three sub ratings from 2008 are discontinued beginning 2009. Readers can now scan the raw data and get at least as much information as they could from the discontinued sub ratings. The only sub rating we are still publishing is the playoffs sub rating. (Who would have thought we would key in on that one, laugh out loud.)

In the December 2009 Ratings, George Karl is no longer at the very bottom of the playoffs sub ratings; he is ahead of Don Nelson thanks to Karl's Nuggets' 10-6 playoffs campaign in 2009. Golden State Warriors Coach Don Nelson is now dead last in the playoffs sub ratings. However, the hole that Karl dug in earlier years was so deep that the Nuggets' miraculous 2009 playoffs campaign was not enough to lift his playoffs sub rating very much overall. He is still showing up as a very, very poor playoffs coach, though his rating is not as extremely poor as it was a year ago.

As of May 2010, Karl has won 74 playoff games and lost 93. Prior to the 2009 playoffs, Karl had won just 62 playoff games and lost 83.

======= USAGE OF BASIC REAL COACH RATINGS ========

HOW TO INTERPRET DIFFERENCES IN RATINGS
We will use Phil Jackson versus George Karl from the 2009 Real Coach Ratings, published in early December, 2009. You can see that the best cautious rating system we can produce (the one most in George Karl’s favor) and not be laughed out of the room shows that Los Angeles Lakers Coach Phil Jackson has a rating about ten times that of Denver Nuggets Coach George Karl.

You can interpret this in either of two ways. The first way to look at this is similar to the way that the Real Team Ratings are interpreted: It is about ten times more likely that Phil Jackson is a better coach and will defeat George Karl in a playoff series than the other way around, assuming the raw talent and injury situations of their teams are about the same. Given equal teams, Phil Jackson is going to beat George Karl unless something really rare is going on.

The other way to interpret this is to think of the differential between the two ratings as an amount which translates into an actual real life coaching difference. Then you plug that difference in with the other differences that determine who wins a playoff series. If the coaching difference and/or the size of the coaching component is big enough, it will result in the lesser skilled team winning the series if they have the better coach.

Even though we are unable at this time to properly estimate the actual size of the coaching factor in a playoff series, we know it is NOT negligible, trivial, or even very small. Coaching may be a small rather than a "middle-sized" factor (we don't have exact knowledge of how big a factor it is yet) but if the players between the two teams are evenly matched, then even a small difference in the coaching could determine the series and a large difference between coaches would definitely determine who wins a series between teams with equal players.

In any event, the difference between Phil Jackson and George Karl is so large that even if the coaching impact on playoff games is at the low end of the possible range, George Karl would still have to have a much better team to be able to defeat Phil Jackson in a playoff series.

The same applies to Phil Jackson versus Boston Celtics Coach Doc Rivers. We think right now (November 2009) that the 2010 Championship is about a toss-up between the Celtics and the Lakers. Phil Jackson is such a great coach that he is clearly better than even good and very good coaches such as Doc Rivers. Were it not for the Lakers' coaching advantage over the Celtics, the Celtics would have to be favored to win the Ring in 2010 by maybe 4 games to 2, since the Celtics are clearly better than the Lakers in terms of raw skill and raw potential.

CERTAIN VETERAN PLAYERS CAN COACH THEMSELVES TO A LARGE EXTENT
Always keep in mind that older, more veteran teams can coach themselves to one extent or another, particularly if the roster is both highly skilled and highly experienced. It doesn't matter who comes up with the winning schemes and patterns; what matters is that someone does. Younger teams, however, always need a good coaching staff to make headway in the playoffs.

Quest for the Ring has gone on record claiming that the 2007-08 Champion Boston Celtics are a good example of a team that could coach itself well to a large extent.

COACH OBJECTIVE #1: TO AVOID BEING FIRED
Calculations indicate that the average Real Coach Rating is currently 639 and the median is about 200. So the objective of all rookie coaches must be to increase their starting rating of 200 toward the average of 639 as soon as they can do so. You can think of the range between 200 and 600 as "the proving ground" or even the "make it or break it range" for coaches. Most coaches who drop below zero instead of going up from 200 during their first 3-6 years will be bounced out of the NBA.

Coaches who have ratings below 200 for more than about five straight years, and especially coaches who have ratings below zero for about five straight years, should be fired unless the managers and owners involved are sure that the coach has not had competitive players to work with, or are sure that the coach is getting better at his job, or unless there is some other unusual mitigating factor.

Coaches who maintain their jobs with Real Coach Ratings below 200, and especially with Real Coach Ratings below zero, are frequently going to be men who have very cordial relations with the managers and owners. In other words, they are being kept on the payroll because the managers and/or the owners involved personally like the coach in question enough to brush aside any concerns about whether that coach is doing a good enough job for their team. These dubious coaches are given the benefit of the doubt, in other words, or sort of a free pass.

It is also true that some managers and owners live in fear that they might go from bad to worse if they exchange one coach for another. They simply do not have enough courage to strike out and try a rookie or a near-rookie coach, or to pick up a coach who has been fired by another team but who deserves a second chance.

The key is balance. On the one hand you don't want to be stuck out of caution or fear with a veteran coach who is simply not among the best coaches. On the other hand, you can't just strike out and pick anyone who has never coached an NBA team before but seems like he might be a good coach. Rather, you have to do a lot of homework and research. You have to spend a lot of time and make every effort to find that one in a hundred candidate who will actually become one of the better and maybe even one of the best NBA coaches.

NEVER EVER HIRE A COACH WITH A POOR PLAYOFFS RECORD IF YOU WANT TO WIN A RING
The Nuggets hired Karl despite the fact that he had a poor playoffs record and rating. When the Nuggets hired Karl, his playoffs record was 59-67. While coaching the Nuggets, Karl's playoffs record is 15-26 as of May 2010. Percentage-wise, Karl's playoff record has gotten worse while he has coached the Nuggets, not better (despite 2009).

The Nuggets were wrong to hire Karl, and they are also wrong not to fire him unless he wins the NBA Championship within the next year or two. And by the way, the Nuggets were talented enough in 2008, possibly in 2009, and again in 2010 to win a Championship if the coaching had been top notch. Coaches with losing playoff records are fired by all truly serious NBA franchises these days regardless of regular season records. Karl had a losing playoffs record when he was hired and it has only gotten worse since.

Why did the Nuggets hire Karl? I can only speculate. The Nuggets either knew in advance they would never win the Quest with Karl and hired him anyway, or they figured incorrectly that Karl's playoff record was trumped by better aspects of Karl's record, or they decided that Karl's playoff record could be excused for irrational reasons, or there was some other unknown, off the wall reason for hiring Mr. Karl.

The most favored specific theory regarding why Karl was hired is that the Nuggets decided roughly in 2002 to go for a certain kind of player who can be a major bargain because other teams generally avoid that kind of player. The Nuggets decided to go for more volatile players who might need to be contained by a crack the whip type of coach so that they don't "fly off the reservation" and harm team cohesion and morale. Karl is in fact a good coach if you have a bunch of players more emotional and more volatile than average, because for one thing he will not hesitate to bench even players who get enraged about this, that, or the other thing. He will bench anyone at any time and for any reason, good or not.

Whatever the Nuggets' management thought, they thought wrongly. If you are a team owner or manager, you can not afford to take any risk or to make any benign assumptions or weak rationalizations when you choose a head coach. If a coach has a poor playoffs record, you have no choice but to not hire that coach if you are serious about winning the Quest. There are going to be coaches who are good enough to do well in the regular season but not good enough to prevail in the playoffs. You should not be the goober who hires one of them, obviously. Let some other franchise/team get stuck in the mud with that type of coach.

I have to be blunt here to make sure I am understood. You should never, ever do what the Nuggets did if you are serious about winning the Quest. Your coach should have a good record for BOTH regular season and playoffs. The playoff record is even more important than the regular season record.

Finally, before leaving this crucial subject, I am going to state the following. Suppose you have the choice between, on the one hand, a younger coach who is considered a good or great up-and-comer but who has no NBA playoff record at all, and not much of a regular season one, and, on the other hand, a long-term veteran coach who has a decent, good, or even great regular season record but a poor, losing playoffs record. You are better off choosing the young coach with no playoff record.

In point blank and clear summary, hiring a coach with a bad playoffs record is one of the worst things you can do if you want to win the Quest.

======= MECHANICS OF ADVANCED REAL COACH RATINGS =======

THE ADVANCED SYSTEM IMPROVES THE PLAYOFF SCORES OF THE BASIC SYSTEM
The Advanced system is built on top of the basic system: everything carries over except the treatment of playoff wins and playoff losses. All of the mechanics of the basic system shown above still apply, but the basic system's awards and penalties for playoff wins and losses are null and void in the advanced system, which replaces them with a more sophisticated approach.

In the advanced version, every playoff series is looked at as a unit. We start with four measures, the offensive efficiency of the two teams and the defensive efficiency of the two teams (from the regular season, of course). Efficiency is how many points scored or how many points given up per 100 possessions. Over the course of the regular season, the thousands of possessions result in precise efficiency numbers where seemingly very small differences are actually big differences between teams.

Then we subtract the defensive efficiency from the offensive efficiency to find the net efficiency for each team. Most but not all playoff teams have positive net efficiency numbers and most teams that do not make the playoffs have negative net efficiency numbers.

Then we compare the two net efficiencies and whichever team is higher is the favorite. Of course this is true in real life: the team with the better net efficiency beats the other team the vast majority of the time, although when the differences are smaller this is not so certain.

The exact difference between the two net efficiencies is crucial, because it determines the likelihood or probability of the favored team winning. The greater the difference in net efficiency, the closer to 100% the probability that the better team will win the series. We have a carefully constructed scale to translate differences in net efficiency to how many games the underdog should win on average in a best of seven game (and a best of five) series. For example, if the difference in net efficiency is 5.0, the underdog will on average win 2.3 games in a best of seven series (with the favored team winning 4 games).

Then for each playoff series, we compare the number of games won and lost by the coach against the average or standard number of wins and losses. The advanced version then breaks down games within playoff series as follows:

Underdog team wins as expected: 16
Underdog team unexpected playoff wins: 76
Underdog team expected wins not achieved: -84
Underdog team losses as expected: -23

Favored team losses as expected: -23
Favored team unexpected losses: -84
Favored team fewer losses than expected: 76
Favored team wins as expected: 16

Wins by the favored team get 16 points (instead of the 20 they get in the basic system). But unexpected wins, which are extra wins by the underdog team (or, equivalently, fewer losses by the favored team), get almost five times as many points: 76. Note that if a coach coaches his team to an upset playoff series win, his award is 76 times the difference between the 4 wins it takes to win the series and the number of wins he was "supposed to" get.

Unexpected losses are minus 84 points each and consist of underdog teams winning even fewer games than they were supposed to (and still losing the series) and favored teams losing more games than they were supposed to (but still winning the series). If a favored team loses the whole series then the penalty is the difference between the four wins the underdog team won and the number of wins the underdog team was supposed to win in the series.
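Here is a sketch of the series-level accounting just described. The 76- and minus 84-point values come from the text, as does the example figure of 2.3 expected underdog wins (for a net-efficiency difference of 5.0); the function names are illustrative, and the full net-efficiency-to-expected-wins scale is not reproduced here.

```python
UNEXPECTED_WIN = 76
UNEXPECTED_LOSS = -84

def upset_win_award(expected_underdog_wins):
    # Award to the underdog's coach for winning the series outright:
    # 76 points for each win beyond what his team was "supposed to" get.
    return (4 - expected_underdog_wins) * UNEXPECTED_WIN

def favored_series_loss_penalty(expected_underdog_wins):
    # Penalty to the favorite's coach for losing the series outright:
    # minus 84 points for each unexpected loss.
    return (4 - expected_underdog_wins) * UNEXPECTED_LOSS

print(round(upset_win_award(2.3), 1))              # 129.2
print(round(favored_series_loss_penalty(2.3), 1))  # -142.8
```

So an underdog expected to win 2.3 games that instead wins the series earns its coach roughly 129 points, while the favored coach on the other bench is charged roughly 143.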

Unexpected wins and losses are rewarded and penalized heavily but not excessively. Unexpected playoff losses are one of the worst things that can happen to a team and a franchise, because they waste the owners’ money, because they partly waste the efforts of a lot of players and managers, and because they make the franchise less likely to attract top free agents. Unexpected playoff losses are a nightmare and the fewer of them you have the better.

Note that unexpected playoff losses are in theory supposed to be largely offset by unexpected playoff wins. Most coaches are going to have a series once in a while where their team performs below standard, but these will be mostly offset by their unexpected playoff wins.

This is the most crucial thing to keep in mind: on the downside, the main purpose of the advanced system is to flush out and penalize coaches who have more unexpected playoff losses than unexpected playoff wins. On the upside, it is to flush out and reward coaches who have more unexpected playoff wins than unexpected playoff losses.

In other words, the main purpose of the Advanced Real Coach Rating system over and above the Basic system is to assign unexpected playoff wins and losses to coaches so that coaches whose methods work better in the playoffs than in the regular season are identified and so that coaches whose methods work worse in the playoffs than in the regular season are identified.

Quest for the Ring already knows many of the basketball strategies and tactics that work better in the playoffs than in the regular season, and you do too if you read this site, because we review and illustrate most of them from time to time.

======= USAGE OF ADVANCED REAL COACH RATINGS =======
When every playoff series that a coach has ever coached has been evaluated, we will be able to correctly assign that coach to one of the following categories:

FINAL CLASSIFICATION OF COACHES BASED ON ADVANCED REAL COACH RATINGS
A: 2,000 and more: An excellent, top of the line coach to have if you want to win the Quest for the Ring
B: 1,200 to 2,000: A good or maybe a very good coach to have if you want to win the Quest for the Ring
C: 500 to 1,200: A decent but probably just mediocre coach to have if you want to win the Quest for the Ring
D: 0 to 500: A poor to mediocre at best coach to have if you want to win the Quest for the Ring
E: minus 500 to 0: A very poor coach to have if you want to win the Quest for the Ring; you have only a very, very small chance to win the Ring with this type of Coach.
F: minus 500 and less: A terrible, nightmare coach to have if you want to win the Quest for the Ring; you will definitely not win the Quest with a Coach this bad
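The letter-grade scale can be expressed as a small lookup function. Note that the scale above leaves the exact boundary values (exactly 2,000, 1,200, and so on) ambiguous, so assigning them to the higher grade here is my assumption, as is the function name.

```python
def coach_grade(rating):
    # Advanced Real Coach Rating -> letter grade, per the scale above.
    # Boundary values are assigned to the higher grade (an assumption).
    if rating >= 2000:
        return "A"
    if rating >= 1200:
        return "B"
    if rating >= 500:
        return "C"
    if rating >= 0:
        return "D"
    if rating >= -500:
        return "E"
    return "F"

print(coach_grade(2500))  # A
print(coach_grade(-700))  # F
```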

Once the system is fully operational, Quest for the Ring will guarantee that any coaches who are given an F will never, ever win the Quest. If an F coach ever wins the Quest, we will shut down this site and apologize for being grossly wrong, but trust me, that will never happen. Whether we will issue the absolute guarantee for E coaches is under review; suffice it to say for now that E coaches have only a trivial chance of ever winning the Quest.

In general, as you might already realize, the lower the grade of the coach, the better the players have to be to win the Quest for the Ring...

Coach is an A: Players need to be at least very good
Coach is a B: Players need to be at least very, very good
Coach is a C: Players need to be extremely good
Coach is a D: Players need to be historically good, one of the best teams of all time
Coach is an E: Players need to be about the best team of all time.
Coach is an F: There is no possible way any set of players can possibly win the Quest

======= CAUTIONS REGARDING BASIC AND ADVANCED REAL COACH RATINGS ========

THE WIDELY DIFFERENT AMOUNTS OF EXPERIENCE PROBLEM
There is one big hurdle (or notable shortcoming if you want to be negative) in the Real Coach Ratings, and we have largely but probably not completely solved the problem as of 2009. This problem originates in the huge discrepancies in the amount of experience between long-term veteran coaches and much younger coaches. To some extent this makes comparing NBA coaches like trying to compare apples and oranges rather than like trying to compare various apples.

In the 2008 User Guide, this was what I had to say about this issue when I tackled it for the first time:

2008 WORK ON THE EXPERIENCE DISCREPANCY PROBLEM
As I was working on this I often had a sinking feeling that trying to fairly compare coaches with more than 10 years of experience with those with less than 2 years experience would be in the end impossible. But I persevered and scrapped and fought my way to the goal line and got it done. I achieved all of the balancing that I needed to achieve. Specifically, for example, I kept the points given for experience within reason, while making sure that regular season and playoff losses were penalized to the full extent they should be.

You must keep in mind that any coach who has been fired for not winning enough in the regular season, for not winning enough in the playoffs, or for both, and has not been rehired by another team, is not on this list. We don't care about them. The whole idea in multi-billion dollar professional sports is to win more than you lose, and that most obviously and most definitely includes the coaches. So a 50/50 record in either the regular season or in the playoffs is not good enough long term, and coaches who are not better than .500 get fired and not rehired sooner or later, and those who have met that fate already are not on this list.

To reflect the reality that coaches who can not win more than they lose are sooner or later going to be fired, and will most likely never advance in the playoffs before they are fired, it is necessary to make losses carry a bigger negative number than the positive number that wins carry. But we have to avoid getting carried away. So when I add in the amount given for experience, the apparent gap between the award for winning and the penalty for losing shrinks down to a small amount.

2009 WORK ON THE EXPERIENCE DIFFERENTIAL PROBLEM
Notice that in 2008 I said “we have to avoid getting carried away” in the 2008 attempt to solve this problem. Well, it turns out I probably did get a little carried away. The heavily experienced coaches with a lot of losses were being hammered a little bit too much!

So the number of points subtracted for losses was slightly reduced for 2009. Regular season losses are now minus 5.75 (instead of minus 7).

However, due to another consideration, playoff losses are slightly greater minuses in 2009 than they were in 2008.

Where we are right now is that we are in very good shape overall, but out of respect for conservatism we may still have a small problem left with the experience discrepancy problem. In a nutshell, we decided to take the risk that the problem is not completely solved so as to avoid being overly harsh toward certain long-term coaches. "First, do no harm..."

When all is said and done, everyone, including a bad coach, can possibly improve even after many years of not improving. This fact, which we didn’t allow for last year, is the biggest reason why we tweaked the way we did. Unfortunately, the price for this is the real possibility that the experience discrepancy problem is not completely solved.

SLIGHTLY DIFFERENT REWARDS AND PENALTIES BASED ON EXPERIENCE
Even after the tweaking, this feature of the system goes a long way toward solving the experience differential problem. Here is how it works:

All coaches who have coached fewer than 600 games (currently 17 out of 30 of them) receive a full point for every regular season game for the experience factor alone. Since the award for a regular season win is 4 points and the penalty for a regular season loss is minus 5.75 points, these younger, less experienced coaches do slightly better than break even just by achieving a 50/50 regular season record. When you combine the win or loss points with the experience point, a win earns a new coach a total of 5 points, while a loss earns him minus 4.75 points.

The new coaches are learning, so the system must be slightly easier on them. They can not be expected to know everything right now that they will know in a year or two or three or four. And if they learn the right things, then they might become the next Phil Jackson or Rick Adelman!

Coaches who have coached more than 600 games but fewer than 1,000 games must do a little better than .500 in the regular season to achieve a net positive toward their overall Real Coach Ratings. These coaches get 4.25 points for each regular season win and lose 5.5 points for each loss.

The long-term veteran coaches who have coached more than 1,000 games get no experience points at all. So they get 4 points for each regular season win and lose 5.75 points for each regular season loss.

For the playoffs, all coaches have the same total (including the four experience points) gain or loss: 24 points for a playoff win, and minus 24.75 points for a playoff loss.
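To illustrate the tiering, here is a worked example showing what an exactly .500 regular season (41-41) nets a coach in each experience tier, combining the win/loss points with the experience point. The function name and the choice of sample career totals are illustrative.

```python
def per_game_points(games_coached):
    # (net points for a win, net points for a loss), including experience,
    # per the three regular season tiers described above.
    if games_coached < 600:
        return (5.0, -4.75)
    if games_coached < 1000:
        return (4.25, -5.5)
    return (4.0, -5.75)

for games in (300, 800, 1200):
    win_pts, loss_pts = per_game_points(games)
    net = 41 * win_pts + 41 * loss_pts  # a 41-41 season
    print(games, round(net, 2))
# 300 games coached:  +10.25 (new coaches come out slightly ahead at .500)
# 800 games coached:  -51.25 (must be a bit better than .500)
# 1200 games coached: -71.75 (must be clearly better than .500)
```

This makes concrete the claim that less experienced coaches break slightly better than even at .500 while more experienced coaches must beat .500 to achieve a net positive.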

REMAINING EXPERIENCE DISCREPANCY PROBLEM
The worst of the long-term veteran coaches probably have ratings that are slightly higher than what they really should be. If a Coach has received some "lucky breaks" by not being fired after bad losing seasons, and/or after bad losses in the playoffs, and he has over the years now accumulated 1,000 or more regular season games and 100 or more playoff games, his rating will likely still be slightly distorted on the high side relative to the other coaches.

This is because the long-time veteran Coach, who could have been fired a long time ago but was not fired, will max out on the experience points, and he will also have a few winning seasons to go with the losing seasons. The sum of the maximum experience points (which is 700 for regular season experience plus four times the number of playoff games) plus any positive net from winning seasons will tend to more than offset all the losses from the year(s) he might have been fired, despite the heavy negatives that losses carry.

Another way of thinking about this issue is that even if a long-term veteran Coach has a too-high rating for the reason above, that Coach would not even be in the ratings had he actually been fired. Coaching a professional sports team is about the worst job in existence for job security, since the vast majority of coaches end up being fired.

Yet another way of focusing on this problem is realizing that pro basketball coaches are fired or not fired based on different criteria.

We can not simply remove experience from the set of factors, since in virtually every career, the more experience you have, the better you tend to be. Moreover, even if we did reduce or remove the experience factor, the same problem would still be there in the case of coaches who probably should have been fired, but are not, and then end up fortunately coaching very skilled teams in subsequent years, thus piling up wins with those teams.

In other words, we have no choice but to proceed as if all coaches face the same criteria as to whether they are fired or not, even though we know that some coaches, especially veteran coaches, are treated much more leniently than others.

One other thing to keep in mind about long-term veteran coaches (the ones with more than 1,000 regular season games coached) is that once such a Coach gets older than 60, 65, and then maybe even 70 years old, his abilities will probably be less than they were when he was younger, whereas almost all coaches with little experience are under the age of 55.

For example, Utah Jazz Coach Jerry Sloan is 68 years old on March 28, 2010, so it is possible that he is a little too old now for maximum effectiveness.

The bottom line is that there will be a small number of older, veteran coaches whose ratings are misleading on the high side. Unfortunately, we are unable to completely correct for this or to properly estimate the amount of the unavoidable distortion at this time.

So we advise you when looking at the ratings to make sure you give the benefit of the doubt to younger coaches who seem to have good potential. The coaches whose ratings are most likely distorted upwards would be, at the moment, in order of the most likely amount of distortion, George Karl, Don Nelson, and Larry Brown. It is plausible, for example, that young Miami Heat Coach Erik Spoelstra is as good or better a Coach right now as is Don Nelson.

PROBABLE DOWNSIDE DISTORTIONS
The flip side of the above distortion is also going to be a problem sometimes. If you have a younger coach who has just started out, and he has a bad team to start with (and a lot more new coaches start with bad teams than good ones) then his rating will be much lower than it will be in future years if he avoids getting fired and in the future gets much better teams to work with.

However, it is also very possible that in most cases the worst teams get only the medium and poor coaches, that in other words the really good coaches never have to start out coaching a bad team, so that any downside distortions are small and mostly moot points.

Generally speaking, we are still working on a way to make the comparisons between long-time veterans and much younger, newer coaches more valid than they are in the current system. We hope of course to make a breakthrough or two for next October's Report.

BE CAREFUL REGARDING THE VERY LARGE TIME SCALE OF THESE RATINGS
Keep in mind that each coach is rated using information from every season that he has been a head coach in the NBA. It is very plausible that some of the coaches will currently be substantially better or substantially worse than their overall career ratings indicate.

But while I am on this subject, I want to warn you not to make the assumption that all or even most coaches get better as they accumulate more and more experience. There is no empirical evidence I know of to back that sweeping generalization up, nor is it in my view obvious or even likely to be true most or much of the time. It is plausible that coaches do not really improve that much after roughly 5 or 6 years of experience. It is also plausible that some of the most heavily experienced coaches have not completely updated their beliefs and coaching schemes to reflect the current ways of basketball. They may be hurting their teams a little or even a lot by persisting with strategies and tactics that used to work well years ago but are not working very well in the NBA in 2008.

IF YOU COMPLETELY DISTRUST THE RATINGS
Even if you distrust the ratings themselves, you can evaluate the raw data yourself because Quest for the Ring beginning in 2009 provides the entire spreadsheet on which the Ratings are calculated.

FUTURE CHANGES TO THE BASIC AND THE ADVANCED REAL COACH RATINGS
Are the factors set in stone forever and ever? No, but adjustments will be few, far between, and minor in the coming years. Although this is not a perfect system, it is at the very least a very good system. And it is light years ahead of having no system at all with which to fairly compare coaches of radically differing amounts of professional basketball head coach experience.