User Guide for Playoff Series Previews, September 2011<br />
<br />
Like many high level Reports at Quest for the Ring (QFTR), playoff previews are a formatted type of Report. Formatted reports have a pre-set format and include little or no custom commentary. The whole idea of formatted reports is to deliver a very large amount of important information very efficiently. The carefully planned, long evolved, and perfected formatting eliminates the need for time-consuming custom text reporting in contexts where there is really no need for it. But to fully understand a formatted Report you need to be familiar with its User Guide.<br />
<br />
In contrast to formatted reports, QFTR breaks new ground in general and reveals its latest discoveries about basketball in particular in free form (non-formatted) text reports. While formatted posts are "on the reservation", non-formatted text reports are where QFTR "goes off the reservation". Both types of reports are essential; having just one type without the other type would reduce the value of QFTR by MORE than half. <br />
<br />
In Playoff Preview Reports (PPRs), Excel Team Grids are used for quick and easy comparisons between teams. Since Excel is ultimately a sophisticated way to format information, PPRs are technically among the most intensely formatted Reports in the entire QFTR arsenal of formatted Reports.<br />
<br />
Team Grids in Excel are also the best foundational tool for managing a basketball team. For example, team grids allow managers, coaches, or anyone else to consider changes in players and/or in playing times that would improve the chances of winning playoff series and regular season games. <br />
<br />
Partly because no one is perfect, partly because relatively incompetent coaches are all too common, and partly because basketball (like many things) is more complicated than most people think it is, coaching errors are commonplace. Team Grids on Excel allow for quick flagging of coaching errors, some of which can be big enough to cost a team a playoff series or as many as a dozen regular season wins. <br />
<br />
We now proceed to detailed information about the content appearing in Team Playoff Previews in the Excel format.<br />
<br />
<span style="color: #cc6600;">============ SECTION ONE (AT THE TOP) OF PLAYOFF PREVIEW REPORTS USING EXCEL: HEAD TO HEAD COMPARISONS ============</span><br />
Using Real Player Ratings (RPRs), Section One allows for quick and easy comparison of players by position. You can compare specific players for any position. For example, you can see which team has the better starting point guard. You can very easily and quickly see which team has the better second squad small forward. And so on and so forth for each of the five positions and each of the two squads.<br />
<br />
Many young and some not so young basketball fans spend time arguing about who is the better player between two playoff starters at the same position. At QFTR we scientifically and accurately inform you of who was actually better in the current year.<br />
<br />
<span style="color: #cc6600;">SQUAD AVERAGES AND OVERALL TEAM AVERAGES</span><br />
One of the most important things to observe in the Head to Head Comparison area (Section One) is the squad Real Player Rating (RPR) averages. Carefully comparing the squad averages is very important; if you skip this you really will not be able to properly preview a playoff series.<br />
<br />
When you compare squad averages, you are essentially comparing the starters of the two teams as a whole and the non-starters of the two teams as a whole. Since basketball is partly a team game, with stronger team dynamics at work than in many other sports, when the starters of one team are substantially better than the starters of the other team, the advantaged team will often win the series by virtue of that fact alone.<br />
<br />
But keep in mind a smart coach may possibly have graduated one or two second squad players to starter for the playoffs. This will not show up on the team grids in the Report. Also, keep in mind that in the Report, players are placed into squads according to minutes played. So when a team intentionally has the best player at a position come in late in the first quarter "from off the bench" that player may be more of a second squad player out on the court even though he is shown as a first squad player in the Playoff Preview.<br />
<br />
By looking at the squad averages you can see what the average rating of the players in that squad is for each team. By comparing the first squad with the second squad, you can see how much of a drop off there is between them. Since most of the players in the first squad are starters, this is approximately equivalent to comparing the starters and the bench. The bigger the drop off, the more minutes the starters should be playing.<br />
<br />
<span style="color: #cc6600;">TEAM REAL PLAYER RATING AVERAGES</span> <br />
At the very bottom of Section One you will see a row for “Team Average” and on that row you will find the Team Real Player Rating Average (TRPRA) for each of the two teams.<br />
<br />
TRPRA is two times the first squad average plus the second squad average, divided by three. In other words, it is a weighted average of the top two squads with the first squad counted twice and the second squad counted once, which roughly corresponds to typical playing time patterns. Players in the third squad (also known as "the reserves"), the injured players, and the benched players are not counted in the team average.<br />
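As an illustration, the 2:1 weighted average can be sketched in Python; the function name is mine, not QFTR's, and the inputs are the conference-final squad averages published later in this Guide:<br />

```python
def team_rpr_average(first_squad_avg: float, second_squad_avg: float) -> float:
    """Team Real Player Rating Average (TRPRA): a 2:1 weighted average
    of the first squad and second squad RPR averages."""
    return (2 * first_squad_avg + second_squad_avg) / 3

# Typical conference-final squad averages from this Guide:
# first squad .853, second squad .708
print(round(team_rpr_average(0.853, 0.708), 3))  # 0.805
```

Note that this reproduces the .805 Final Four team average shown later in this Guide, which is a useful consistency check.<br />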
<br />
You can put substantial, but not unlimited, stock in the team average number.<br />
<br />
One weakness of TRPRA is that even among later round playoff teams, the second squad will from time to time include a player with a very low rating. How much such players play in the playoffs depends on how strapped the team is at the position and on how dumb the coaching is. <br />
<br />
Another weakness in the team real player rating average concept, which can sometimes be significant, is that, as already indicated, third squad ratings are completely ignored for the Team Real Player Rating Averages. But third squad players sometimes get fairly substantial playing time because sometimes they are fairly good players.<br />
<br />
Despite the shortcomings, TRPRA very often correctly signals which team is going to win the series. TRPRA is likely to predict the winner when the difference between the two teams is .050 or more, and it is especially likely to correctly predict the winner when the difference is .100 or more. QFTR uses TRPRA (along with other information of course) to help project which team will win playoff series.<br />
<br />
<span style="color: #cc6600;">TYPICAL POSITION, SQUAD AND TEAM REAL PLAYER RATING AVERAGES FOR THE VERY BEST TEAMS</span><br />
The following discussion is limited to the very best teams, specifically the four final teams only (the teams in the Conference finals). Position, Squad and Team averages for non-playoff teams and for teams eliminated in the first and second rounds are beyond the scope of this User Guide. <br />
<br />
POSITION AVERAGES FOR 4 CONFERENCE FINAL TEAMS<br />
Point Guard .914<br />
Shooting Guard .774<br />
Small Forward .786<br />
Power Forward .872<br />
Center .920<br />
<br />
SQUAD AVERAGES FOR 4 CONFERENCE FINAL TEAMS<br />
1st Squad .853<br />
2nd Squad .708<br />
<br />
TEAM REAL PLAYER RATING AVERAGES FOR 4 CONFERENCE FINAL TEAMS<br />
Final Four Teams .805<br />
Teams in the NBA Championship .868<br />
<br />
<span style="color: #cc6600;">TEAMS IN THE CHAMPIONSHIP</span><br />
Many Championship teams will have at least one position where the average RPR of the two players who play it the most is greater than .950. Championship teams will sometimes feature two positions where the average of the top two players is greater than .900 with the most common combos being point guard and either center or power forward. At the low end, Championship teams will very seldom have any position where the best two players average below .700. <br />
<br />
But some mere playoff teams will have at least one position where the average of the top two players at the position is a little less than .700. The most common positions for this situation would be small forward and shooting guard. As you might expect, playoff teams that have even one position where the top two players who play it average less than .700 are generally the ones eliminated in the early rounds.<br />
<br />
<span style="color: #cc6600;">NBA OVERALL (ALL TEAMS) REAL PLAYER RATING EVALUATION SCALE</span> <br />
For comparison purposes this Guide now shows the overall Real Player Rating evaluation scale for ALL NBA players and ALL teams. This reminds you that many of the players on the four conference final teams are way above average players:<br />
<br />
<span style="color: #cc6600;">SCALE FOR REGULAR SEASON REAL PLAYER RATINGS</span><br />
Perfect Player for all Practical Purposes / Major Historic Super Star: 1.100 and more<br />
Historic Super Star: 1.000 to 1.099<br />
Super Star: 0.900 to 0.999<br />
A Star Player / A well above normal starter: 0.820 to 0.899<br />
Very Good Player / A solid starter: 0.760 to 0.819<br />
Major Role Player / Good enough to start: 0.700 to 0.759<br />
Good Role Player / Often a good 6th man, can possibly start: 0.640 to 0.699<br />
Satisfactory Role Player / Generally should not start: 0.580 to 0.639<br />
Marginal Role Player / Should not start except in an emergency: 0.520 to 0.579<br />
Poor Player / Should never start: 0.460 to 0.519<br />
Very Poor Player: 0.400 to 0.459<br />
Extremely Poor Player: 0.399 and less<br />
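As an illustration, the scale can be encoded as an ordered lookup of band floors; this is a sketch of mine, not part of QFTR's published tooling:<br />

```python
# Lower bound of each band in the regular-season RPR scale, best band first.
RPR_SCALE = [
    (1.100, "Perfect Player for all Practical Purposes / Major Historic Super Star"),
    (1.000, "Historic Super Star"),
    (0.900, "Super Star"),
    (0.820, "A Star Player / A well above normal starter"),
    (0.760, "Very Good Player / A solid starter"),
    (0.700, "Major Role Player / Good enough to start"),
    (0.640, "Good Role Player / Often a good 6th man, can possibly start"),
    (0.580, "Satisfactory Role Player / Generally should not start"),
    (0.520, "Marginal Role Player / Should not start except in an emergency"),
    (0.460, "Poor Player / Should never start"),
    (0.400, "Very Poor Player"),
]

def rpr_label(rating: float) -> str:
    """Return the evaluation-scale label for a Real Player Rating."""
    for floor, label in RPR_SCALE:
        if rating >= floor:
            return label
    return "Extremely Poor Player"

print(rpr_label(0.805))  # Very Good Player / A solid starter
```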
<br />
<span style="color: #cc6600;">AVERAGE RATINGS BY POSITION</span><br />
Not all positions are created equal. In pro basketball, point guard and center are the most important positions, power forward is in the middle, and small forward and shooting guard are the least important. (Some teams will have a different pattern.) The following are good estimates for average ratings by position among all NBA players who play 300 minutes or more. Very few small forwards and shooting guards who don't fit at other positions are superstars; most superstars are players who can play point guard, power forward, or center.<br />
<br />
Point Guard .750<br />
Shooting Guard .635<br />
Small Forward .645<br />
Power Forward .715<br />
Center .755<br />
All Positions / All Players (NBA Overall Average) .700<br />
<br />
To quickly and fairly compare two players who play different positions, convert their Ratings as follows: <br />
<br />
Point Guards: Subtract .050; for example, .700 becomes .650<br />
Shooting Guards: Add .065; for example, .700 becomes .765<br />
Small Forwards: Add .055; for example, .700 becomes .755<br />
Power Forwards: Subtract .015; for example, .700 becomes .685<br />
Centers: Subtract .055; for example, .700 becomes .645<br />
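These conversions can be sketched as a simple lookup table; the helper and its names are illustrative, not part of QFTR's published tooling:<br />

```python
# Adjustments for comparing players across positions, from the list above.
POSITION_ADJUSTMENT = {
    "PG": -0.050,
    "SG": +0.065,
    "SF": +0.055,
    "PF": -0.015,
    "C":  -0.055,
}

def cross_position_rating(rating: float, position: str) -> float:
    """Convert a rating so players at different positions compare fairly."""
    return round(rating + POSITION_ADJUSTMENT[position], 3)

print(cross_position_rating(0.700, "SG"))  # 0.765, as in the example above
print(cross_position_rating(0.700, "PG"))  # 0.65, as in the example above
```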
<br />
<span style="color: #cc6600;">TEAMS SHOULD AVOID PLAYING LOW RATING PLAYERS IN THE PLAYOFFS</span><br />
Often, especially on the best coached teams and on the primary contenders, a second squad player with a relatively low rating will be strategically benched during the playoffs. Players at the nearest position can fill in at the position.<br />
<br />
In general, centers and point guards with ratings below .650 should play sparingly in the playoffs or not at all. Power forwards with ratings below .615 should play sparingly or not at all in the playoffs. Small forwards and shooting guards with ratings below .545 and .535 respectively should play sparingly or not at all in the playoffs.<br />
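As a sketch, the guideline above can be checked programmatically; the function name and position codes are mine, not QFTR's:<br />

```python
# Minimum playoff ratings by position per the guideline above; players
# below these floors should play sparingly or not at all in the playoffs.
PLAYOFF_FLOOR = {"PG": 0.650, "C": 0.650, "PF": 0.615, "SF": 0.545, "SG": 0.535}

def should_play_in_playoffs(rating: float, position: str) -> bool:
    """True if the player's rating meets the playoff floor for his position."""
    return rating >= PLAYOFF_FLOOR[position]

print(should_play_in_playoffs(0.640, "PG"))  # False: below the .650 point guard floor
print(should_play_in_playoffs(0.640, "SF"))  # True: above the .545 small forward floor
```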
<br />
<span style="color: #cc6600;">============ SECTION TWO (LOWER SECTION) OF PLAYOFF PREVIEWS USING EXCEL: TEAM GRIDS ============</span><br />
<br />
<span style="color: #cc6600;">FIRST SQUAD, SECOND SQUAD, AND RESERVES</span><br />
A depth chart shows you team policy regarding who starts and who are the backups and in what order for the five positions. The team grid is based on the depth chart style. However, players (other than players acquired during the season from trades; see below regarding them) are placed into first squad, second squad, and third squad according to minutes played, not according to the latest ESPN or any other depth chart, or in other words not according to anyone's estimation of what the team policy is. <br />
<br />
Instead of using depth charts, whoever has played the most minutes at a position is shown in the “1st Squad” whether or not that player starts at the position. Whoever has played the second most minutes at a position is shown in the "2nd Squad" regardless of that player's position on any depth chart. Whoever has played the third most minutes at a position is shown in the "Reserves" (which could have been labelled "3rd Squad" instead).<br />
<br />
There is a notable exception to the rule for who goes in which squad. If a player has been acquired during the season and he is listed as the starter on the ESPN depth chart, he will be shown as first squad. Similarly, if a player acquired during the season is shown as the first backup to the starter in the depth chart he will be shown as second squad regardless of minutes. In other words, the depth chart prevails over minutes in the case of players acquired by trade during the season. This makes sense because minutes played for the prior team could not reasonably be counted for the current team.<br />
<br />
<span style="color: #cc6600;">PLAYERS WHO MOST LIKELY WILL NOT BE PLAYING</span><br />
On a Team Grid, just to the right of the “3rd Squad" column you see two grey areas. From left to right the first one is for players who are most likely or definitely out for much or for all of the series for some reason, usually due to injury. <br />
<br />
The rating for players who will not be playing is shown as long as the player played at least 300 minutes in either the current year or the previous year. If the injured player didn't play at least 300 minutes in either of those years, then "none" is shown for the rating for both years. Such players most likely would not play even if they were available to play.<br />
<br />
The second grey shaded area to the right is for players who could play but almost certainly will not play because they played fewer than 300 minutes during the regular season. The 300 minute threshold is the minimum needed for a hidden defending adjustment and therefore the minimum needed for a player to get a Real Player Rating. It is also used here as the threshold for determining whether a player was essentially benched for the season. 300 minutes is less than four minutes a game, which is a very good dividing line for saying whether a player was benched for the season or not. You can get close to 300 minutes on garbage time alone, so a player who does not reach 300 minutes was basically benched.<br />
<br />
<span style="color: #cc6600;">PLAYERS ACQUIRED BY TRADE</span><br />
We have already described how players acquired by trade are placed with respect to what squad they are in. Here we discuss how we determine what rating to show for them.<br />
<br />
Players acquired by trade during the season who have played at least 300 minutes for their new team (during the regular season) are treated on the grid as if they were on the team the entire season. The rating you see for them is for their new, current team minutes. The previous team rating is considered to be irrelevant for the grid.<br />
<br />
Players acquired by trade during the season who played at least 300 minutes for their previous team this season but NOT at least 300 minutes for their new team are shown as "more or less benched". The rating you see for them in the "more or less benched" column is necessarily their rating on their previous team this season.<br />
<br />
If the player acquired by trade has never played at least 300 minutes for any team, he is treated like any other player who has never played 300 minutes or more. How those players are shown on the Team Grids immediately follows.<br />
<br />
<span style="color: #cc6600;">PLAYERS WHO HAVE NEVER PLAYED AT LEAST 300 MINUTES IN ANY SEASON</span><br />
These players will be listed in the "More or Less Benched for the Season" column. No rating can be computed for them for any year so "none" is shown for prior year rating. Rookies who didn't get to play much in their first years are commonly shown this way. Other than garbage time, it is extraordinarily unlikely that any such players will play in any playoff game in the current year.<br />
<br />
In the "More or less Benched" area, the Real Player Rating that is shown is the one from the most recent year the player played at least 300 minutes. What year that was is shown right next to their rating. Sometimes you can spot a player who should have played more than 300 minutes in this area. Generally, players in the More or Less Benched area of the Team Grid will not be playing in any playoff game except perhaps in garbage time.<br />
<br />
<span style="color: #cc6600;">COMPARING TEAMS BY POSITION</span><br />
The position averages are shown ONLY on the Team Grids (in Section Two) of the Playoff Preview Report. They are not really relevant for the head to head comparison area (Section One). The header abbreviation used on the grids for the position average column is "POS AVGS". <br />
<br />
By looking at position averages in Section Two you can compare the two teams position by position. For each position, only the ratings of the first squad and of the second squad player are considered for the position average. And the rating of the first squad player at each position counts twice as much as the rating of the second squad player at each position. In other words, for each position the position average is two times the rating of the first squad player plus the rating of the second squad player divided by three.<br />
<br />
Reserves (third squad) players generally do not play and so their ratings are ignored for the position calculations.<br />
<br />
<span style="color: #cc6600;">WHAT IF THERE WAS ONLY ONE PLAYER WHO PLAYED AT LEAST 300 MINUTES AT A POSITION?</span><br />
The position average calculation assumes that there were at least two players who played at least 300 minutes at each position, one in the first squad and one in the second squad. If there is only one player who played 300 minutes or more at a position (who is in the first squad), there is a special rule: 75% of that single player's rating is used as the rating for the second squad player at that position. The 25% reduction is justified because one or more players at other positions will have to fill out the position that has only one player. Those other position players will obviously generally not be as valuable at the position as players dedicated to that position are.<br />
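The position average, including the special rule for a lone 300+ minute player, can be sketched as follows; the function name is illustrative, not QFTR's:<br />

```python
from typing import Optional

def position_average(first_rating: float,
                     second_rating: Optional[float] = None) -> float:
    """Position average per the Guide: the first squad player's rating
    counts twice and the second squad player's rating counts once. If no
    second player reached 300 minutes at the position, 75% of the lone
    player's rating stands in for the missing second squad rating."""
    if second_rating is None:
        second_rating = 0.75 * first_rating
    return (2 * first_rating + second_rating) / 3

print(round(position_average(0.900, 0.700), 3))  # 0.833, two rated players
print(round(position_average(0.900), 3))         # 0.825, lone 300+ minute player
```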
<br />
What if there isn't much fill-in? If the single player consumes most of the playing time because he is a superstar, the 25% reduction is still justified because when any player plays most of a game, he is often not as good late in the game due to not being rested enough.<br />
<br />
User Guide for Real Coach Ratings as of January 2011<br />
<br />
<span style="color:#ff6600;">======= SECTION ONE: INTRODUCTION =======<br /></span>Quest for the Ring is proud and pleased to present what is apparently the world's first serious effort to scientifically and accurately rate and rank all of the current NBA head coaches. Even the academic oriented basketball statistics sites do not have any formulas or specialized ratings for coaches, although some of them, thank goodness, keep track of basic coach data including wins and losses.<br /><br />The QFTR coach rating product is called Real Coach Ratings (RCR). The first edition of these annual ratings, which compared to the latest version was relatively crude (and yet still much more than mere opinion), was published in October 2008. The second edition, which featured substantial but relatively modest improvements over the 2008 edition, was published in early December 2009. In late November 2010 the third edition, which featured relatively large scale and important improvements over the 2009 edition, was published. At this time it is not known to what extent it will be desirable and possible to improve RCR further, but there is a fairly high probability that most and possibly all future changes to and expansions of RCR will be small compared with the changes and expansion in 2010.<br /><br />Why should the coaches hide behind a black curtain as they do in the USA? Concerning coaches, there is virtually a total lack of the kind of statistical comparing and contrasting that goes on with players 24/7. 
To say there is a double standard where players get the short end of the stick would be an understatement. Coaches can get away with relative incompetence and negligence for many years, in some cases indefinitely, whereas players will within days, weeks, or a few months at the most have their minutes cut at the least, and they can easily be bounced around the NBA or demoted to some other League. When QFTR started to rank and rate Coaches in 2008, it was way, way overdue that someone did it.<br /><br />The big Corporation sites such as ESPN have editorial limitations which prevent them from being severely critical of NBA head coaches, managers, or owners. ESPN writers can be mildly critical at the most (which in practice means they have to hint at criticism rather than directly criticize). For heavy criticism of NBA coaches, managers, or owners, you have to go somewhere other than ESPN, CBS Sports, Fox Sports, and NBC Sports. As one of many examples, you might see some heavy criticism at SlamOnline.com. And then even when you do venture elsewhere and see some heavy criticism of coaches, managers, or owners, you are most often going to see only opinions as opposed to conclusions based on hard research. I mean, if you are lucky, the opinions are dead on accurate, but since there is little if any evidence from research backing up those opinions they could easily be wrong. Here at QFTR it is the reverse: you seldom will see a mere opinion and most of what you see are conclusions backed up by valid and adequate research.<br /><br />I can pretty much guarantee you that no one has ever, even with the capabilities created by the Internet age, put in as much effort, thought, and technology as QFTR has into fairly comparing NBA coaches with widely different lengths of time spent in professional head coaching. 
Despite the fact that QFTR has little or no competition for coach ratings, it applies full scale quality control to RCR and provides a very detailed User Guide that exceeds 20,000 words. And the Real Coach Ratings (RCR) system CAN be used in other Leagues, other countries, and on other planets, if there are any other basketball planets, that is!<br /><br />The Real Coach Rating system was extensively improved in the second half of 2010. The biggest improvement is the new factor called "Playoffs Games Coaching Score". A lot of time went into developing this factor, much of which went into developing an underlying database called the "NBA Playoffs Series, Teams and Coaches Database". This database covers every playoff series played since 1980 except for twenty best of three first round playoff series played between 1980 and 1983.<br /><br />To summarize simply, for each series a statistically valid estimate of exactly how many games should have been won by each team is calculated (to two decimal places, for example, 3.25 wins); the actual number of wins is then compared to this estimate, and either a positive or a negative score is derived from the difference.<br /><br /><span style="color:#ff6600;">THE NBA PLAYOFFS SERIES, TEAMS, AND COACHES DATABASE</span><br />In 2010 Quest for the Ring developed a database which has details about virtually all playoff series of the world’s premier pro basketball League, the NBA, from 1980 to the present. The number one reason the database was developed was so that RCR could be substantially improved. Specifically, one of the main objectives for creating this database was to identify which coaches of pro teams win more games in the playoffs that they “were supposed to lose” than they lose games that they were "supposed to win" (net playoff winners). 
And of course, we also want to find out which coaches lose more playoff games they were supposed to win than they win playoff games they were supposed to lose (net playoffs losers). (And of course there are some coaches who win some that should have been losses and lose some that should have been wins whose overall record on that is about even up.)<br /><br />In late November 2010 and in very early 2011 much of the information that can be obtained from the database was published in various Reports. See especially:<br /><br /><a href="http://nuggets1.blogspot.com/2010/11/nba-playoffs-upsets-how-many-are-there.html">“NBA Playoffs Upsets: How Many are There and Why do They Happen?”</a><br /><br /><a href="http://nuggets1.blogspot.com/2010/11/real-coach-ratings-for-nba-2010-11-look.html">“Real Coach Ratings for the NBA, 2010-11, Look Ahead”</a><br /><br /><a href="http://nuggets1.blogspot.com/2010/11/official-nba-coach-recommendations-can.html">“Official NBA Coach Recommendations: Can the Coach of Your Team Win the Quest or Not?” </a><br /><br />Note however that the actual database has not been published and is not scheduled to be at this time. Not all of the information that can be obtained from this database has been published in Reports yet. And although QFTR has more and more in recent years published Excel worksheets that are products of databases or in effect are micro databases, the templates for the largest databases can not be published due to risks associated with copyright violation. The QFTR public email address can be used for inquiries about how someone could possibly obtain a copy of the database and about the terms of use for it. 
For the email address, at the QFTR home page, click the “Contact” link that is on one of the horizontal menus just under the banner.<br /><br />Using what is formally known as the “NBA Playoffs Series, Teams, and Coaches Database", and also using knowledge about statistics and basketball, it has been proven beyond a shadow of a doubt that some coaches are better in the regular season than they are in the playoffs. Actually, to be more precise, the playoff losing coaches are ones who have their teams playing in ways that lead to relatively more wins in the regular season than in the playoffs. And vice versa: coaches who win extra games in the playoffs have selected strategies and tactics that work better in the playoffs than they do in the regular season.<br /><br />This is not really all that surprising as long as you know that the game of basketball itself changes a little in the playoffs from what it is in the regular season. The rules stay the same and to the untrained eye it may seem like the same game, but in reality the way it is played changes a little and the way the referees call games changes a little. Although most people do not know all of the details of the changes (the magnitudes and the components and so forth) most people are aware in general terms that defending is more important in the playoffs than it is in the regular season. To state it a little differently, most people are aware that many if not most teams ramp up their defending for the playoffs; they play defense more aggressively, more energetically, more athletically, and sometimes smarter.<br /><br />Defending can be improved almost overnight through will and effort. But this is not really true with offense. Here it’s appropriate to insert a few paragraphs from the User Guide to Real Team Ratings:<br /><br /><span style="color:#ff6600;">DO NOT MAKE THE MISTAKE OF OVERSTATING THE IMPORTANCE OF DEFENSE</span><br />But don’t fall into a trap here; don’t get carried away. 
In basketball defense is relatively less important than it is in many and very possibly most other sports. Basketball is designed to be a game that favors the offense more than many, many other sports do.<br /><br />The tightrope here is that on the one hand you have to realize that defense is more important in the playoffs than it is in the regular season. On the other hand you have to understand that in basketball exactly how important the defense can be is limited fairly strictly. Defense alone cannot possibly win you a Championship in basketball.<br /><br />By contrast, in American pro football the limitations on how important the defense can be are far weaker, meaning that unlike in basketball, you can win the Super Bowl Championship in football pretty easily with the best defense in the League but a below average offense. For example, the Pittsburgh Steelers have done this several times over the years. But in basketball it is extremely difficult (and you are going to need some luck) to win the Championship with even the best defense in the League but only the 20th best offense (out of 30). What you really need in basketball to go along with the best defense in the League is at the very least the 15th best offense (out of 30); and to have a good chance you need at least the 10th best offense to go along with the best defense.<br /><br />So even though in basketball defense is more important in the playoffs than it is in the regular season, the magnitude of the change is not really all that large; in basketball defense is only a little more or, arguably in some cases, moderately more important in the playoffs than in the regular season.<br /><br />Note also that, ironically, the teams that are the very best defensively in the regular season are unable to increase the quality of their defending in the playoffs as much as teams that come into the playoffs with lower ranked defenses. 
Coming into the playoffs, teams with one of the best two or three offenses in the League but whose defenses are down around 10th best are generally more likely to win the Championship than teams which come in with one of the top two or three defenses but only about the 10th best offense.<br /><br />It’s obvious that teams have the opportunity to be better defensively in the playoffs than they were in the regular season; after all, this happens all the time. Defensively in the playoffs, it’s mostly a matter of doing the same things that were done in the regular season harder, faster, and/or smarter. But the opportunity for a team to be better offensively in the playoffs than it was in the regular season is very limited. In other words, offensively, what you saw in the regular season is pretty much all you are going to see in the playoffs. Teams should assume they can improve a little defensively but they should never ever assume they can get substantially better offensively when the playoffs come, because that is unlikely to happen.<br /><br />This is indirectly another reason why teams that run slightly organized offenses are much smarter and more likely to win The Quest for the Ring than are the teams that run more street ball type offenses. Coaches who run the street ball type offenses often think that strategy will work better in the playoffs than in the regular season. They may think that unlike a slightly organized offense a street ball type offense can be ramped up in the playoffs. And they may think that a street ball type offense is exactly what you want to try to offset the ramped up defenses you see in the playoffs.<br /><br />All of these suppositions are false to one extent or another. First, street ball type offenses work less well in the playoffs against ramped up defenses than they do in the regular season against lesser defenses. Second, you cannot substantially ramp up any type of offense in the playoffs including the street ball type. 
Third, ramped up defenses are relatively more effective against street ball type offenses than they are against slightly organized offenses.<br /><br />For offense, more so than for defense, it is crucial that in the regular season you play in a way that will allow you to win in the playoffs. For defense, playing that way in the regular season is strongly recommended but not strictly required.<br /><br />For convenience, this Guide is divided into main sections and subsections. The main sections are:<br /><br />Section 1 Introduction (Which ends here)<br />Section 2 Components of and Format of Real Coach Ratings Reports<br />Section 3 Discussion of and Calculation of Factors used for the Playoffs Sub Rating<br />Section 4 Discussion of and Calculation of Factors used for the Regular Season Sub Rating<br />Section 5 Interpretation of Ratings and Evaluation of Coaches<br />Section 6 Cautions Including the Well Known Experience Gap Problem<br /><br />Within each section, subsection headings are in all caps as shown.<br /><br /><span style="color:#ff6600;">======= SECTION TWO: COMPONENTS OF AND FORMAT OF REAL COACH RATINGS REPORTS</span> <span style="color:#ff6600;">=======</span><br />Starting in 2010, QFTR produces two Real Coach Ratings Reports. One, scheduled for August, is called the "Look Back Version" and, as the name implies, gives the ratings for all the head coaches from the season just gone by. The other, scheduled for October, is called the “Look Ahead Version” and gives the ratings for all the head coaches as the new season gets underway.<br /><br />Note that QFTR has data that would allow a rating to be calculated for any coach who has coached any playoff series in 1980 or later (including retired and deceased coaches). This information will be published as time permits in future years. 
A total of 89 coaches have coached at least one playoff series since 1980, all of whom are in the database.<br /><br />Anyone who has seen a prior Coach Ratings Report will notice that the format of the Report has changed and that the Report is even bigger than before. Yes, this Report is longer than most, but the length is justified: if a team has the wrong coach it is going to be wasting money and wasting player talents. For any of the worst playoffs coaches, winning the NBA Championship is literally impossible unless perhaps they end up with one of the very best teams of all time, and even then the poor playoffs coach might still lose the Championship.<br /><br />The RCR Reports are now divided into three primary sections:<br /><br />--Rankings<br />--Key Details About Coaches<br />--Coach by Coach Details<br /><br />Each primary section is divided into subsections (which are themselves sometimes divided further).<br /><br />The Rankings Section of a RCR Report is the core of the Report, and there are three subsections for it, all of which are rankings:<br /><br />--Real Coach Ratings (overall)<br />--Real Coach Playoffs Sub Ratings<br />--Real Coach Regular Season Sub Ratings<br /><br />The second of the three primary sections of a RCR Report, the Key Details About Coaches Section, contains four subsections:<br /><br />--Listing by team of coaches who appear in the report<br /><br />--Coaching changes by team (appears in the Look Ahead Version)<br /><br />--Coaches who QFTR guarantees will never win The Quest for the Ring (and those coaches close to this status)<br /><br />--Coaches who have never coached any NBA playoff games (and who because of this have a Playoffs Sub Rating of zero)<br /><br />The first two and the fourth of these are self-explanatory. 
For the criteria used to declare that a coach will never win the Quest, see “Section 5: Interpretation of Ratings and Evaluation of Coaches” below.<br /><br />The third of the three primary sections of a RCR Report, the Coach by Coach Details Section, consists of numerous facts about all the coaches. The coaches are presented alphabetically by team. Let’s look at an example to see what information can be found here. We’ll use Larry Brown, coach of the Charlotte Bobcats for 2010-11:<br /><br />CHARLOTTE BOBCATS<br />COACH: LARRY BROWN<br />Real Coach Rating: 2420.14<br />Rank Among 2010-11 Coaches: 2 out of 30<br /><br />PLAYOFFS / REGULAR SEASON BREAKDOWN<br />Playoffs Rating: 2199.00<br />Playoffs Rank: 2 out of 30<br />Regular Season Rating: 221.14<br />Regular Season Rank: 14 out of 30<br /><br />PLAYOFFS DETAILS<br />Playoffs experience: Number of playoff games coached: 193<br /><br />Net Playoff games WON that should have been losses: 16.1<br /><br />How many EXTRA playoff games this coach will WIN out of 100: 9.4<br /><br />NBA Championships won: 1<br />Number of times this Coach won a Conference final but not the Championship: 2<br /><br />REGULAR SEASON DETAILS<br />Games coached with current team: 164<br />Regular season games coached: 1974<br />Regular season wins: 1089<br />Regular season losses: 885<br /><br />As you can see, most of this is self-explanatory.<br /><br />As for the more mysterious items, first note that the overall Real Coach Rating equals the sum of the Playoffs Sub Rating and the Regular Season Sub Rating. One of the many interesting things about RCR is that you can easily see that some coaches have much higher playoffs ratings than regular season ratings (as Larry Brown does) whereas other coaches have much higher regular season ratings than playoffs ratings (as George Karl does). 
This is more proof of what QFTR talks about all the time: the playoffs are more different from the regular season than most people think, and some coaches are good for the regular season but bad for the playoffs.<br /><br />In the Playoffs Details area, there are two things that are going to be mysterious, most likely because no one has ever calculated such things until now. The first item is this one: “Net Playoff games WON that should have been losses: 16.1”. This is not a rate but an absolute, actual number. It is neither a directly observable number nor a certain number, but rather a number derived from the model used in the playoffs database.<br /><br />Why is this number valid? QFTR strongly endorses the database, all its components including its formulae, and all results derived from the database. To see if you agree with QFTR, and for all of the details about the database and about how information is derived from it, see "Section 3: Discussion of and Calculation of Factors used for the Playoffs Sub Rating" below.<br /><br />The first of the two mysterious items, the number of wins that were supposed to be losses (or the number of losses that were supposed to be wins), is free of the kind of statistical error involved with rates (discussed immediately below), so QFTR publishes it for all coaches who have coached at least one playoff game. But another kind of statistical error is involved, so extreme caution is warranted when evaluating coaches who have coached fewer than 25 playoff games. See Section Six for complete details.<br /><br />The actual real life absolute minimum number of playoff games a coach in the database could have coached is three. 
Although the database begins with 1979-80, it excludes all four of the first round playoff series played each year from 1980 through 1983 because those were best of three series, which are so short that the database model used to determine unexpected wins and losses is not statistically valid. In a best of three, whichever team wins two games first wins the series.<br /><br />Note that from 1980 through the present, sixteen teams have made the playoffs every year, but the format of the playoffs has changed several times. From 1980 to 1983 there were only four first round series, and these were best of threes. In those years, four teams were given first round byes; these four played the winners of the round one series in round two. In 1984 the playoffs format was changed extensively: there were now eight first round series instead of four, they were best of five rather than best of three, and there were no more byes. Both before and after 1984, rounds after round one were all best of seven series.<br /><br />Starting with 1984, all series (including the round one best of fives) are included, since the model can be used without excessive statistical error for best of five series, where whoever wins three games first wins the series. The last year the round one best of five was employed was 2001-2002. 
Starting the following year and continuing through the present, round one series have all been best of sevens (and of course all the other rounds have remained best of sevens).<br /><br />If a coach has not coached any playoff games, this is clearly evident because it is reported this way (in the Coach by Coach Details Section):<br /><br />Playoff games won that should have been losses: 0<br />Playoff games lost that should have been wins: 0<br /><br />Going back to the Larry Brown example for the Coach by Coach Details Section of the RCR Report, the second mysterious item, which is right below the first, is this: “How many EXTRA playoff games this coach will WIN out of 100: 9.4”. (Or it could tell you how many EXTRA playoff games the coach will LOSE out of 100.) This is a rate, with the actual number of extra wins or losses as the numerator and the actual number of games coached as the denominator. The words “EXTRA” and “WIN” or “LOSE” are in all caps to make the coach detail section easy to read or skim through.<br /><br />All rates calculated from relatively small amounts of real-event data have relatively high statistical errors, and the statistical error increases sharply for very small and tiny amounts of data. To avoid reporting rates that are likely to be in error, QFTR does not publish rates for any coach who has coached fewer than 25 playoff games. For these inexperienced coaches, instead of a rate, you will see:<br /><br />“The extra playoff games this coach will win or lose out of 100 is not reported for this coach due to insufficient number of playoff games.”<br /><br />Remember that both of these very important numbers come directly from the NBA Playoffs Series, Teams, and Coaches Database. 
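The rate just described can be sketched in a few lines of code. This is only a minimal illustration of the numerator-over-denominator arithmetic and the 25-game cutoff as stated above; the function name and the sample inputs are hypothetical, not QFTR's actual code or data.

```python
# Minimal sketch of the "extra playoff games won (or lost) out of 100" rate.
# The 25-game cutoff is from the Guide; names and sample numbers are illustrative.

MIN_PLAYOFF_GAMES = 25  # below this, no rate is reported

def extra_games_per_100(net_extra_wins, games_coached):
    """Net extra wins (positive) or losses (negative) per 100 games coached.

    Returns None for coaches with too few playoff games, mirroring the
    Report's "not reported ... due to insufficient number of playoff games".
    """
    if games_coached < MIN_PLAYOFF_GAMES:
        return None
    return round(100.0 * net_extra_wins / games_coached, 1)
```

For example, a hypothetical coach with 16.1 net extra wins over 161 playoff games would rate +10.0 extra wins per 100, while a coach with only 10 games coached gets no rate at all.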
For further details, see "Section 3: Discussion of and Calculation of Factors used for the Playoffs Sub Rating" below.<br /><br />Note that even those who disagree with the innovative QFTR evaluation measures, and those who are not sure and don’t have time to evaluate the model, can make extensive use of the raw data in the Coach by Coach Details subsection. But you can only do this with the regular season details, because the playoffs details are almost entirely made up of custom designed information and the simple playoffs wins and losses are NOT published by QFTR.<br /><br />Quite frankly, raw playoffs wins and losses are not only inferior to what QFTR does publish, they have very little information value in general. Unless you know what the playoffs record was “supposed to be”, you can’t do much of anything with the raw wins and losses (or with the raw percentage of playoff wins). For one thing, there are radical differences in how many playoff games different coaches have coached. Another problem is that many coaches have coached too few games to support any judgments based on raw wins and losses. Yet another problem (and it is a big one) is that different coaches average different quality of players over their playoffs coaching careers. All of these problems are tackled and largely or completely solved by the QFTR methodology.<br /><br /><span style="color:#ff6600;">WHY THE SUB RATINGS ARE NEEDED AND ARE AT LEAST AS IMPORTANT AS THE OVERALL RATINGS</span><br />As you know already, the RCR system involves two sub ratings that are combined to get the overall coach ratings. With all other QFTR systems the overall rating is more important than any of the sub ratings. With Real Coach Ratings, though, the playoffs and regular season sub ratings are by themselves extremely important and at this time are considered more valid than the overall ratings. The reasons are rather involved and are discussed in Section 5. 
QFTR thinks that the playoffs sub ratings are more important than either the regular season sub ratings or the overall ratings, but of course QFTR is biased because it is focused like a laser on the NBA playoffs and championship. For much more about this subject see “Section 5: Interpretation of Ratings and Evaluation of Coaches”.<br /><br /><span style="color:#ff6600;">NUMERICAL PARAMETERS OF RATINGS AND SUB RATINGS</span><br />Only a handful of coaches (who are likely the worst coaches) have overall Real Coach Ratings below zero. Unlike Real Team Ratings, where all the ratings average out to about zero (and where the teams not likely to make the playoffs have negative scores), with Real Coach Ratings the vast majority of the coaches have positive ratings. And many if not most of the coaches who end up with negative ratings are going to be only slightly below zero.<br /><br />One of the ways the QFTR system is validated is that coaches with low and negative ratings are much more likely to be fired than ones with higher ratings.<br /><br />But the firing of coaches with negative ratings is far from automatic. Unfortunately, some teams persist with coaches who have negative ratings, who in many cases could not possibly win The Quest for the Ring, and in some cases will never even be truly successful regular season coaches either. Apparently, managers and owners have a whole lot of difficulty evaluating coaches, something which is not surprising here at QFTR given all we have discovered and proven.<br /><br />Let’s look at the average, the median, and the range of the overall ratings and of the two sub ratings.<br /><br />In the November 2010 (like many QFTR Reports it was a little late) Look Ahead Version, the average Real Coach Rating is 706 and the median is 275. The highest rating is 8,801 (Phil Jackson, with Larry Brown the second highest at 2,420). The lowest overall rating is -326 (Mike D’Antoni). 
Twenty five coaches have overall Real Coach Ratings above zero and five coaches have ratings below zero.<br /><br />In the November 2010 Look Ahead Version the average playoffs sub rating is 227 and the median is 0. The highest playoffs sub rating is 6,035 (Phil Jackson, with Larry Brown the second highest at 2,199). The lowest playoffs sub rating is -793 (Rick Carlisle). Eleven coaches have playoffs sub ratings above zero and twelve coaches have playoffs sub ratings below zero. Seven coaches, all of whom have never coached a NBA playoff game, have playoffs sub ratings of exactly zero.<br /><br />In the November 2010 Look Ahead Version the average regular season sub rating is 479 and the median is 201. The highest regular season sub rating is 2,766 (Phil Jackson, with Gregg Popovich second at 1,884). The lowest regular season sub rating is -107 (Lionel Hollins). Twenty eight coaches have regular season sub ratings above zero. Two coaches have regular season sub ratings below zero.<br /><br />We present those numbers not only to make using the 2010 reports easier, but also because, unlike in years prior to now, those parameters are not likely to change much in the future.<br /><br /><span style="color:#ff6600;">======= SECTION THREE: DISCUSSION OF AND CALCULATION OF FACTORS USED FOR THE PLAYOFFS SUB RATING =======</span><br />Mechanically, the playoffs sub rating is simply the rating you get when you factor in only the playoffs-related factors. The playoffs sub rating consists of the following factors, which will be discussed in detail in order:<br /><br />(1) Playoff games coached<br />(2) Championships won<br />(3) Conference Titles won (but where the Championship was not won)<br />(4) Playoff Games Coaching Score<br /><br />This list is deceptively short because the fourth item actually comprises numerous components, and it has a very sophisticated database backing it up and validating it. 
If those components were listed separately, the total number of components comprising the playoffs sub rating would depend on exactly how the system was broken down, but would be at least ten.<br /><br /><span style="color:#ff6600;">1 PLAYOFF GAMES COACHED</span><br />This is also known as the playoffs experience factor. It is very simple: two points are awarded for every playoff game coached, regardless of result.<br /><br />The limit is 200 playoff games. There will most likely never be a coach who benefits in any significant way from playoff coaching experience beyond 200 games. Coaches who have coached more than 200 playoff games are going to be older, very veteran coaches who are extremely unlikely to change how they coach.<br /><br />Also, at any given time only a tiny number of current coaches will have coached more than 200 playoff games. As of January 2011, there are only three current coaches who are close to or over 200 playoff games coached:<br /><br />Phil Jackson 323<br />Jerry Sloan 202<br />Larry Brown 193<br /><br />Coaches such as these already know as much as they ever will about winning NBA playoff games. If some of their beliefs are wrong, everyone is going to have to live with that, because coaches this experienced are not going to change their ways after so many years. And unfortunately, it is very possible for even coaches this experienced to have false beliefs about how playoff games and championships are won. QFTR has hard, smoking gun evidence to prove that; see Section 5 of this Guide and various Reports at QFTR.<br /><br /><span style="color:#ff6600;">2 CHAMPIONSHIPS WON</span><br />100 points are added for each Championship won. It is always 100 points regardless of how many games the Championship series consisted of. 
These points are first and foremost awarded for merit, but they can also be looked at as extra points given for extremely valuable experience. Counting the two points every coach gets for every playoff game coached (assuming fewer than 200 playoff games have been coached) and assuming an average Championship series of about six games, the total points for each Championship game (where the Championship is won) is approximately nineteen.<br /><br /><span style="color:#ff6600;">3 CONFERENCE FINALS WON BUT THE CHAMPIONSHIP IS NOT WON</span><br />50 points are given to each coach who wins a Conference Final but loses the Championship. It is always 50 points regardless of how many games the Conference Final or the Championship consisted of. These points, too, are first and foremost awarded for merit but can also be looked at as extra points given for extremely valuable experience. Counting the two points every coach gets for every playoff game coached, and assuming an average Conference Final of about six games, the total points for each Conference Final game (in this losing effort scenario) is approximately ten.<br /><br />There is no bonus for mere losing appearances in the Conference Finals. Only two playoff series need to be won to reach these finals, and either an extra outstanding bunch of players and/or mere luck could in many cases allow a team with even a bad playoffs coach to reach a Conference Final fairly easily.<br /><br /><span style="color:#ff6600;">PLAYOFF GAMES COACHING SCORE</span><br />This last of the four factors making up the Playoffs Sub Rating is by far the most important one. This is where all of the good, successful playoffs coaches are going to get most of their points. 
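Before moving on to that fourth factor, the first three factors are simple enough to restate as code. This is a minimal sketch using only the point values stated above (two points per game up to the 200 game cap, 100 per Championship, 50 per Conference Final won where the Championship was then lost); the function and variable names are illustrative, not QFTR's actual code.

```python
# Sketch of the first three Playoffs Sub Rating factors described above.
# Names are illustrative; only the point values come from the Guide.

GAME_POINTS = 2          # experience: per playoff game coached
GAME_CAP = 200           # games beyond 200 earn no experience points
CHAMPIONSHIP_POINTS = 100
CONF_FINAL_POINTS = 50   # Conference Final won but Championship then lost

def experience_and_merit_points(games, championships, conf_finals_lost_championship):
    experience = GAME_POINTS * min(games, GAME_CAP)
    merit = (CHAMPIONSHIP_POINTS * championships
             + CONF_FINAL_POINTS * conf_finals_lost_championship)
    return experience + merit

# The per-game arithmetic in the text, assuming an average six-game series:
# a Championship is worth about (100 + 2*6) / 6 points per game, and a
# Conference Title with the Championship then lost about (50 + 2*6) / 6.
champ_per_game = (CHAMPIONSHIP_POINTS + GAME_POINTS * 6) / 6   # approximately nineteen
conf_per_game = (CONF_FINAL_POINTS + GAME_POINTS * 6) / 6      # approximately ten
```

This reproduces the "approximately nineteen" and "approximately ten" per-game figures given above.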
On the flip side, the Playoff Games Coaching Score is where the bad playoff coaches get heavily penalized, up to and including cases where they end up with a very negative playoffs sub rating despite having a lot of experience.<br /><br />The following will take you on a little journey whose destination is the Playoff Games Coaching Score. This score is calculated for each playoff series and for each coach. The key to the score is statistically determining (for each coach and for each series) the number of playoff games won that were supposed to be losses, and also the number of playoff games lost that were supposed to be wins. All of this is calculated using the QFTR NBA Playoffs Series, Teams, and Coaches Database, or QFTR Playoffs Database for short.<br /><br /><span style="color:#ff6600;">THE QUEST FOR THE RING PLAYOFFS DATABASE</span><br />The QFTR NBA Playoffs Series, Teams, and Coaches Database has every playoff series played beginning with the 1979-80 year through the present (2010) except for sixteen best of three series played from 1980 through 1983 (four of them each year). Why these were excluded was explained in Section 2 above. 
As of 2010 there are 433 NBA playoff series in the database.<br /><br />For each playoff series, there are 22 primary information items:<br /><br />DATABASE ITEM ONE: The Year (the series was played)<br /><br />DATABASE ITEM TWO: The Round; in all years there were four rounds, but round one series played from 1980 through 1983 are not included, as explained earlier.<br /><br />DATABASE ITEM THREE: Away Team; this is the team that does not have the home court advantage<br /><br />DATABASE ITEM FOUR: Offensive Efficiency of the Away Team: This is the average points scored per 100 possessions (in the regular season leading up to the playoffs).<br /><br />DATABASE ITEM FIVE: Defensive Efficiency of the Away Team: This is the average points given up per 100 possessions (in the regular season leading up to the playoffs).<br /><br />DATABASE ITEM SIX: Net Efficiency of the Away Team: This is Offensive Efficiency minus Defensive Efficiency for the Away Team. This can be either a positive or a negative number, but most playoff teams have positive net efficiencies and most teams that do not make the playoffs have negative net efficiencies.<br /><br />DATABASE ITEM SEVEN: Offensive Efficiency of the Home Team: This is the average points scored per 100 possessions (in the regular season leading up to the playoffs).<br /><br />DATABASE ITEM EIGHT: Defensive Efficiency of the Home Team: This is the average points given up per 100 possessions (in the regular season leading up to the playoffs).<br /><br />DATABASE ITEM NINE: Net Efficiency of the Home Team: This is Offensive Efficiency minus Defensive Efficiency for the Home Team.<br /><br />DATABASE ITEM TEN: Home Team Net Efficiency minus Away Team Net Efficiency: This is the Net Efficiency of the Home Team minus the Net Efficiency of the Away Team.<br /><br />In almost exactly 90% of the series, this number is positive. When it is, the better team according to efficiency has the home court advantage. 
Note that since home court advantage is determined by wins and losses, wins and losses are extremely highly correlated with net efficiency. But for looking at results of series and for predicting series, net efficiency is even more reliable than simple wins and losses.<br /><br />In about 2% of the playoff series, both teams had the same net efficiency; in these cases Item Ten is zero.<br /><br />In almost exactly 8% of the playoff series, this number is negative. When it is, the team that is not as good according to efficiency somehow got the home court advantage, from a tie breaker for example.<br /><br />The most lopsided playoff series in history according to efficiency was the round one 1992 series between Miami and the Michael Jordan era Chicago Bulls. Miami’s record that year was just 38-44 while Chicago was 67-15. Chicago’s net efficiency that year was 11.0 and Miami’s was -4.2. Item Ten was 11.0 minus negative 4.2, or 15.2; this is the highest difference since 1980 to date. The Chicago Bulls were overwhelmingly favored and, sure enough, they defeated the Miami Heat three games to zero.<br /><br />The series where the away team was better than the home team by the greatest margin was in round two in 1997, where the Seattle Supersonics were the Away Team and the Houston Rockets were the Home Team. Seattle had a net efficiency of 8.5. Houston had a net efficiency of 4.8. In this case Item Ten was -3.7. Despite being much less efficient than Seattle, Houston had the home court advantage. Both teams finished with 57 wins and 25 losses. Houston won game seven of this series at home and thus won the series 4 games to 3. 
Houston went on to the West Conference Final but lost to the Utah Jazz 4-2.<br /><br />DATABASE ITEM ELEVEN: Home Team Net Efficiency minus Away Team Net Efficiency plus the Home Court Advantage Adjustment: The adjustment is always 1.4 points, which represents the advantage the home team has, expressed in terms of net efficiency. Having home court advantage is approximately equivalent to having a net efficiency 1.4 points better than the one calculated from the regular season.<br /><br />Item Eleven essentially tells you how close the series should be, with the home court advantage factored in.<br /><br />For example, for the Seattle vs. Houston series just discussed, Houston’s net efficiency was boosted from 4.8 to 6.2. Seattle still had the better net efficiency (8.5) but it lost game seven in Houston. In this case Item Eleven was 6.2 minus 8.5, which equals negative 2.3. Remember, this being negative is very unusual: only 8 percent of series have a negative Item Ten, and fewer than that still have a negative number after 1.4 is added to the home team’s net efficiency.<br /><br />As another example, for the Miami-Chicago series discussed just prior to the Seattle-Houston one, Chicago had home advantage and so its net efficiency was boosted from 11.0 to 12.4; Miami’s net efficiency remained minus 4.2. In this case Item Eleven was 12.4 minus negative 4.2, which equals 16.6.<br /><br />DATABASE ITEM TWELVE: Favored Team: This field is a text field and is either “Home” or “Away” depending on which team is favored. If Item Eleven is positive, as it is most of the time, the team with home court advantage was favored, and vice versa. Out of the total of 433 series, only 14 have been ones where the team without the home court advantage was favored to win the series. These series have been split seven apiece: seven times the Away Team won as expected and seven times the Home Team won unexpectedly. 
None of these were especially surprising upsets because in all of them the Away Team was favored by only a small amount.<br /><br />The favored team needs to be clearly identified so that the expected wins and losses process can be worked relatively easily; read on for details.<br /><br />DATABASE ITEM THIRTEEN: Away Team Actual Wins: The number of games actually won in the series by the Away Team.<br /><br />DATABASE ITEM FOURTEEN: Home Team Actual Wins: The number of games actually won in the series by the Home Team.<br /><br />DATABASE ITEM FIFTEEN: Expected Away Team Wins<br /><br />DATABASE ITEM SIXTEEN: Expected Home Team Wins<br /><br />For Items Fifteen and Sixteen, the first step is that whichever team is favored (according to Item 11 and as shown in Item 12) is expected to win the number of games that wins the series. For best of seven series, the expected wins for the favored team is four. For best of five series, the expected wins for the favored team is three.<br /><br />The expected wins for the team not favored (the underdog) is determined from a very carefully constructed and calibrated scale. For very and extremely close series, the expected wins of the underdog is one game fewer than the number of wins needed to win the series. In a best of seven series between two very closely matched teams, the expected number of wins for the underdog is three (and the favored team is expected to win four games).<br /><br />At the opposite extreme, for series where the difference between the teams is large, which is most common in the first round, the expected number of wins for the underdog is often zero.<br /><br />In between the extremes of razor close series and very lopsided series, the expected number of wins for the underdog ranges between one fewer than the number of wins needed to win the series (which is three for best of sevens) and zero. The scale is calibrated down to net efficiency differences of just 0.1. 
Here is the actual scale, with just the whole number efficiency differences shown:<br /><br />DIFFERENCE IN NET EFFICIENCY VERSUS EXPECTED WINS BY UNDERDOG SCALE<br />The first number just below is Item Eleven (the difference in the net efficiencies with the home court adjustment factored in) and the second number is the expected wins for the underdog in a best of seven series.<br /><br />0.0: 3.00 games<br />1.0: 2.90 games<br />2.0: 2.78 games<br />3.0: 2.58 games<br />4.0: 2.28 games<br />5.0: 1.93 games<br />6.0: 1.53 games<br />7.0: 1.20 games<br />8.0: 0.90 games<br />9.0: 0.60 games<br />10.0: 0.40 games<br />11.0: 0.20 games<br />12.0: 0.00 games<br /><br />(If the gap is greater than 12, zero games are expected to be won by the underdog.)<br /><br />The scale, which you could look at as the all-important core of the entire Playoffs Sub Rating (and even of the entire RCR system), was very carefully constructed in accordance with, and validated against, all of the actual historical results of NBA playoff series from 1959-60 through 2009-10.<br /><br />There is a different scale for best of five series, which is constructed, calibrated, and validated in the same way.<br /><br />So now we have Items Fifteen and Sixteen, the expected wins for each team, and we are ready to move on.<br /><br />DATABASE ITEM SEVENTEEN: Actual Away Team Wins minus Expected Away Team Wins: Positive numbers are good and negative numbers are bad for the Away Team and its Coach.<br /><br />DATABASE ITEM EIGHTEEN: Actual Home Team Wins minus Expected Home Team Wins: Positive numbers are good and negative numbers are bad for the Home Team and its Coach.<br /><br />DATABASE ITEM NINETEEN: Away Coach: The Coach of the team that did not have the home court advantage is identified here. (This is a text field.)<br /><br />DATABASE ITEM TWENTY: Home Coach: The Coach of the team that did have the home court advantage is identified here. 
(This is a text field.)<br /><br />DATABASE ITEM TWENTY ONE: Away Coach Score<br /><br />DATABASE ITEM TWENTY TWO: Home Coach Score<br /><br />Items 21 and 22 are the most important and innovative end products coming out of the database.<br /><br />Items 21 and 22 are calculated in a coordinated way rather than separately. For every playoff series, one of the coaches will have a positive Coach Score and the other one will have a negative Coach Score that is its inverse. For each series, if you add the two coach scores the result is always zero. For the entire database, if you add every single coach score the result is always zero.<br /><br />These two coach scores are calculated for each playoff series in three steps:<br /><br />STEP ONE<br />First, Item Seventeen times 100 is the preliminary Away Coach Score (Item 21). Similarly, Item Eighteen times 100 is the preliminary Home Coach Score (Item 22).<br /><br />STEP TWO<br />Preliminary Item 21 and preliminary Item 22 are compared. Whichever is farther from zero is declared to be the “controlling score”. (Another way to think of this is that the absolute values of the two preliminary scores are compared, and whichever is greater is the “controlling score”.) Of course, using the absolute value is a very temporary thing; the final coach score will be negative whenever the preliminary score was negative.<br /><br />STEP THREE<br />The controlling score is the actual score for the corresponding coach. (The preliminary, the controlling, and the actual scores are all the exact same number.) The other score (the “non-controlling score” if you will) is discarded. In its place goes the inverse of the controlling score. Note that all that is being changed is the magnitude of the number; whether the score is positive or negative is never changed. 
This inverse of the controlling score is the final score for the other coach.<br /><br />What are we actually doing with this procedure? We are identifying the biggest expectation gap; specifically, we are identifying whether the Home Team and Coach had the biggest gap between expectation and result (either positive or negative) or whether it was the Away Team and Coach that had the biggest gap (either positive or negative). Once the biggest gap is identified and scored, the other coach receives the inverse or opposite score.<br /><br />Note that the gaps for all the playoff series in the database should, if the model is statistically valid, add up to very close to zero. In other words, the absolute value of the sum of the negative gaps should be very similar to the sum of the positive gaps. If they are substantially different, the scale can be slightly adjusted in a process known as recalibration. This type of recalibration is very important and very effective for ensuring quality control and for ensuring the reliable validity of results. For complete details, see Section Six.<br /><br /><span style="color:#ff6600;">EXAMPLE OF THE CALCULATION OF COACH SCORES FOR A PLAYOFF SERIES</span><br />Here is an example; we’ll use the 2010 NBA Championship between the Boston Celtics and the Los Angeles Lakers. Boston was the Away Team and Los Angeles was the Home Team. The Coach of Boston was Doc Rivers and the Coach of Los Angeles was Phil Jackson. Boston had a net efficiency of 3.9 and Los Angeles had a net efficiency of 5.1. The difference (Item 10) was 1.2. Item 11 is where the home court adjustment of 1.4 is factored in, so Item 11 is 2.6. Los Angeles was the favored team.<br /><br />According to the QFTR chart that gives expected wins according to the adjusted difference in net efficiency, the expected number of wins by the underdog in a best of seven series where the adjusted efficiency difference is 2.6 is 2.66. 
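As an aside, this 2.66 figure is exactly what linear interpolation between the 2.0 and 3.0 entries of the scale shown earlier gives (2.78 minus 0.6 times 0.20). Here is a minimal sketch of the lookup; the linear interpolation between round-number entries is my assumption (though it reproduces the figure exactly), and the function name is mine:

```python
# Scale from earlier in this Guide: adjusted net efficiency difference
# (Item 11) at each round number -> expected underdog wins, best of seven.
SCALE = [3.00, 2.90, 2.78, 2.58, 2.28, 1.93, 1.53, 1.20,
         0.90, 0.60, 0.40, 0.20, 0.00]  # entries for differences 0..12

def expected_underdog_wins(diff):
    """Interpolate linearly between the round-number scale entries."""
    if diff >= 12.0:
        return 0.0  # a gap greater than 12 expects zero underdog wins
    lo = int(diff)           # round-number entry at or below diff
    frac = diff - lo         # fractional distance to the next entry
    return round(SCALE[lo] + frac * (SCALE[lo + 1] - SCALE[lo]), 2)

print(expected_underdog_wins(2.6))  # 2.66, as in the example above
```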
Boston, the underdog and the Away Team, actually won three games in that series. So for them, Item 17 (actual minus expected Away Team wins) was 3.0 minus 2.66 = .34.<br /><br />Next, the preliminary Item 21 (Away Coach Score) is .34 times 100, which equals 34.<br /><br />For Los Angeles, the expected number of wins (Item 16) was four and the actual number of wins was four. So Item 18 (actual minus expected Home Team wins) is 4 minus 4, which equals zero. Then the preliminary Item 22 (Home Coach Score) is 0 times 100, which equals zero.<br /><br />Now we compare the two preliminary coach scores:<br /><br />Preliminary Away Coach Score: 34<br />Preliminary Home Coach Score: 0<br /><br />The one farthest from zero (regardless of whether it is negative or positive) is 34, which belongs to the Away Team and the Away Coach. This is declared to be the controlling score, and the score of 34 is the coach score for this series for the Coach of the Away Team, which in this case was Doc Rivers. So in this particular series, the Away Team did a little better than expected, which earned the Coach, Doc Rivers, a “coach score” for this series of 34 points.<br /><br />In accordance with step three (above), the inverse or opposite of the controlling score is minus 34 (-34). This is the score given to the Coach of the home team, which in this case was Phil Jackson. That is, Jackson’s preliminary score of zero is changed to minus 34 because Doc Rivers did a little better than he was supposed to according to the statistical model which, remember, is based on and validated by more than 600 playoff series played during a 50 year period ending in 2010.<br /><br /><span style="color:#ff6600;">COACH PLAYOFF SCORES CLOSE TO OR EXACTLY ZERO</span><br />Note that with this method the only way for a coach score to be exactly zero is for the series to be decided exactly according to expectations. 
Realistically, the only series that can possibly be decided exactly according to expectations are ones that are supposed to be 4-0 routs (or 3-0 routs in best of fives). If the actual result is 4-0 (or 3-0), in other words if the actual result is identical to the expected result, both coaches will have coach scores for that series of zero. In this case there is no effect whatsoever on either coach’s playoff sub rating (or his overall RCR).<br /><br />But coach scores can be very close to zero regardless of how close the series was expected to be. For example, the scale might project a series to be decided (statistically, of course) 4 games to 1.99 games. If the actual result is 4-2, then the underdog coach will have a coach score of 1 (.01 times 100). The favored coach has a coach score of -1 in this example.<br /><br />The main point is that the model embedded in the database accurately measures the difference between expected and actual playoff wins for each playoff series (and for both coaches in each series). Again, the larger of the two differences (between actual and expected) is the operative one.<br /><br /><span style="color:#ff6600;">PLAYOFF COACH SCORES FOR ALL SERIES COACHED</span><br />For each coach, the combined total of all his coach scores for all series he coached is called his “Playoff Games Coaching Score”. This in turn is one of the four components of the Playoffs Sub Rating of the Real Coach Ratings system. As discussed earlier, this Playoff Games Coaching Score is more important than the other three components of the Playoffs Sub Rating combined.<br /><br /><span style="color:#ff6600;">NUMBER OF GAMES WON THAT SHOULD HAVE BEEN LOST OR NUMBER OF GAMES LOST THAT SHOULD HAVE BEEN WON<br /></span>For each coach, the Playoff Games Coaching Score divided by 100 equals the number of games won that should have been lost (if positive) or the number of games lost that should have been won (if negative). 
This derived result is reported in the Coach by Coach Details Sub Section of the Rankings Section of Real Coach Ratings Reports. Although technically this is a statistical construct as opposed to exact reality, the real life numbers are very, very similar to the calculated numbers.<br /><br />By dividing the unexpected wins or losses by the total number of playoff games coached, we can then calculate a rate of unexpected wins for the good playoff coaches and a rate of unexpected losses for the bad playoff coaches. For more details, see Section Two above.<br /><br /><span style="color:#ff6600;">SCORES FOR GAMES WON AND LOST ACCORDING TO EXPECTATIONS</span><br />Coaches’ playoff sub ratings do not change at all when they win games they were supposed to win or when they lose games they were supposed to lose. If a series is decided in exactly the way it is supposed to be, both coaches get the experience points (two points for each game) and they get nothing else.<br /><br />You can see from this how it is not an exaggeration to say that the Playoffs Sub Rating completely ignores raw wins and losses. Instead, it awards only differences between actual and expected wins and losses.<br /><br />This is not only valid but much superior to awarding or penalizing anything at all based on the raw wins and losses. Raw wins and losses are determined more by the quality of the players than by the quality of the coaches. What we want to know, and what the playoffs sub rating shows for each coach, is whether that coach won any games the players would not have won were it not for the above average coaching. And of course we also want to know for each coach whether that coach lost any games the players alone would not have lost were it not for the below average coaching.<br /><br />This ends the primary, detailed discussion of the Playoffs Sub Rating. 
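Before moving on, the three-step scoring procedure discussed above can be sketched in code. This is a simplified model, not QFTR's actual implementation; the function name and the rounding to one decimal are my own choices:

```python
def coach_scores(away_actual, away_expected, home_actual, home_expected):
    """Return (away_coach_score, home_coach_score) for one playoff series.

    Step One: preliminary scores are the actual-minus-expected win gaps
    times 100 (Items 17 and 18 times 100).
    Step Two: whichever preliminary score is farther from zero "controls".
    Step Three: the other coach receives the inverse of the controlling
    score, so the two scores always sum to zero.
    """
    prelim_away = round((away_actual - away_expected) * 100, 1)
    prelim_home = round((home_actual - home_expected) * 100, 1)
    if abs(prelim_away) >= abs(prelim_home):
        return prelim_away, -prelim_away
    return -prelim_home, prelim_home

# 2010 Championship example: Boston (away) won 3 games, expected 2.66;
# Los Angeles (home) won 4 games, expected 4.
print(coach_scores(3, 2.66, 4, 4))  # (34.0, -34.0): Rivers +34, Jackson -34
```

Note how the zero-sum property described earlier falls directly out of Step Three: the non-controlling coach always gets the negation of the controlling score.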
For those who are a little confused, and/or for those not convinced that the system just discussed in detail works well, please read the following, which is a revised version of a discussion that first appeared in the May 2010 User Guide (when the framework of the new system was established but all the details and the database were awaiting development). The following is a relatively simple but accurate and effective summary of the QFTR Playoffs Sub Rating system.<br /><br /><span style="color:#ff6600;">SUPPLEMENTARY, SUMMARY DISCUSSION OF THE COACHING SCORES FOR THE PLAYOFFS SUB RATING</span><br />For each playoff series we start with four measures: the offensive efficiency of the two teams and the defensive efficiency of the two teams (all from the regular season, of course). Efficiency is how many points scored or how many points given up per 100 possessions. Over the course of the regular season, the thousands of possessions result in precise efficiency numbers, where seemingly very small differences between teams are actually big differences that can easily be big enough to cause wins or losses in the playoffs.<br /><br />Then for each team we subtract the defensive efficiency from the offensive efficiency to find the net efficiency. Most but not all playoff teams have positive net efficiency numbers, and most teams that do not make the playoffs have negative net efficiency numbers.<br /><br />Then we add a small “bonus” amount to the net efficiency of the team that has the home court advantage in the series.<br /><br />Then we compare the two net efficiencies, and whichever team has the higher number is the favorite. Of course this is true in real life: the team with the better net efficiency beats the other team the vast majority of the time, although when the differences are smaller this is not so certain.<br /><br />The exact difference between the two net efficiencies is crucial, because it determines the likelihood or probability of the favored team winning. 
The greater the difference in net efficiency is, the greater the probability that the better team will win the series. Assuming no injuries, in many first round series and even occasionally in a second round series, the probability that the better team will win the series is almost 100%. QFTR has carefully constructed a scale to translate deceptively small differences in net efficiency to how many games the underdog should win on average in a best of seven game (and a best of five) series. For example, if the difference in net efficiency is 5.0, the underdog will on average win 1.5 games in a best of seven series (with the favored team winning 4 games). This average number of wins by the underdog is usually called the “expected number of wins”.<br /><br />Next, for each playoff series, we compare the number of games actually won and lost by the coach versus what the expected number of wins and losses are. The difference between the actual and the expected is the all-important thing; this difference is then amplified (with a multiplier) to accurately reflect the great (and underestimated by the general public) importance of coaching in the playoffs.<br /><br />Unexpected wins and losses are rewarded and penalized heavily but not excessively. Unexpected playoff losses are one of the worst things that can happen to a team and a franchise. Among other things, unexpected losses waste the owners’ money, because they partly waste the efforts of a lot of players and managers, and because they make the franchise less likely to attract top free agents. Obviously, unexpected losses also waste the talents and efforts of the players. 
Unexpected playoff losses are a nightmare and the fewer of them you have the better.<br /><br />Note that for a coach who is exactly good enough to win exactly the number of playoff games he is supposed to win and no more than that, statistically speaking, unexpected playoff losses are going to be exactly offset by unexpected playoff wins once the sample size (the number of playoff games in this case) is large enough. In real life, this means that all coaches are going to have a series once in a while where their team performs below standard (and loses one or more games that should have been wins), but these will eventually be statistically offset by that coach’s unexpected playoff wins.<br /><br />This is the most crucial thing you have to keep in mind: the main purpose of the playoffs sub rating system is, on the downside, to flush out and penalize coaches who have more unexpected playoff losses than unexpected playoff wins. On the upside, the primary purpose of the advanced system is to flush out and reward coaches who have more unexpected playoff wins than unexpected playoff losses.<br /><br />Quest for the Ring already knows many of the basketball strategies and tactics that work better in the playoffs than in the regular season, and you do too if you read the site, because we review and illustrate most of them from time to time.<br /><br /><span style="color:#ff6600;">======= SECTION FOUR: DISCUSSION OF AND CALCULATION OF FACTORS USED FOR THE REGULAR SEASON SUB RATING =======<br /></span><br />There are four components of the Regular Season Sub Rating:<br />(1) Number of Regular Season Games Coached<br />(2) Number of Consecutive Regular Season Games Coached with Current Team<br />(3) Number of Regular Season Wins<br />(4) Number of Regular Season Losses<br /><br /><span style="color:#ff6600;">1 NUMBER OF REGULAR SEASON GAMES COACHED</span><br />One point is given for each regular season game coached up to 500 games, which is about six seasons’ worth of games. 
If a Coach has not learned just about everything he needs to by this point, it is unlikely he ever will, so the award for experience is sharply reduced for all games coached beyond 500. A quarter point (0.25) is given for games 501 through 1,000, and 0.06 points (about 1/16 of a point) is given for all games over 1,000. Note that in early versions nothing was given for games coached in excess of 1,000; the latest version corrects that very minor error by recognizing that even long veteran coaches might make extremely small improvements in their later years.<br /><br />What about rookie and near-rookie coaches? Just because they have never coached in the NBA, should their experience rating be zero? No, I don't believe so. They either have substantial coaching experience in other leagues, or they were extremely talented and/or intelligent players, or both, or else they would not have been hired to be a head Coach in the NBA. So any coach who has coached for fewer than 200 NBA games is given exactly 200 points for experience. Rookie coaches therefore start out with Real Coach Ratings of 200 and go up or down from there. For new coaches, the Regular Season Games Coached component is fixed at 200 until the coach has coached 200 games; then it goes up from there (by 1 for each game through 500, by 0.25 for games 501 through 1,000, and by 0.06 for any games above 1,000).<br /><br /><span style="color:#ff6600;">2 NUMBER OF CONSECUTIVE REGULAR SEASON GAMES COACHED WITH CURRENT TEAM<br /></span>This is a supplementary experience score which most benefits coaches who have gone the longest without being fired by their current teams. 
The points given are 0.30 (3/10 of a point) for all games coached, up to 1,000 games, by the coach for the team he is currently working for.<br /><br />One side of the coin is that the coach must be doing what the organization wants in order to avoid being fired, and he can't be a total failure basketball-wise, so starting with those things he deserves credit in proportion to how long he has kept his post. The other side of the coin is that the more experience a Coach has with a particular team, the more valuable he is to that franchise, because he knows everybody and everything concerned with the franchise better and better with each passing year. Generally speaking, the more successive games a Coach has coached with the same team, the more effectively and efficiently he can help the team squeeze out wins that would otherwise be losses.<br /><br />Jerry Sloan, who coming into 2009-10 had coached a mind-boggling 1,668 games for the Utah Jazz, is the ultimate example of a Coach who due to his many years with the same team is going to be more effective and efficient than he would be if he had just switched to a different team. Due partly to this factor, do not be surprised if the Jazz become a losing team shortly after Sloan finally retires.<br /><br />Another name for this factor might be "franchise-specific experience." For 2009-10 the Washington Wizards hired a new head Coach, Flip Saunders, who has a lot of prior experience with other teams and has a relatively high rating. But he is brand new to the Wizards, so be careful not to expect miracles or even to assume that his coaching is going to be as good from the get-go as it has been in the past. Look instead for the Wizards to get a little better as the season goes along and in the coming years if Saunders remains the coach. 
This is because Saunders needs time to merge his skills and abilities with the specific factors involved in making the Wizards a winning team.<br /><br /><span style="color:#ff6600;">3 NUMBER OF REGULAR SEASON WINS</span><br />Four points are assigned per regular season win.<br /><br /><span style="color:#ff6600;">4 NUMBER OF REGULAR SEASON LOSSES</span><br />Minus 5.5 points are assigned per regular season loss.<br /><br /><span style="color:#ff6600;">WHY THE PENALTY FOR LOSING A REGULAR SEASON GAME USUALLY EXCEEDS THE GAIN FOR WINNING ONE<br /></span>You must keep in mind that any coach who has been fired for not winning enough in the regular season, for not winning enough in the playoffs, or for both, and has not been rehired by another team, is not on the list of coaches being rated. We don't care about them. In theory we are supposed to be evaluating mostly coaches who are among the best in the country.<br /><br />The whole idea in multi-billion dollar professional sports is to win more than you lose, and that most obviously and most definitely includes the coaches. So a 50/50 record in either the regular season or the playoffs is not good enough long term; coaches who are not better than .500 sooner or later get fired and not rehired, and those who have met that fate already are not on the list of current coaches.<br /><br />To reflect the reality that coaches who cannot win more than they lose are sooner or later going to be fired, and will most likely never advance in the playoffs before they are fired, it is necessary to make sure that losses carry a bigger negative number than the positive number wins carry. But we have to avoid getting carried away. 
So when I add in the amount given for experience, the apparent gap between the award for winning and the penalty for losing is shrunk down to a small amount.<br /><br />Now consider the true underlying net positive and negative scores for the four types of regular season games and results, which you get by combining the experience points with the points for the win or the loss:<br /><br /><span style="color:#ff6600;">TRUE REGULAR SEASON COACH GAME SCORES FOR WINS</span><br />For the majority of coaches, this will be 5 Points: 4 points for the win and 1 point for the experience. Here is the breakdown for each type of coach:<br /><br />Rookie and Very New Coaches (less than 200 games): 4 for the win + 0 for the experience equals 4.0 points<br /><br />Relatively New Coaches (201 to 500 games): 4 for the win + 1 for the experience equals 5.0 points<br /><br />Veteran Coaches (501 games to 1,000 games): 4 for the win + .25 for the experience equals 4.25 points<br /><br />Long Veteran Coaches (more than 1,000 games): 4 for the win + 0.06 for the experience equals 4.06 points<br /><br /><span style="color:#ff6600;">TRUE REGULAR SEASON COACH GAME SCORES FOR LOSSES</span><br />For the majority of coaches, this will be -4.5 points: -5.5 points for the loss and 1 point for the experience. 
Here is the breakdown for each type of coach:<br /><br />Rookie and Very New Coaches (less than 200 games): -5.5 for the loss + 0 for the experience equals -5.5 points<br /><br />Relatively New Coaches (201 to 500 games): -5.5 for the loss + 1 for the experience equals -4.5 points<br /><br />Veteran Coaches (501 games to 1,000 games): -5.5 for the loss + .25 for the experience equals -5.25 points<br /><br />Long Veteran Coaches (more than 1,000 games): -5.5 for the loss + 0.06 for the experience equals -5.44 points<br /><br />In summary and in comparison:<br /><br />--Rookie and very new coaches get 4 points for regular season wins and lose 5.5 points for losses.<br /><br />--Relatively new coaches get 5 points for regular season wins and lose 4.5 points for losses.<br /><br />--Veteran coaches get 4.25 points for regular season wins and lose 5.25 points for losses.<br /><br />--Long veteran coaches get 4.06 points for regular season wins and lose 5.44 points for losses.<br /><br />Important note: the rookie and very new coaches actually get the same points as the relatively new coaches when you look at the bigger picture, because they have already received 200 experience points for their first 200 games.<br /><br />The key thing to note here is that with respect to wins and losses the regular season sub rating is a little biased in favor of relatively new coaches versus the veteran coaches. This is on purpose, of course. This substantially offsets what would otherwise be an unfair advantage in the rating system. The more experienced coaches are expected to do somewhat better in winning and losing in order to achieve a net positive from their winning and losing. This is the primary mechanism used by QFTR that substantially evens the playing field between coaches of widely differing amounts of experience, without being unfair to any type of coach. 
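The per-game numbers in the breakdown above can be reproduced with a short sketch. This is illustrative only; the function names are mine, and the treatment of the exact boundary games (200, 500, 1,000) is my assumption:

```python
# Marginal experience points for one more regular season game, per the
# rules above: nothing while under the fixed 200-point rookie allotment,
# 1 point through game 500, 0.25 through game 1,000, and 0.06 after that.
def experience_points_per_game(games_coached):
    if games_coached < 200:
        return 0.0
    if games_coached <= 500:
        return 1.0
    if games_coached <= 1000:
        return 0.25
    return 0.06

WIN_POINTS, LOSS_POINTS = 4.0, -5.5  # per regular season win / loss

def true_game_score(games_coached, won):
    """Net score for one regular season game: the win or loss points
    plus the experience points earned for coaching that game."""
    return (WIN_POINTS if won else LOSS_POINTS) + \
        experience_points_per_game(games_coached)

print(true_game_score(300, True))   # relatively new coach, win: 5.0
print(true_game_score(700, False))  # veteran coach, loss: -5.25
```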
Without this slightly differing treatment, the ratings system would be biased to some extent in favor of the veteran coaches, because the veteran coaches are eligible for far more points from the sheer number of experience points they get, from the consecutive games coached with current team item, and often from any or all of the items in the playoff sub rating system.<br /><br />In any future tweaking of the RCR system, one of the areas most likely to be tweaked is points given or taken away for regular season wins or losses by the different types of coaches. A case can be made that relatively new coaches should be even more favorably treated in the regular season relative to the veteran coaches than they already are. But if there is any future tweaking, we will as always be careful to avoid going overboard.<br /><br />See Section 5 and especially Section 6 for more on the difficulties in comparing coaches with widely different numbers of games coached.<br /><br /><span style="color:#ff6600;">======= SECTION FIVE: EVALUATION OF COACHES AND SPECIFIC INTERPRETATION OF RATINGS =======</span><br />The primary objective of Quest for the Ring (QFTR) is to determine and report exactly how NBA playoff games are won and lost. Since in the playoffs, and especially in later rounds of the playoffs there is usually very little difference between how good the players are, any difference in the coaches, sometimes including even very small differences, can determine who wins the series. Therefore, one of the most recurring themes at QFTR is what is good and what is bad coaching for the playoffs. This means that QFTR gives very heavy attention to coaching in its reports on the main home page.<br /><br />Further, the general public is unaware of just how important coaching is in the playoffs, especially in the Conference Finals and in the Championship, and this fact makes QFTR all the more motivated to keep reporting on the subject. 
Very, very few other basketball writers attempt to cover this subject at all; it’s like a lonely frontier out here. Despite there being very little if any competition for reporting on pro basketball coaching, QFTR uses the same high quality standards and reliable quality control for this area as it does for other areas (which other writers and broadcasters do attempt to cover).<br /><br />Since there is so much on the subject in the hundreds and hundreds of reports on the QFTR home page, any single article on the subject, assuming it was not a full-scale and lengthy book, could only highlight the main points. Similarly, this Section of this User Guide (which obviously cannot be even a short book in length, let alone a long one) can only discuss some of the most important points about coaching in the playoffs in particular and in the NBA in general.<br /><br />Moreover, this Section has the second objective of explaining specifically how to use the overall ratings and the two sub ratings of the Real Coach Ratings System. The need for this second focus further limits the amount of coverage we can devote to the coach evaluation topic. 
We will try to more than scratch the surface here, but trust me: this topic is way too big for this Section of this Guide.<br /><br />Given all of the limitations we have for this Section, anyone who wants “full and complete coverage” of what good and bad coaching is in the NBA, and especially in the playoffs, should read any or all of the hundreds of reports at the QFTR home page.<br /><br />“Evaluation of coaches” will be covered first in this dual-focus Section, and then “specific interpretation of ratings” will follow.<br /><br /><span style="color:#ff6600;">PART ONE OF TWO PARTS OF SECTION FIVE: EVALUATION OF COACHES</span><br /><br /><span style="color:#ff6600;">IMPACT OF COACHING IN THE REGULAR SEASON VERSUS THE PLAYOFFS</span><br />Theoretically, unless he is stuck with a truly lousy roster, any reasonably good coach can win a lot of regular season games and get his team into the playoffs. Plus, any coach at all, including a bad one, can squeak a very good or great team into the playoffs. For any reasonably good coach, merely getting into the playoffs is really not much of an accomplishment at all.<br /><br />Many, many owners, managers, and fans do not seem to understand this, but the only thing that really matters with regard to coaching is what happens in the playoffs. Only the truly good coaches can win in the playoffs. The playoffs are where the wheat is separated from the chaff. In the NBA, the regular season is quite honestly nothing more than the preseason for the "playoff season," which is the only season that really matters when all is said and done.<br /><br />Another way to look at the regular season is that it is a sort of D-League for the off-season. 
What I mean by that is that owners, managers, and coaches should be watching other teams in the regular season so that they can spot up-and-coming players whom they should try to obtain in the off season (and, to a lesser extent, in trades in the regular season prior to the trading deadline in February).<br /><br />Playoff games are generally more intense in all respects: individual players' efforts, team play as a whole, and coaching efforts are all ramped up. And as most of the general public is aware, most teams ramp up their defending in the playoffs.<br /><br /><span style="color:#ff6600;">CERTAIN VETERAN PLAYERS CAN COACH THEMSELVES TO SOME EXTENT</span><br />Always keep in mind that older, more veteran teams can coach themselves to one extent or another, particularly if the roster is both highly skilled and highly experienced. It doesn't matter who comes up with the winning schemes and patterns; what matters is that someone does. Younger teams, however, always need a good coaching staff to make headway in the playoffs.<br /><br />Quest for the Ring has gone on record claiming that the 2007-08 Champion Boston Celtics are a good example of a team that could coach itself well to a large extent.<br /><br />However, coaches are important in the late playoff rounds even for teams that can partly coach themselves. Coaches determine playing times, which are much more important than most people realize. 
If the coach of a really good, veteran team that is to some extent “coaching itself” often inserts the wrong players in the game at the wrong time and/or does not have the playing times roughly correct, and/or has a player completely benched who should be playing, then the team will be damaged from bad coaching regardless of how well the players are “coaching themselves”.<br /><br /><span style="color:#ff6600;">COACHES' NUMBER ONE OBJECTIVE IS TO AVOID BEING FIRED</span><br />The number one objective for all coaches, but especially for rookie and newer coaches, is to avoid being fired. Calculations indicate that the average Real Coach Rating is currently 706 and the median is about 275. So the objective of all rookie coaches must be to increase their starting rating of 200 toward the median and later on toward the average of 706 in as few years as possible.<br /><br />Although there will occasionally be exceptions to the rule, coaches who move up even a little from 200 are generally safe from being fired while those who move down from 200 are not safe. Even achieving just a 250 gives the coach a little job security, 325 gives substantial job security, and 400 gives very substantial job security. I’m not saying that the job security achieved for those relatively modest ratings is a good or right thing. Rather, I am merely reporting what is going on in the real world.<br /><br />The firing of coaches with ratings higher than 250 is relatively uncommon. But when a coach who has a rating of 250 or higher is fired, he is likely to be hired by a new team, most often for the very next season, but sometimes after a delay of a year or two or three. Coaches with ratings higher than 400 who are fired are very likely to be hired by a new team within at most a few years. If a coach with a rating higher than 600 is fired but is never rehired, then something exceptional happened; for example, maybe there was a complete and humiliating collapse in a playoff series. 
Or perhaps there was a vicious argument between that coach and one of the managers or the owner.<br /><br />Note also that there is a huge exception to the general rule of thumb that coaches with ratings below 200 (and especially those with ratings below zero) are not safe from being fired. Long veteran coaches, those who have coached about 800 games or more, are often not fired even if they are very poor playoff coaches whose sharply below-zero playoff sub rating drives their overall coach rating below zero. This is because many owners do not understand that some coaches do well in the regular season but cannot do well in the playoffs, or, worse, because some owners are willing to settle for a good, “dependable” regular season coach even if he is a bad playoffs coach.<br /><br />You can think of the range between 200 and 400 as "the proving ground" for coaches. Most coaches who drop below zero instead of going up from 200 during their first 3-6 years will be bounced out of the NBA. No mercy is given for coaches stuck during all of those years with subpar teams.<br /><br />QFTR recommends that coaches who have ratings below 200 for more than about five straight years, and especially coaches who have ratings below zero for about five straight years, should be fired unless the managers and owners involved are sure that the coach has not had competitive players to work with, or unless they are sure that the coach is getting better at his job, or unless there is some other unusual mitigating factor.<br /><br />Coaches, whether they are newer ones or long veterans, who keep their jobs with Real Coach Ratings below 200, and especially with Real Coach Ratings below zero, are frequently going to be men who have very cordial relations with the managers and owners. 
In other words, they are being kept on the payroll because the managers and/or the owners involved personally like the coach in question enough to brush aside any concerns about whether that coach is doing a good enough job for their team. These dubious coaches are given the benefit of the doubt or, in other words, sort of a free pass. These free passes generally don’t last for longer than roughly six years for newer coaches, but can last indefinitely for long veteran coaches.<br /><br />It is not just owners and managers who can be fooled into thinking that a coach is a good one just because he has been coaching for many, many years. It honestly seems that most basketball writers and broadcasters are fooled in this way also. And of course, much of the general public is also fooled.<br /><br />It is also true that some managers and owners live in fear that they might go from bad to worse if they exchange one coach for another. They simply do not have enough courage to strike out and try a rookie or a near-rookie coach, or to pick up a coach who has been fired by another team but who deserves a second chance.<br /><br />The key is balance. On the one hand you don't want to be stuck out of caution or fear with a veteran coach who is simply not among the best coaches. On the other hand, you can't just strike out and pick any one who has never coached an NBA team before but seems like he might be a good coach. Rather, you have to do a lot of homework and research. You have to spend a lot of time and make every effort to find that one coach out of a hundred candidates who will actually become one of the better and maybe even one of the best NBA coaches.<br /><br />Note that in the real world, most owners who strike out on this subject do so by erring on the side of too much caution or fear. 
In the real world, it appears to be pretty rare for an owner who takes a chance on a coach with no prior NBA head coaching experience to end up with a coach who is, in effect, a waste of time. Because coaching in the NBA is at least a little more complicated and a lot more important than most people and owners think it is, owners who gamble a little by trying a coach who has never coached in the NBA before have a fairly good chance to get a big reward for the little gamble.<br /><br /><span style="color:#ff6600;">THE COACH RUT AND WHY IT CAN EASILY HAPPEN TO OTHERWISE DECENT FRANCHISES<br /></span>Teams should avoid getting stuck in a rut that the public is completely unaware of but that QFTR has proven exists. This rut is where a team has a very good regular season coach but a lousy playoffs coach. It can be extremely difficult to get out of this rut because it is very hard to fire a coach who usually does very well in the regular season.<br /><br />Plus, which coaches are not good for the playoffs is basically a secret from the public. This is one of QFTR’s favorite and most important topics, and yet it took even us until November 2010 to assemble all the hard proof and officially report which coaches are lousy playoffs coaches. This is most likely the first time in history anyone has carefully and mathematically investigated the question. It took many hours of work to prove beyond a shadow of a doubt, and it was not easy to do. So it is understandable that most people are in the dark and would not believe that there are a substantial number of coaches who are very good in the regular season but poor in the playoffs.<br /><br />The point is, this is basically unknown territory, so don't expect that these good-in-the-regular-season but bad-in-the-playoffs coaches are going to be fired when they should be (or never hired in the first place). 
Instead, expect that teams are going to make mistakes with these types of coaches year after year after year. People and things other than the coach will get the blame, and in some cases other people and things are also to blame. But the problem remains that this type of coach is very seldom if ever blamed, simply because no one is aware that this type of coach exists and is fairly common.<br /><br />Most lousy playoffs coaches get away with being lousy playoffs coaches year after year after year as long as they are good regular season coaches. A franchise can be in the dark about this for many years, for the entire time the coach is the coach. A team stuck with this type of coach will typically go along year after year thinking they have a chance to win the Quest, whereas their coach may be so poor in the playoffs that realistically they have no chance whatsoever to win it regardless of who the players are.<br /><br /><span style="color:#ff6600;">NEVER EVER HIRE A COACH WITH A POOR PLAYOFFS RECORD IF YOU WANT TO WIN A CHAMPIONSHIP<br /></span>The best way to explain this section is with an example. The Denver Nuggets hired George Karl in January 2005 as their head coach despite the fact that he had a poor playoffs record and rating. RCR did not exist back then (nor would the Nuggets use RCR even now), but they did have Karl’s playoffs win/loss record, which should have been enough for them to avoid the mistake of hiring Karl. Specifically, when the Nuggets hired Karl, his playoffs record was 59-67. While coaching the Nuggets, Karl's playoffs record is 15-26 as of January 2011. So overall, his playoffs record as of January 2011 is 74-93. Percentage-wise, Karl’s playoff record has gotten worse while he has coached the Nuggets, not better (despite a strong result in 2009). 
In short, Karl had a losing playoffs record when he was hired and it has only gotten worse since.<br /><br />The Nuggets were wrong to hire Karl, and they are also wrong not to fire him unless he wins the NBA Championship within the next year or two. And by the way, the Nuggets were talented enough to win a Championship if the playoffs coaching had been top notch: probably in 2007, definitely in 2008, possibly in 2009, and possibly again in 2010. The now fired Nuggets general managers of the 2006-2010 era were experts at bringing relatively obscure but surprisingly good players (especially surprisingly good scorers) to the Nuggets.<br /><br />Coaches with losing playoff records are fired by all truly serious NBA franchises these days regardless of regular season records. The absolute top franchises, including at least the Lakers, the Celtics, and the Spurs, would never hire a coach with a losing record in the playoffs in the first place. If their coach ever dropped to a losing playoffs record, he would be fired by the top franchises regardless of how fantastic the coach’s regular season record was.<br /><br />Why did the Nuggets hire Karl? I can only offer educated guesses. The Nuggets either knew in advance they would never win the Quest with Karl and hired him anyway, or they figured incorrectly that Karl's playoff record was trumped by better aspects of Karl's record, or they decided that Karl's playoff record could be excused for irrational reasons, or there was some other unknown, off the wall reason for hiring Mr. Karl.<br /><br />The most favored specific “off the wall” theory regarding why Karl was hired is that the Nuggets decided roughly in 2002 to go for a certain kind of player who can be a major bargain because other teams generally avoid that kind of player. 
The Nuggets decided to go for more volatile players who might need to be contained by a crack-the-whip type of coach so that they don't "fly off the reservation" and harm team cohesion and morale. Karl is in fact a good coach if you have a bunch of players more emotional and more volatile than average, because for one thing he will not hesitate to bench players who get enraged about this, that, or the other thing. He will bench anyone at any time and for any reason, good or not.<br /><br />Whatever the Nuggets' management thought, they thought wrongly. If you are a team owner or manager, you cannot afford to take any risk or to make any benign assumptions or weak rationalizations when you choose a head coach. If a coach has a poor playoffs record, you simply must not hire that coach if you are serious about winning the Quest. There are going to be coaches who are good enough to do well in the regular season but not good enough to prevail in the playoffs. You should not be the goober who hires one of them, obviously. Let some other franchise/team get stuck in the mud for years and years with that type of coach.<br /><br />I have to be blunt and a little repetitive here to make absolutely sure I am understood. You should never, ever do what the Nuggets did if you are serious about winning the Quest. Your coach should have a good record for BOTH the regular season and the playoffs. 
The playoff record is even more important than the regular season record.<br /><br />Finally, before leaving this crucial subject: given the choice between, on the one hand, a younger coach who is considered a good or great up-and-coming coach but who has no NBA playoff record at all (and not much of a regular season one), and, on the other hand, a long-term veteran coach who has a decent, good, or even great regular season record but a poor, losing playoffs record, you are better off choosing the young coach with no playoff record.<br /><br />In point blank and clear summary, hiring a coach with a bad playoffs record is one of the worst things you can do if you want to win the Quest.<br /><br /><span style="color:#ff6600;">MORE ON THE EVALUATION OF GEORGE KARL</span><br />Ever since our project started, QFTR has focused on George Karl more than any other coach (simply because when we first started we only intended to be a Denver Nuggets site). This may sound sarcastic but we actually do not intend it to be: George Karl has, by doing things that are wrong (or unwise if you prefer), alerted QFTR to many things that you DON’T want to do if you are coaching playoff games in the NBA.<br /><br />Karl is not the only one, but he will certainly go down in history as one of the all-time most famous coaches whose coaching beliefs and methods work much better in the regular season than they do in the playoffs. There have always been coaches like this, there are other coaches like this right now, and there will always be coaches like this. But Karl will always stand out as a particularly good example of this kind of coach, a “textbook case” if you will.<br /><br />Out of twenty years in the playoffs, Karl has managed to get winning playoffs records in only four years. One of those was 2009, which was surprising to say the least. 
That year, Karl tried an ultra aggressive and energetic type of defending and proved that it can win you a few playoff games that you would otherwise have lost as long as the referees fail to call a good number of the fouls. However, the deep hole that Karl dug in many earlier years was so deep that the Nuggets' miraculous 2009 playoffs campaign was not enough to lift George Karl all that much in his playoffs sub rating. In the 2009 playoffs, his win-loss went from 62-83 to 72-89. (Then in 2010 it went to 74-93). Karl was still after 2009 and is still right now showing up in the win-loss and also in the ratings as a very poor playoffs coach.<br /><br /><span style="color:#ff6600;">PART TWO OF TWO PARTS OF SECTION FIVE: SPECIFIC INTERPRETATION OF RATINGS</span><br />In late 2010 QFTR evolved what was a general and vague coach recommendation system to a more organized and exact one tied to Real Coach Ratings that can be called the QFTR Coach Recommendation System (CRS). Separate playoffs and regular season recommendations are given for all NBA head coaches. These are given in a report that appears within a few days (or a few weeks at the most) following the Real Coach Ratings Reports. Specifically, the Reports with the official recommendations are scheduled for late August and for October; however, production limitations will sometimes cause them to be late.<br /><br />QFTR gives two recommendations for each coach but paradoxically does NOT give any overall recommendation. Two main reasons explain this paradox. First, it turns out that there is a big, big difference for a lot of coaches in how well their coaching works out in the regular season versus how well it works out in the playoffs. It turns out that it is relatively common for pro basketball coaches to be very good regular season coaches but poor or very poor playoffs coaches. 
For these coaches, the way they look at and understand basketball, and how they have their team playing, works better in the regular season than it does in the playoffs. Because of this alone, making combined regular season / playoffs recommendations would be far less productive than you might think.<br /><br />The second reason why we don't even attempt an overall recommendation is that franchises will look at the importance of the regular season and the playoffs differently. Franchises that already know they are most likely not going to be in the playoffs for a while, and franchises that think the regular season is more important for them than the playoffs, might use the regular season recommendations more than the playoff ones.<br /><br />However, QFTR strongly disagrees with any owner or manager who places the importance of the regular season above the importance of the playoffs. By rights, the playoffs should always be considered more important than the regular season. If a team is not going to be making the playoffs this year it should by rights have a great playoff coach anyway, so when the team does make the playoffs in the near future it has the right coach for winning in the playoffs.<br /><br /><span style="color:#ff6600;">RECOMMENDATIONS ABOUT THE RECOMMENDATIONS</span><br />QFTR highly recommends that all franchises use the playoff recommendations more strongly than they do the regular season recommendations.<br /><br />But some words of caution are in order. Never completely ignore the regular season recommendations. It is going to be very unusual for a great playoff coach to be a not so good regular season coach (unlike the reverse, which is surprisingly common), but if there ever were a coach with an outstanding playoff record but a poor regular season record, you would want to avoid this coach as a kind of insurance policy against having the wrong coach overall. 
This scenario could play out if the number of playoff games coached was relatively low and a fluke amount of statistical error resulted in an artificially high playoffs rating (whereas meanwhile the lower regular season rating was exactly accurate).<br /><br />At an absolute minimum, the playoffs should be considered equal in importance to the regular season and the playoff coach recommendations should be just as important as the regular season coach recommendations.<br /><br />One thing QFTR could do (and what QFTR would do if forced to make an overall recommendation) would be to use a formula where the playoffs rating was more important than the regular season rating. Or for that matter we could change the overall Real Coach Ratings system so that it was even more weighted in favor of the playoffs than it already is. We choose not to do either of these things at this time because of the complexities already discussed and because of other factors not mentioned here.<br /><br />To some extent this discussion about which recommendations to use is not completely on point, because obviously, the best thing and what you want is a coach who is above average for BOTH the playoffs and for the regular season. Unfortunately however such coaches are much rarer than most people think they are. It turns out that although it is not rocket science, coaching basketball at the NBA level is much more difficult and complex than most people think it is. And then NBA playoff coaching is more difficult and complex than regular season coaching is. 
Ironically, many of the head coaches themselves apparently underestimate how difficult their job is, and many of them don’t even begin to understand the magnitude and nature of the differences between the regular season and the playoffs.<br /><br /><span style="color:#ff6600;">THE PHIL JACKSON ADJUSTMENT FOR THE PLAYOFFS COACH RECOMMENDATIONS</span><br />Phil Jackson is by far the best and most successful NBA playoffs coach among current and recent head coaches. Actually, he is most likely the best NBA playoffs coach of all time (although there are a handful of other ones who are in Jackson’s ballpark). Jackson has repeatedly won playoff games he wasn't supposed to win versus some of the very best of the other NBA coaches. Jackson has won just about 42 playoff games he wasn’t supposed to win out of a total of 323 playoff games. Jackson’s all-time playoffs record is 225-98, but according to the QFTR investigation his “par record” is just 183-140.<br /><br />This means that if you think (as most of the general public does) that Phil Jackson wins in the playoffs mostly according to how good his players are and that he has little or no impact on how many wins his teams get, you are completely wrong. Jackson has had good teams, since he was “supposed to be” 183-140 in the playoffs, but he boosted that to 225-98, and this was such a big improvement that we know, for example, that Jackson would not have won 11 rings (and very possibly not even half a dozen rings) if he were an average playoffs coach. We also know that Jackson would have won very few if any rings if he were a well below average playoffs coach.<br /><br />Some coaches have come up against Phil Jackson in many more playoff games than others. Rick Adelman, Jerry Sloan, and Gregg Popovich lead this pack, having faced Jackson in 29, 27, and 26 playoff games respectively. Adelman has pretty well held his own, but Sloan and especially Popovich have been hammered by Jackson. 
After these three there is a group of five coaches who have faced Jackson in between 12 and 16 playoff games and three out of five of these have been handed (by Jackson) a big bunch of losses that should have been wins. The damage to them, though, is far less than the damage to Popovich.<br /><br />For a big majority of coaches, the more playoff games a coach has played against Phil Jackson, the more his Playoff Rating is going to be depressed because Jackson has heavily dominated in playoffs coaching. Therefore, for my playoff coach recommendations, I decided to remove most of the bias caused by big differences between coaches in the number of games versus Phil Jackson. For determining the recommendations, 4/5 or 80% of the scoring resulting from games versus Phil Jackson is removed.<br /><br />The "Phil Jackson adjustment" is NOT done in the main Real Coach Ratings Report. All of the numbers in the playoffs sub ratings in that Report include all games played against Phil Jackson. Only in the official recommendations Report are in effect 80% of the games versus Phil Jackson taken out.<br /><br />The advantages of the Phil Jackson adjustment outweigh the disadvantages. The main advantage is that without it, coaches who have been severely hammered by Jackson (due to having to play him more than other coaches) will have misleadingly low ratings.<br /><br />However, the disadvantage is that if a coach goes up against Phil Jackson in the playoffs, the coach might in theory appear to be a little more competitive versus Jackson than he really is. 
Really though, that is a moot point because Phil Jackson’s ratings are far, far ahead of any other coach’s, whether or not the other coach’s ratings are boosted by the Phil Jackson adjustment.<br /><br /><span style="color:#ff6600;">RATINGS FOR NON-CURRENT COACHES CAN BE CALCULATED AND PROVIDED</span><br />Note that QFTR can in theory include in these recommendations any coach who has ever coached in the NBA (subject to the 25 playoff games and 200 regular season games minimums). If you need a specific coach evaluated, contact QFTR.<br /><br /><span style="color:#ff6600;">EVALUATION SCALES</span><br />QFTR has had evaluation scales for players since 2007, but it took until late 2010 before evaluation scales for coaches were developed. Prior to then the overall RCR system was not sufficiently developed to warrant a formal evaluation scale. As already mentioned, the relevant measure for the playoffs recommendation is the Playoffs Sub Rating of the Real Coach Ratings System with the Phil Jackson adjustment included. The relevant measure for the regular season recommendation is the Regular Season Sub Rating of the Real Coach Ratings System.<br /><br />Note that after Phil Jackson retires (almost certainly in 2011) the Phil Jackson adjustment will be phased out. What will probably happen is that the adjustment will be cut by 10 percentage points each year. In 2010 and 2011, 80% of the effect from all Phil Jackson encounters is removed from each coach’s score. For 2012 that removal percentage will probably be 70%, for 2013 it will probably be 60%, for 2014 it will probably be 50%, and so on until it is completely eliminated. 
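To make the phase-out schedule concrete, here is a minimal sketch in code. QFTR has not published its exact formula, so the function names, the score-splitting structure, and the boundary handling here are illustrative assumptions based only on the percentages described above.

```python
# Hypothetical sketch of the Phil Jackson adjustment and its planned
# phase-out. The 80%/70%/60%/... schedule is from the text; everything
# else (names, structure) is an illustrative assumption.

def jackson_removal_fraction(year: int) -> float:
    """Fraction of the scoring from games versus Phil Jackson that is
    removed from a coach's playoff score for a given ratings year."""
    if year <= 2011:
        return 0.8  # 80% removed in 2010 and 2011
    # After Jackson's retirement, the removal drops 10 points per year.
    return max(0.0, round(0.8 - 0.1 * (year - 2011), 1))

def adjusted_playoff_score(total_score: float,
                           score_vs_jackson: float,
                           year: int) -> float:
    """Remove most of the effect of games coached against Jackson."""
    return total_score - jackson_removal_fraction(year) * score_vs_jackson

print(jackson_removal_fraction(2011))  # 0.8
print(jackson_removal_fraction(2012))  # 0.7
```

For example, a coach whose playoff score is -500 overall, of which -200 came from games against Jackson, would show an adjusted score of -340 under the 2011 rules, since 80% of that -200 is stripped out.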
It is very unlikely that QFTR will ever again need to have an adjustment due to a Coach who is far better than any other.<br /><br /><span style="color:#ff6600;">EVALUATION SCALE FOR COACHES FOR THE NBA PLAYOFFS</span><br />--At least 25 playoff games must be coached for the evaluation to be valid and official.<br />--The measure used is the Playoffs Sub Rating of the Real Coach Rating System.<br />--The effects from 80% of coach’s games versus Phil Jackson are removed.<br /><br />Absolute Highest Possible Recommendation: 1,200 or more<br />Very Highly Recommended: 900 to 1,199<br />Highly Recommended: 600 to 899<br />Recommended: 350 to 599<br />Neither Recommended nor Not Recommended: 100 to 349<br />Not Recommended: -150 to 99<br />Strongly Not Recommended: -450 to -151<br />Very Strongly Not Recommended: -750 to -451<br />Absolute Lowest Possible Recommendation: -751 and less<br /><br /><span style="color:#ff6600;">WHEN DOES QFTR GUARANTEE THAT A COACH WILL NEVER WIN THE QUEST FOR THE RING?<br /></span>The relevant measure is the Playoffs Coach Score with the Phil Jackson adjustment included. The guarantee is NOT based on the Playoff Sub Ratings, which add the experience factor and any Championship points earned by coaches to the Playoffs Coach Score. Remember though that the Playoff Coach Scores are the dominant factor in the Playoff Sub Ratings. 
The Playoff Coach Scores average about 150 points less than the Playoff Sub Ratings.<br /><br />GUARANTEE LEVEL: -750 or less<br /><br />That is, QFTR guarantees that any Coach with a Playoffs Coach Score of -750 or less will never win The Quest for the Ring.<br /><br />If after being added to the guarantee list a coach wins one or more playoff games that should have been losses, he will be removed from the list if the score becomes higher than -750.<br /><br /><span style="color:#ff6600;">EVALUATION OF COACHES FOR THE REGULAR SEASON</span><br />--At least 200 regular season games must be coached for the evaluation to be valid and official.<br />--The measure used is the Regular Season Sub Rating of the Real Coach Rating System.<br /><br />Absolute Highest Possible Recommendation: 1,300 and more<br />Very Highly Recommended: 1,050 to 1,299<br />Highly Recommended: 800 to 1,049<br />Recommended: 550 to 799<br />Neither Recommended nor Not Recommended: 350 to 549<br />Not Recommended: 100 to 349<br />Strongly Not Recommended: -150 to 99<br />Very Strongly Not Recommended: -400 to -151<br />Absolute Lowest Possible Recommendation: -401 and less<br /><br /><span style="color:#ff6600;">HOW TO INTERPRET DIFFERENCES IN RATINGS</span><br />The best way to explain this is with the aid of an example. We will use Larry Brown versus George Karl from the 2010 Real Coach Ratings Look Ahead Version, published in November, 2010. Rounded to the nearest whole number, Brown’s overall rating is 2,420. His Playoffs Sub Rating is 2,199 and his regular season Sub Rating is 221. George Karl’s overall rating is 405. 
His Playoffs Sub Rating is -648 and his regular season Sub Rating is 1,053.<br /><br />Comparing directly:<br /><br />Larry Brown / George Karl<br />Playoffs: 2,199 / -648<br />Regular Season: 221 / 1,053<br />Overall: 2,420 / 405<br /><br />The reason this is a very good example to use here is that Brown and Karl are the two completely different types of coaches we often talk about at QFTR. Brown is a high quality playoffs coach whereas his regular season record is surprisingly poor. Karl is precisely the opposite: he is a very low quality playoffs coach whereas his regular season record is surprisingly good. Comparing two coaches who are not opposites the way these two are is easier.<br /><br />The main thing and the most important thing to do is to look at the evaluations using the scales above:<br /><br />Larry Brown / George Karl<br />Playoffs: Absolute Highest Possible Recommendation / Very Strongly Not Recommended<br />Regular Season: Not Recommended / Very Highly Recommended<br />Overall: There is no evaluation scale for the overall ratings; see above for an explanation.<br /><br />QFTR strongly recommends that the playoffs ratings and recommendations be given priority over the regular season ones. In numerical terms, QFTR recommends that playoffs ratings be considered between 40% and 80% more important than regular season ones. Therefore, in this example QFTR would recommend Brown over Karl by a fairly wide margin.<br /><br />Compare each coach’s most favorable evaluation and then separately compare each coach’s least favorable evaluation. In this example Brown’s worst evaluation (Not Recommended) is not as bad as Karl’s worst evaluation (Very Strongly Not Recommended). Karl’s worst is two notches worse than Brown’s worst. Also, Brown’s best evaluation (Absolute Highest Possible Recommendation) is one notch better than Karl’s best evaluation (Very Highly Recommended). 
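The scale lookups just applied to Brown and Karl can be sketched as a small function. The cutoffs below are the ones published in the scales above; the code itself is an illustration, not QFTR's own tooling.

```python
# Illustrative lookup of the QFTR recommendation scales. The cutoffs
# come from the published scales; each entry is (lower cutoff, label).

PLAYOFF_SCALE = [
    (1200, "Absolute Highest Possible Recommendation"),
    (900,  "Very Highly Recommended"),
    (600,  "Highly Recommended"),
    (350,  "Recommended"),
    (100,  "Neither Recommended nor Not Recommended"),
    (-150, "Not Recommended"),
    (-450, "Strongly Not Recommended"),
    (-750, "Very Strongly Not Recommended"),
]

REGULAR_SEASON_SCALE = [
    (1300, "Absolute Highest Possible Recommendation"),
    (1050, "Very Highly Recommended"),
    (800,  "Highly Recommended"),
    (550,  "Recommended"),
    (350,  "Neither Recommended nor Not Recommended"),
    (100,  "Not Recommended"),
    (-150, "Strongly Not Recommended"),
    (-400, "Very Strongly Not Recommended"),
]

def evaluate(rating: int, scale) -> str:
    """Return the first category whose lower cutoff the rating meets."""
    for cutoff, label in scale:
        if rating >= cutoff:
            return label
    return "Absolute Lowest Possible Recommendation"

# Brown and Karl, from the November 2010 Look Ahead ratings:
print(evaluate(2199, PLAYOFF_SCALE))         # Absolute Highest Possible Recommendation
print(evaluate(-648, PLAYOFF_SCALE))         # Very Strongly Not Recommended
print(evaluate(221, REGULAR_SEASON_SCALE))   # Not Recommended
print(evaluate(1053, REGULAR_SEASON_SCALE))  # Very Highly Recommended
```

The four printed labels match the Brown/Karl evaluations given above.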
Brown is ahead of Karl when you compare the higher of their evaluations AND when you compare the lower of their evaluations.<br /><br /><span style="color:#ff6600;">EYEBALLING NUMERICAL DIFFERENCES</span><br />What if you are looking at ratings and for one reason or another you are not checking the evaluation scales? In this sub section we’ll give you some advice about how to interpret actual ratings and differences between ratings.<br /><br />Not counting once-in-a-century all-time greatest playoff coaches like Phil Jackson, the overall range of the Playoffs Sub Rating is going to be from approximately -1,000 to 2,500. The range is 3,500 points. The average at any time is going to be roughly 100, and the median, not counting the zero ratings, is going to be roughly 0. Since coaching playoff games is a high level skill, the median score is lower than the average score.<br /><br />To make quick eyeball evaluations, start with -1,000 and divide the range (of 3,500) into ten equal mini ranges of 350 points each. The first one would be from -1,000 to -651; the second one would be from -650 to -301, and so on. Assign a simple zero to 10 rating to each category:<br /><br />-1,000 to -651 > 1<br />-650 to -301 > 2<br />-300 to 49 > 3<br />50 to 399 > 4<br />400 to 749 > 5<br />750 to 1,099 > 6<br />1,100 to 1,449 > 7<br />1,450 to 1,799 > 8<br />1,800 to 2,149 > 9<br />2,150 to 2,500 > 10<br /><br />Scores less than -1,000 could be translated as zero while scores greater than 2,500 (such as with Phil Jackson) could be translated as “off the scale”. Now you can compare any number of coaches for the playoffs using very simple single numbers.<br /><br />For the regular season, the overall range of the Regular Season Sub Rating is going to be from approximately -500 to about 2,000. The range is 2,500 points. 
The average at any time is going to be roughly 400, and the median is usually going to be 200, which is the starting Sub Rating for all rookie coaches.<br /><br />To make quick eyeball evaluations, start with -500 and divide the range (of 2,500) into ten equal mini ranges of 250 points each. The first one would be from -500 to -251; the second one would be from -250 to -1, and so on. Assign a simple zero to 10 rating to each category:<br /><br />-500 to -251 > 1<br />-250 to -1 > 2<br />0 to 249 > 3<br />250 to 499 > 4<br />500 to 749 > 5<br />750 to 999 > 6<br />1,000 to 1,249 > 7<br />1,250 to 1,499 > 8<br />1,500 to 1,749 > 9<br />1,750 to 2,000 > 10<br /><br />Scores less than -500 could be translated as zero while scores greater than 2,000 (such as with Phil Jackson) could be translated as “off the scale”. Now you can compare any number of coaches for the regular season using very simple single numbers.<br /><br /><span style="color:#ff6600;">EYEBALL INTERPRETATION EXAMPLE</span><br />We’ll use Larry Brown versus George Karl again:<br /><br />Larry Brown / George Karl<br />Playoffs: 2,199 / -648<br />Regular Season: 221 / 1,053<br /><br />Simplified to the single digits, we have:<br /><br />Larry Brown / George Karl<br />Playoffs: 10 / 2<br />Regular Season: 3 / 7<br /><br />We could then (unofficially!) make an overall comparison. Let’s use a 50% multiplier for the playoffs being more important than the regular season. 
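The bucketing and weighting just described can be sketched in code. This is an illustration only: the mini-range boundaries follow the tables above, and the handling of exact boundary values and below-range scores (translated to zero, top scores capped at 10) is an assumption where the text leaves room.

```python
# Sketch of the "eyeball" bucketing: divide each sub rating range into
# ten equal mini ranges and assign a 1-10 score (0 for below-range
# scores, capped at 10 at the top of the range).

def playoff_bucket(rating: int) -> int:
    # Playoffs range is roughly -1,000 to 2,500: ten 350-point mini ranges.
    return min(10, max(0, (rating + 1000) // 350 + 1))

def regular_season_bucket(rating: int) -> int:
    # Regular season range is roughly -500 to 2,000: ten 250-point mini ranges.
    return min(10, max(0, (rating + 500) // 250 + 1))

def overall(playoff_rating: int, regular_rating: int,
            playoff_weight: float = 1.5) -> float:
    # Unofficial overall: playoffs counted 50% more than the regular season.
    return (playoff_bucket(playoff_rating) * playoff_weight
            + regular_season_bucket(regular_rating))

print(overall(2199, 221))    # Larry Brown: (10 x 1.5) + 3 = 18.0
print(overall(-648, 1053))   # George Karl: (2 x 1.5) + 7 = 10.0
```

These reproduce the single-digit simplifications above (Brown 10 and 3, Karl 2 and 7) and the weighted totals that follow.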
We get:<br /><br />Larry Brown Overall: (10 X 1.5) + 3 = 18<br /><br />George Karl Overall: (2 X 1.5) + 7 = 10<br /><br />Therefore, unofficially and roughly speaking, Larry Brown is almost twice as good a coach as George Karl.<br /><br /><span style="color:#ff6600;">======= SECTION SIX: CAUTIONS INCLUDING THE WELL KNOWN EXPERIENCE GAP PROBLEM =======</span><br />Since the Real Coach Ratings system is essentially two systems / models combined only unofficially into one overall result, we will discuss cautions separately for the two separate models. For each we will discuss statistical error. All statistical models contain some statistical error, but the actual amount varies radically depending on (1) how good the model is, (2) how large any sample sizes used in the model are, (3) the real nature of what is being studied, especially how variable or “wild” the underlying reality is, and (4) the effectiveness of any quality control and results validation procedures. The good models have amounts of statistical error so low that you can rely on the results for years and years and have more than a 99% chance of never being in error while relying on the results.<br /><br /><span style="color:#ff6600;">STATISTICAL ERROR IN THE REGULAR SEASON SUB RATING MODEL</span><br />How good the model is can always be argued, and naturally the designer will be at least a little biased in favor of his or her model. The QFTR Real Coach Ratings Regular Season Sub Rating model is based on two primary foundations: experience and wins and losses. For these, the highest downward bias is against newer coaches who have from the beginning been coaching poor teams. The highest upward bias is in favor of coaches who are poor in the playoffs but who have been repeatedly given above-normal players to coach. Unfortunately, both of these biases are rather large and mostly unavoidable. 
This is one of the big reasons why the RCR system consists of the two sub ratings (regular season and playoffs) and why overall ratings are published but are not officially given a lot of weight in discussions.<br /><br />The experience bias has been substantially reduced but not eliminated by progressively (in stages) eliminating experience points available to long veteran coaches. Also, at the low end, rookie coaches are given 200 games' worth of experience from the get go.<br /><br />A very substantial amount of experience bias remains, however. If all of the experience bias were removed, the experience factor would be meaningless. That would not make sense because in many cases coaches do get a little better with experience.<br /><br />The problem is that there are a fairly large number of exceptions to the rule that coaches get better with experience. A minority of coaches do not get substantially better with experience, either because they are brilliant coaches to begin with who can’t possibly get substantially better, or because they learn some wrong things from their experience and so, on net, they stay the same or actually get worse with experience. Unfortunately, there is no known valid way to determine on a case-by-case basis how experience changes various coaches. You cannot simply use changes in win-loss percentages over time because (1) there are other variables that could explain all of those changes and (2) in most cases there are not enough such changes to constitute an adequate sample size.<br /><br />With regard to the experience and the wins and losses foundations, the bad news is that we are left with a moderate amount of bias (as just discussed), but the good news is that we are left with no sampling error, simply because no samples are needed: one hundred percent of the information is available and is used. 
By contrast, other possible regular season coaching variables, such as opinions of sports writers, opinions of players, etc. are subject to very high bias and also high statistical sampling error; QFTR would never condone usage of opinions in any of our models regardless of how many of them we could get and regardless of which opinions we could get. The very fact that mere opinions are not valid is why we spend all the time on ratings systems such as RCR in the first place.<br /><br />Unfortunately, moderate variability is believed to exist with respect to how variable or “wild” the real nature of coaches in the regular season is. While you are never going to see a completely incompetent coach coaching an NBA team in the regular season, you will at the low end see moderately or “somewhat” incompetent coaches from time to time and at the high end there will be brilliant coaches from time to time.<br /><br />Very unfortunately, the RCR system by itself can NOT automatically identify brilliant regular season coaches who will be great playoff coaches. The regular season sub rating is not a fine enough instrument to accomplish that even if an attempt is made to flush out bias when looking at a particular coach. Further, if a brilliant coach is stuck with especially poor players, there will be little if anything that even he can do that will show up in anything you can easily see.<br /><br />However, if you are looking for a great coach, RCR can in some cases point you in the right direction. For example, the playoffs sub rating might show that a brilliant coach has won one or two playoff games he was not supposed to win in just one or two series. Then you might be aware that the team he is coaching is doing better than most people expected. 
These two pieces of evidence (neither of which comes from the regular season sub rating system) would strongly suggest (but would not prove beyond a shadow of a doubt) that you have discovered a great coach.<br /><br />The regular season sub rating receives the same high level of general quality control that all QFTR systems and ratings do. General quality control primarily means that the model as a whole and everything specifically in it are continually reviewed to make sure they exactly match everything known about reality (in this case, pro basketball coaching). Quality control also means that all correlations in the model are supposed to closely match correlations in the real world. Recalibration and iteration are among the primary tools used to achieve quality control.<br /><br />Note that, unlike many other basketball statistical models, QFTR models and systems are subject to continual revisions and expansions. However, as of late 2010, QFTR asserts that both the regular season and the playoff sub rating components of the RCR system are well and extensively developed and will not in the future be subject to major overhauls or major expansions. More specifically, most or all variables that can correctly be incorporated have already been correctly incorporated. Future changes will most likely be limited to relatively minor adjustments that will change results only a little.<br /><br />Variable validation is where a specific key result has an average value (or perhaps some other statistical attribute) that is in accordance with model design. In the moderately complicated models QFTR uses, validation of key variables is often possible. But in most simple models, no such validation is possible. 
The regular season sub rating model is (intentionally) simple and there are no variables in it that can be or need to be statistically validated.<br /><br /><span style="color:#ff6600;">STATISTICAL ERROR IN THE PLAYOFFS SUB RATING MODEL</span><br />How good the model is can always be argued, and naturally the designer will be at least a little biased in favor of his or her model. The QFTR Real Coach Ratings Playoffs Sub Rating model is based first and foremost on efficiency of teams, which is extremely highly correlated with NBA playoff results. The model sets playoff expectations based on those efficiencies and then looks at actual results versus those expectations for each coach. QFTR is extremely confident that this is a very valid and strong model for correctly comparing coaches with respect to playoffs coaching.<br /><br /><span style="color:#ff6600;">WARNING: STATISTICAL ERROR IN PLAYOFF SUB RATINGS FOR COACHES WHO HAVE COACHED FEWER THAN 25 PLAYOFF GAMES MAY BE EXCESSIVE</span><br />This is the most important caution and warning! For coaches who have coached fewer than 25 playoff games, playoff sub ratings are calculated and published despite being subject to possible excessive statistical error, but no official recommendations are given. Therefore, if in any way you use playoff sub ratings for coaches who have coached fewer than 25 playoff games, do so with extreme caution. For these coaches, variances between expected and actual playoff results could be caused mostly or entirely by injuries rather than coaching. Thus, to use sub ratings for coaches who have coached fewer than 25 playoff games, you would have to research which players didn’t play in the series to see if your inexperienced playoff coach was lucky or not with respect to injuries (to his players and to the players of his opponents). The “manual injury adjustment” of the Real Team Ratings system can be used to do this. 
See the User Guide to Real Team Ratings.<br /><br />Except for coaches who have coached few playoff games, especially fewer than 25, the variances between expected and actual playoff results are going to be due mostly to coaching. The other possible causes (injuries, and players playing better or worse than they did during the regular season for reasons unrelated to coaching) are going to mostly cancel themselves out statistically for coaches who have coached more than 25 playoff games, and are going to virtually completely cancel themselves out for coaches who have coached more than 50 playoff games. As the number of playoff games coached rises from 25 to 50, little of the difference will be due to the other factors and most of the difference will be due to the coaching. As the number of playoff games coached rises above 50, essentially all of the difference between expected wins and actual wins will be due to the coaching. Therefore, QFTR relatively confidently issues official recommendations for all coaches who have coached between 25 and 50 playoff games, and QFTR extremely confidently issues official recommendations for all coaches who have coached at least 50 playoff games.<br /><br />The next thing to look at is how variable or “wild” the thing we are measuring is in the real world. One of the very most important themes of the entire QFTR project is that coaches in the playoffs vary by more than most people think, and by enough to easily change the outcome of close series. At the very least it can be said that coaches in the playoffs have a fairly high variability: the worst of them are much worse than the best of them. 
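The recommendation tiers just described can be summarized in a short sketch. The function name and the tier wording below are paraphrases for illustration, not official QFTR terminology.

```python
# Sketch of the recommendation tiers described above, keyed to the number
# of playoff games a coach has coached. The tier wording paraphrases the
# guide; it is not an official QFTR label.

def playoff_rating_confidence(playoff_games_coached):
    if playoff_games_coached < 25:
        # ratings are calculated and published, but no official
        # recommendation is given at this experience level
        return "no official recommendation; possible excessive error"
    if playoff_games_coached < 50:
        return "official recommendation, relatively confident"
    return "official recommendation, extremely confident"

for games in (10, 30, 60):
    print(games, "->", playoff_rating_confidence(games))
```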
Roughly speaking, the worst playoff coach needs at least one more star player than the best playoff coach needs in order to have an even chance of beating the best coach.<br /><br />But with respect to cautions, the real issue with regard to variability is not with what we are focused on as the end product but with other “wild” factors that could explain differences in ratings. Unfortunately, injuries are a very large wild factor. As already explained, this factor invalidates playoff sub ratings for coaches who have coached fewer than 25 playoff games and makes caution advisable regarding playoff sub ratings for coaches who have coached between 25 and 50 playoff games.<br /><br />In order to validly use playoff sub ratings for inexperienced playoffs coaches, you must adjust the ratings for injuries. Using the manual injury adjustment as shown in the User Guide to Real Team Ratings is recommended for coaches who have coached between 25 and 50 playoff games, and it is required for coaches who have coached fewer than 25 playoff games. QFTR has no specific procedure for this at this time; you have to devise your own adjustments to the playoffs sub ratings based on the injuries you find out about.<br /><br />Another possible wild factor is players playing better or worse than they did in the regular season for reasons unrelated to the coaching. QFTR research indicates that this is a relatively minor factor which would not cause much statistical error at all, except possibly for coaches who have coached fewer than 10 playoff games, and even for these coaches it would be unlikely.<br /><br />Other than injuries and players playing better or worse “on their own”, there are no other known factors (other than coaching, obviously) that could explain differences between expected and actual results in pro basketball playoff games.<br /><br />The playoffs sub rating receives the same high level of general quality control that all QFTR systems and ratings do. 
General quality control primarily means that the model as a whole and everything specifically in it are continually reviewed to make sure they exactly match everything known about reality (in this case, pro basketball coaching). Quality control also means that all correlations in the model are supposed to closely match correlations in the real world. Recalibration and iteration are among the primary tools used to achieve quality control.<br /><br />Variable validation is where a specific key result has an average value (or perhaps some other statistical attribute) that is in accordance with model design. In the moderately complicated models QFTR uses, validation of key variables is often possible. But in most simple models, no such validation is possible.<br /><br />For the moderately complicated playoffs sub rating model, a validation on an extremely key variable is possible and has been done. This variable, expected versus actual wins for away teams, is at the core of the model, and it should have an average value of zero once the number of playoff games studied is large enough to be rid of any significant sample size error. The database contains enough playoff games that any error from sample size is extremely small, so that is not a problem. The reason the value should be zero is that the scale which translates differences in net efficiencies into expected number of wins is correct only if real world results produce a long term average of:<br /><br />Expected number of wins minus actual number of wins of the away teams equals zero (or at least very close to zero).<br /><br />Validation was performed and the initial result was that the scale and the model were slightly in error. After recalibration, validation was redone. Now, the sum of all of the expected number of wins minus the sum of all of the actual number of wins, divided by the number of playoff series, equals .108. 
This is extremely close to zero and additional recalibration is neither required nor recommended. However, later in 2011 another recalibration may possibly be performed. Alternatively, the home court adjustment may be very slightly tweaked.<br /><br />Other less important requirements for validity are that the overall range (of the scale) is correct and that the rates of change in various sections of the range are correct. The overall range has been verified as correct; specifically, if the difference in net efficiencies is twelve or greater, it is essentially impossible for the lower team to win even one playoff game in a series (unless there is a major injury to the higher team).<br /><br />Although the rates of change in various sections of the range have not been completely and exactly verified because it is extremely difficult and time consuming to do so, it is unnecessary to do this because any possible error due to the rates of change in sections of the range is very small. Specifically, the highest possible error would translate into approximately five points (up or down) for a coach in a playoff series.<br /><br />Now we will proceed to a few other cautions.<br /><br /><span style="color:#ff6600;">BE CAREFUL REGARDING THE VERY LARGE TIME SCALE OF THESE RATINGS</span><br />Keep in mind that each coach is rated using information from every season that he has ever been a head coach in the NBA. Some coaches will currently be substantially better than their overall career ratings indicate. On the other hand, it is very possible that a small number of current coaches could be substantially worse than their overall career ratings indicate. Much more likely would be that a very small number of coaches would be just slightly worse right now than they have been on average.<br /><br />While I am on this subject, I want to warn you to not make the assumption that all or even most coaches get better as they accumulate more and more experience. 
Most coaches who have coached for less than five seasons will be getting at least a little better from one year to the next. Many coaches who have coached for between five and ten seasons will be getting a little better from one year to the next. Beyond ten years, very few coaches will be getting even a little better from one year to the next.<br /><br />In any event, there is no empirical evidence I know of to back up a sweeping generalization stating that coaches always get better with experience, nor is that assumption obvious or even likely to be true most or much of the time.<br /><br />It is very plausible that most coaches do not really improve that much after roughly five or six years of experience. One thing that might prevent the more experienced coaches from automatically getting better is that many of the most experienced coaches may not have completely updated their beliefs and coaching schemes to reflect the current ways of basketball. Some older coaches may not have fully adjusted to rule changes of recent years, for example. They may be hurting their teams a little or even a lot by persisting with strategies and tactics that used to work well years ago but are not working very well in the NBA in 2011 and 2012.<br /><br /><span style="color:#ff6600;">THE INFAMOUS WIDELY DIFFERENT AMOUNTS OF EXPERIENCE PROBLEM</span><br />In the very early days of RCR back in 2007, it was feared that the widely different amounts of experience among NBA coaches would doom the system to either total failure, or at the very least, to being much less valid and reliable than Real Player Ratings are. This problem originates in the huge discrepancies in the amount of experience between long-term veteran coaches and much younger coaches. 
To some extent this makes comparing NBA coaches like trying to compare apples and carrots rather than like trying to compare various apples.<br /><br />In general, some points of comparison will be biased in favor of newer coaches while other points of comparison will be biased in favor of long veteran coaches.<br /><br />As recently as 2009 QFTR was still very worried about this. But after several years of thinking about the problem and introducing changes to RCR in response to it, we think we have now largely “solved” it. That is, we think now that the ratings and the evaluations based on the ratings that we publish are fair and unbiased to all coaches regardless of their experience level.<br /><br />The following aspects of RCR largely solve the “apples and carrots problem”:<br /><br />(1) The experience points available for regular season games (for the regular season sub ratings) differ depending on the experience level of the coach. Long veteran coaches get virtually no experience points; newer coaches get the maximum. Coaches in the middle get about half way between the maximum and the minimum.<br /><br />(2) Rookie coaches are given 200 experience points from the get go, which eliminates the experience bias that would otherwise exist against those brand new coaches.<br /><br />(3) No experience points are given for any playoff game coached beyond 200 playoff games. This cuts down on the bias in favor of the long veteran coaches who have coached the most playoff games.<br /><br />(4) No evaluation scales, no official evaluations not using scales, and no official recommendations are produced or given for the overall ratings. At this time, only unofficial usage of overall ratings is done. This is obviously a powerful way to respond to the problem; it’s basically a divide and conquer strategy, where the overall ratings exist but are largely ignored in favor of the two sub ratings that add up to the overall ones. 
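Adjustments (1) through (3) above can be sketched roughly as follows. Only the structure comes from this guide; the per-game point values and the season cutoffs used here are hypothetical, since the exact numbers are not published.

```python
# Sketch of experience-point adjustments (1)-(3) above. The structure
# (rookie grant of 200, per-game credit that tapers with tenure, and a
# 200-game cap on playoff experience) comes from the guide; the specific
# point values and season cutoffs below are hypothetical.

PLAYOFF_GAME_CAP = 200   # adjustment (3): no credit beyond 200 playoff games
ROOKIE_GRANT = 200       # adjustment (2): rookie coaches start with 200 points

def regular_season_points_per_game(seasons_coached):
    """Adjustment (1): newer coaches get the maximum per-game credit,
    long veterans virtually none, and coaches in the middle about halfway."""
    if seasons_coached <= 3:
        return 1.0    # hypothetical maximum credit
    if seasons_coached <= 10:
        return 0.5    # about halfway between maximum and minimum
    return 0.05       # virtually nothing for long veterans

def experience_points(seasons_coached, regular_games, playoff_games):
    regular = regular_games * regular_season_points_per_game(seasons_coached)
    playoff = min(playoff_games, PLAYOFF_GAME_CAP)  # hypothetical 1 point/game
    return ROOKIE_GRANT + regular + playoff

# A brand-new coach and a 15-season veteran, showing the taper and the cap:
print(experience_points(0, 0, 0))
print(experience_points(15, 1200, 250))
```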
The main reason why ignoring the overall ratings is advised is that certain long veteran coaches are poor playoff coaches but decent to good regular season coaches, and they also have a lot of regular season experience points. Therefore, the overall ratings of these coaches are very misleading when it comes to the playoffs.<br /><br />Even though QFTR does not officially use the overall Ratings, we unofficially do, and may officially use them if and when a valid way to precisely calibrate the regular season and playoff sub ratings becomes available. The following cautions apply to the overall ratings.<br /><br /><span style="color:#ff6600;">CAUTIONS REGARDING THE OVERALL REAL COACH RATINGS</span><br />Where we are right now is that a small piece of the experience discrepancy problem is still left in the overall ratings. In a nutshell, in the overall ratings we decided to take the risk that the problem is not completely solved so as to avoid being overly harsh toward certain long-term coaches. "First, do no harm..." Although many hours have been spent trying to solve the problem, and although much progress has been made, the RCR system still cannot completely bridge the gap created by the huge differences in experience.<br /><br />The worst of the long-term veteran coaches most likely have overall ratings that are higher than what they really should be. If a Coach has received some "lucky breaks" by not being fired after bad losing seasons, and/or after bad losses in the playoffs, and he has over the years now accumulated 1,000 or more regular season games and 100 or more playoff games, his rating will very likely still be distorted on the high side relative to the other coaches. This is because the long-time veteran Coach, who could have been fired a long time ago but was not fired, will max out on the experience points, and he will also have a few winning seasons to go with the losing seasons. 
The sum of the maximum experience points plus any positive net from winning seasons will tend to more than offset all the losses from the year(s) he might have been fired, despite the heavy negatives that losses carry.<br /><br />Another way of thinking about this issue: assuming a long-term veteran Coach has an inflated rating due to the above, keep in mind that the Coach would not even be in the ratings had he actually been fired. Coaching a professional sports team is about the worst job in existence for job security, since the vast majority of coaches are involuntarily fired. If all coaches who are “supposed to be” fired were fired, this distortion would disappear from the RCR system!<br /><br />Yet another way of focusing on this problem is realizing that pro basketball coaches are fired or not fired based on different criteria, because managers and owners of pro teams do not all think in similar ways.<br /><br />We cannot simply remove experience from the set of factors, since in virtually every career, the more experience you have, the better you tend to be. 
Moreover, even if we did reduce or remove the experience factor, the same problem would still be there in the case of coaches who probably should have been fired, but are not, and then end up fortunately coaching very skilled teams in subsequent years, thus piling up wins with those teams.<br /><br />In other words, we have no choice but to proceed as if all coaches face the same criteria as to whether they are fired or not, even though we know that some coaches, especially veteran coaches, are treated much more leniently than others.<br /><br /><span style="color:#ff6600;">CAUTION ABOUT THE AGE OF COACHES</span><br />One other thing to keep in mind about long-term veteran coaches (the ones with more than 1,000 regular season games coached) is that once such a Coach gets older than 60, 65, and then maybe even 70 years old, that Coach's abilities will probably be less than they were when he was younger. By contrast, almost all coaches with little experience are under the age of 55.<br /><br />For example, Utah Jazz Coach Jerry Sloan is 68 years old on March 28, 2010, so it is possible that he is a little too old now for maximum effectiveness.<br /><br />The bottom line is that there will be a small number of older, veteran coaches whose ratings are misleading on the high side. Unfortunately, we are unable to completely correct for this or to properly estimate the amount of the unavoidable distortion at this time. 
So we advise you when looking at the ratings to make sure you give the benefit of the doubt to younger coaches who seem to have good potential.<br /><br /><span style="color:#ff6600;">PROBABLE DOWNSIDE DISTORTIONS IN THE OVERALL RATINGS</span><br />If you have a younger coach who has just started out, and he has a bad team to start with (and a lot more new coaches start with bad teams than good ones) then his rating will be much lower than it will be in future years if he avoids getting fired and in the future gets much better teams to work with.<br /><br />However, it is also very possible that in most cases the worst teams get only the medium and poor coaches, that in other words the really good coaches never have to start out coaching a bad team, so that any downside distortions are small and mostly moot points.<br /><br />Here is an interesting excerpt from what was probably the very first User Guide for Real Coach Ratings, written in 2008 when I tackled the big experience differences problem for the first time:<br /><br />“As I was working on this I often had a sinking feeling that trying to fairly compare coaches with more than 10 years of experience with those with less than 2 years experience would be in the end impossible. But I persevered and scrapped and fought my way to the goal line and got it done. I achieved all of the balancing that I needed to achieve. Specifically, for example, I kept the points given for experience within reason, while making sure that regular season and playoff losses were penalized to the full extent they should be.”<br /><br /><span style="color:#ff6600;">FUTURE CHANGES TO REAL COACH RATINGS</span><br />Are the factors set in stone forever and ever? No, and unlike many sites that make use of statistics, QFTR will make radical changes in models and procedures whenever new basketball discoveries are made. But the odds are that changes to the RCR system will be relatively minor in future years, with one notable exception. 
As you may already be aware, QFTR will try in the future to develop a valid way to combine the regular season sub ratings and the playoff sub ratings, so that the overall ratings are considered completely valid and official. The only way to do this is to achieve a total solution to the experience discrepancy problem.<br /><br />In summary, although this is not a perfect system, it is at the very least a very good system, and it is light years ahead of having no system at all with which to fairly compare coaches of radically differing amounts of professional basketball head coach experience. In fact, as surprising as this may sound, RCR is literally the only publicly published coach rating system that is based on sound statistics, sound statistical modeling, real information, and extensive quality control.Unknownnoreply@blogger.comtag:blogger.com,1999:blog-5772221547364193097.post-32404392150946465292011-01-03T17:44:00.001-08:002011-03-28T07:35:20.226-07:00User Guide for Real Team Ratings as of January 2011<em>Note: This guide is fully updated and reflects the state of the art for Real Team Ratings (RTR) as of January 4, 2011. In general, Quest for the Ring (QFTR) ratings systems do not change unless and until a new User Guide is available and is published. So until and unless this Guide is updated and republished, this Guide explains completely and accurately how RTRs at QFTR have been constructed. </em><br />
<em></em><br />
<em>Previous versions of this User Guide are no longer relevant and may be removed from QFTR Reference. Prior versions that are not removed are clearly labelled at the top as legacy versions. </em><br />
<br />
<span style="color: #ff6600;">=====SECTION ONE: INTRODUCTION=====</span><br />
<br />
Real Team Ratings (RTR) is a custom designed, accurate way to rate and rank NBA teams during the regular season. It is designed to rank and rate the teams according to how they would (and in many cases actually will) do in the playoffs. It is not designed to rate and rank according to any theory about how basketball “should be” or “is supposed to be” played. Rating and ranking how easily each team can win playoff games is the one and only ultimate objective of RTR.<br />
<br />
As with all Quest for the Ring (QFTR) systems, RTR is as complicated as it needs to be to meet the objectives for it and no more complicated than that. QFTR always makes sure models and systems are no more complicated than they have to be, because the more complicated formulae, models, and systems are, the less robust they are and the more likely it is that they do not correctly and accurately reflect reality. Unfortunately, the vast majority of basketball statistics sites and seemingly all "statistics gurus" use formulae, models and systems that are needlessly and excessively complicated. Those systems carry needlessly detailed assumptions that do not accurately reflect reality. At the extreme opposite end of the spectrum, much of the general public thinks that statistics, whether simple or complicated, can never accurately reflect reality, and this is also dead wrong.<br />
<br />
So where is the happy medium to be found? It’s found here at QFTR, which bridges the gap between, on the one hand, the big majority of the public and an even bigger majority of general basketball sites, which incorrectly think that statistics are not important, and, on the other hand, a very small number of very statistically-oriented basketball sites (which are really academic sites with basketball as the subject matter for academic work). These statistical basketball sites very often go overboard with statistics and use unnecessarily complicated formulas and models.<br />
<br />
Meanwhile, QFTR goes for and hits that sweet spot right down the middle that everyone else generally misses.<br />
<br />
Quest spends a lot of time making absolutely sure that our formulas and models precisely reflect reality, whereas other statistical sites spend most of their time on the statistics themselves. We keep revising formulae and models to reflect the latest basketball knowledge, up to and including completely getting rid of those that don't stand the test of time, whereas the statistical sites virtually never get rid of any of their complicated formulae and models. To sum this up, at QFTR basketball comes first and statistics is just a tool, whereas at other basketball sites that use statistics heavily it is the opposite: statistics comes first and basketball is just a tool.<br />
<br />
<span style="color: #ff6600;">BASKETBALL PLAYOFF RESULTS ARE RELATIVELY EASY TO PREDICT</span><br />
Of all the popular American sports Leagues, the NBA is the one where the better team is most likely to avoid being upset in the playoffs. In other words, the NBA playoffs are more predictable than playoffs in any other major sport. There really are some right ways and many wrong ways for a team to play the game if the objective is defeating other teams in the playoffs. RTR is designed to identify and measure which basketball characteristics are the ones that will win playoff games and to rate and rank teams according to those characteristics.<br />
<br />
RTR can therefore also be used, among other things, to determine whether the quality of play of various players led to an upset, to signal whether coaching led to an upset, and to get a good idea of how much better or worse than expected teams played in playoff series.<br />
<br />
In general, factors that only sometimes impact winning are NOT included; only factors that always or at least almost always impact winning are included. Also in general, major factors that always or almost always impact winning are included via a separate factor, whereas factors that always or almost always impact winning but that are not major are incorporated in (or "contained in") other factors.<br />
<br />
Real Team Ratings (RTR) is NOT simply a system that shows how well the teams are doing in the regular season. Instead, it is a rating system designed to reveal each team's capability of winning playoff games and series.<br />
<br />
The ratings are calculated for all teams, even though 14 of the 30 NBA teams do not qualify for the playoffs. Even though they will not be playing any playoff games, the ratings for the lower teams nevertheless give an accurate measure of how well those teams would most likely do if they were in the playoffs. So for those lottery teams, RTR is an interesting hypothetical.<br />
<br />
<span style="color: #ff6600;">BRIEF HISTORY OF REAL TEAM RATINGS</span><br />
Quite honestly this system started out in a more crude fashion than do most systems here at Quest for the Ring. Therefore, there were several major changes to the system historically.<br />
<br />
For example, in 2009, the RTR rating system was much improved from prior versions. It was improved to make absolutely certain that you can predict the outcome of the playoffs in advance as accurately as possible. All crucial factors except for home court advantage, the injury situation, and some aspects of coaching in the playoffs versus the regular season were now included and weighted very carefully. See below for how to adjust RTR scores for the first and second of these three items. Specifically, the biggest and most important improvement for 2009 and beyond was the introduction of points for wins over and points subtracted for losses to the top sixteen teams (which would be the playoff teams themselves.)<br />
<br />
In 2010 RTR was upgraded substantially (but not quite as dramatically as in 2009). In early 2010 the important intermediate level factor Recent Wins and Losses was introduced. In very late 2010 the Paint Defense factor was added. The defense overweight factor remained, so the net effect is that paint defense is overweighted relative to perimeter defense.<br />
<br />
Finally, in late 2010 all of the factors were recalibrated to reflect state of the art knowledge of exactly how playoff games and NBA Championships are won. Recalibration is critical because that is how optimization is achieved. All of the pieces have to fit together in just the right way. Much iteration is involved. One of the highlights of the recalibration was that the smaller factors were upgraded to become not as small as they were. This was done mostly to reflect the real world reality that the smaller factors determine many playoff series (especially Conference and NBA finals) because the teams are very close after you look at the larger factors, so then the smaller factors decide it.<br />
<br />
<span style="color: #ff6600;">SECTIONS OF THIS GUIDE</span><br />
This Guide is divided into six primary sections and a special introduction to three of the sections. Within each section there are sub sections indicated by headers in capital letters. The sections are:<br />
<br />
--Section One: Introduction<br />
--Section Two: Discussion of the Seven Factors<br />
--Section Three: Technical Discussion of the Seven Factors<br />
--Introduction to Sections Four, Five and Six: How and When to use the Ratings and These Sections to Accurately Predict Playoff Series<br />
--Section Four: Interpretation of Ratings and Predicting Playoff Series<br />
--Section Five: Cautions<br />
--Section Six: Manual Injury Adjustments<br />
<br />
<span style="color: #ff6600;">=====SECTION TWO: DISCUSSION OF THE SEVEN FACTORS=====</span><br />
There are seven factors in all, three larger ones and four smaller ones.<br />
<br />
LARGER FACTORS<br />
--Net Efficiency<br />
--Performance versus playoff teams<br />
--Recent wins and losses<br />
<br />
SMALLER FACTORS<br />
--General defense overweight adjustment<br />
--Paint defense<br />
--Quality of offense<br />
--Pace<br />
<br />
<span style="color: #ff6600;">NET EFFICIENCY</span><br />
Efficiency in basketball is usually considered to be the number of points per 100 possessions. The original and continuing foundation of Real Team Ratings (RTR) is defensive and offensive efficiency. But sometimes, and more often than you think, there will be a team which has a high net efficiency (offensive minus defensive efficiency) but isn’t playing in the right or smart way for winning playoff games. Such a team wins a lot of regular season games against mediocre and bad teams but then gets bounced out in the first, second, or third round of the playoffs, sometimes by a team that didn’t win as many games against the middle and low end teams. Conversely, there are sometimes teams which have a surprisingly low net efficiency but end up in the NBA Championship, even possibly winning it. The best example of this is the 1994 and the 1995 Houston Rockets.<br />
<br />
But the general rule is that net efficiency alone predicts who will win playoff games and the Championship. Therefore, net efficiency is the most important factor and it is the foundation of RTR. But since there are a substantial number of exceptions to the rule, other factors need to be identified and included in RTR.<br />
<br />
Teams can have the same net efficiency with very different team makeups and/or strategies. For example, a team with a mediocre offense but the best defense in the League could have the same exact net efficiency as a team with the best offense in the League but only a mediocre defense.<br />
<br />
There are two medium level factors. Both of them look at actual wins and losses. Since these factors combined are almost as important as net efficiency, anyone who claims that RTR does not consider actual wins and losses (but only performance statistics) is completely wrong. In fact, RTR blends both key performance measures and actual wins and losses in an optimal way.<br />
<br />
<span style="color: #ff6600;">PERFORMANCE VERSUS PLAYOFF TEAMS</span><br />
The first of the two medium level factors is called “Performance Versus Playoff Teams” and as the name suggests is the ability of teams to win games against the better and the best teams. Wins versus losses against the 11th through the 16th best teams (out of 30 NBA teams) are weighted at one while wins and losses against the best 10 teams are double weighted.<br />
<br />
Take two teams with about the same regular season records and maybe even about the same net efficiencies and beware, because you might actually be looking at two very different teams when it comes to the playoffs, one much better than the other. One reason is that some teams and some types of coaching do better against medium and lower teams in the regular season than they do in the playoffs against the best teams. But it is not simply that it is easier to win games against all teams in the regular season (which it is, because the regular season is less intense than the playoffs). It is also that for the playoffs, the game of basketball itself shifts and becomes a slightly different game. What teams do is rewarded or penalized a little differently in the playoffs compared with the regular season. And even little changes will often determine who wins playoff series and the Championship when two closely matched teams are playing. When the game changes a little for the playoffs, the coaching, strategies, and tactics of some teams will become more of an advantage than they already were. Meanwhile, other teams will be left “holding the bag”, unable to win the Conference Finals or the Championship when their coaching, strategies, and tactics become more of a disadvantage.<br />
<br />
<span style="color: #ff6600;">RECENT WINS AND LOSSES</span><br />
The Real Team Ratings system was substantially improved in April 2010 with the advent of the second of the two medium factors. This factor is called “Recent Wins and Losses” and it reflects recent performance (in about the last two months). This partially gets at several previously ignored items that will help determine who will win and lose in the playoffs.<br />
<br />
The key features and attributes of the recent wins and losses factor are:<br />
<br />
--Functionally it overweights the most recent performance, using only the most recent 25 games.<br />
<br />
--It factors in momentum and morale going into the playoffs.<br />
<br />
--It factors in coaching strategies and tactics that have finally produced good (or bad) results just in time for the playoffs. In other words, it substantially but indirectly and roughly reflects the likelihood that coaching strategies and tactics will work or not in the playoffs.<br />
<br />
--It factors in the performance of new players acquired for the stretch run of the regular season and for the playoffs.<br />
<br />
--It substantially but indirectly and inexactly reflects the current injury situation of teams. It especially factors in injuries that have occurred within the last couple of months or so and that may be carrying over into the playoffs. In other words, this factor is extremely useful for correcting RTR for injuries that occurred in February and March.<br />
<br />
--The last five games of the Regular Season are ignored due to playoff coaches resting key players and due to other distortions. So the final Real Team Ratings for a season will cover from the 53rd game of a team through and including the 77th game of a team, while games 78 through 82 are ignored.<br />
<br />
<span style="color: #ff6600;">LIMITATIONS OF THE RECENT WINS AND LOSSES FACTOR</span><br />
Injuries that occurred in the last few weeks are only partially corrected by this factor. Moreover, injuries occurring during the playoffs themselves remain completely outside of the RTR system. Finally, when one or more players were injured and unavailable in February / March but are completely ready to go for the playoffs, the new factor may inadvertently distort the rating of the team downward.<br />
<br />
For these and other reasons, the "Manual Injury Adjustment" is essential. See Sections Five and Six for complete information about this.<br />
<br />
<span style="color: #ff6600;">ALTHOUGH SMALL, THE FOUR SMALLER FACTORS RATHER OFTEN DECIDE THE CONFERENCE FINALS AND THE CHAMPIONSHIP</span><br />
There are four smaller factors. Since in most years the best teams are closely rated after the biggest three factors are calculated, the smaller factors are rather often going to decide who wins the NBA Championship. In other words, when you get to the Conference finals and the NBA Championship, the teams often have fairly close net efficiencies and fairly close wins and losses against top teams and fairly close wins and losses recently. When those things are very close, the smaller factors will decide who wins the Conference Finals and the NBA Championship.<br />
<br />
Note that unfortunately injuries sometimes trump every single factor, large and small, and determine by default who wins playoff series. But injuries do not often decide who wins the Championship itself because most teams which reach the Championship do not have substantial injury problems.<br />
<br />
<span style="color: #ff6600;">GENERAL DEFENSE OVERWEIGHT ADJUSTMENT </span><br />
The first of the four smaller factors is “general defense”. Actually, the full name for this is “general defense overweight adjustment”. So many people know that defense is more important in the playoffs than in the regular season that it is practically common knowledge. This factor slightly overweights defensive efficiency, whereas the much larger net efficiency factor treats offensive and defensive efficiency equally. As a reminder, although numerically the general defense factor is relatively small, it often (along with the other “small” factors) decides actual Conference and NBA Championships.<br />
<br />
Aside from being a sub rating, this shows you exactly how the NBA teams rank defensively.<br />
<br />
<span style="color: orange;">DO NOT MAKE THE MISTAKE OF OVERSTATING THE IMPORTANCE OF DEFENSE</span><br />
Note that there is no corresponding general offense sub rating in RTR. This is on purpose of course because, again, defense is a little more important in the playoffs than is offense. Statistically, if there was a general offensive rating it would offset the defensive one and both of them would then be a meaningless waste of time.<br />
<br />
But don’t fall into a trap here; don’t get carried away. In basketball, defense is relatively less important than it is in many, and very possibly most, other sports. Basketball is designed to be a game that favors the offense more than many, many other sports do.<br />
<br />
The tightrope here is that on the one hand you have to realize that defense is more important in the playoffs than it is in the regular season. On the other hand, you have to understand that in basketball, how important defense can be is fairly strictly limited. Defense alone cannot possibly win you a Championship in basketball.<br />
<br />
By contrast, in American pro football the limitations on how important the defense can be are far weaker, meaning that unlike in basketball, you can win the Super Bowl Championship pretty easily with the best defense in the League but a below average offense. For example, the Pittsburgh Steelers have done this several times over the years. But in basketball it is extremely difficult (and you are going to need some luck) to win the Championship with even the best defense in the League but only the 20th best offense (out of 30). What you really need in basketball to go along with the best defense in the League is at the very least the 15th best offense (out of 30); and to have a good chance you need at least the 10th best offense to go along with the best defense.<br />
<br />
So even though in basketball defense is more important in the playoffs than it is in the regular season, the magnitude of the change is not really all that large; in basketball defense is only a little more or, arguably in some cases, moderately more important in the playoffs than in the regular season.<br />
<br />
Note also that, ironically, the teams that are the very best defensively in the regular season are unable to increase the quality of their defending in the playoffs as much as teams that come into the playoffs with lower ranked defenses. Coming into the playoffs, teams with one of the best two or three offenses in the League but whose defenses are down around 10th best are generally more likely to win the Championship than teams which come in with one of the top two or three defenses but only about the 10th best offense.<br />
<br />
It’s obvious that teams have the opportunity to be better defensively in the playoffs than they were in the regular season; after all, this happens all the time. Defensively in the playoffs, it’s mostly a matter of doing the same things that were done in the regular season harder, faster, and/or smarter. But the opportunity for a team to be better offensively in the playoffs than it was in the regular season is very limited. In other words, offensively, what you saw in the regular season is pretty much all you are going to see in the playoffs. Teams should assume they can improve a little defensively but they should never ever assume they can get substantially better offensively when the playoffs come, because that is unlikely to happen.<br />
<br />
This is indirectly another reason why teams that run slightly organized offenses are much smarter and more likely to win The Quest for the Ring than are the teams that run more street ball type offenses. Coaches who run the street ball type offenses often think that that strategy will work better in the playoffs than in the regular season. They may think that, unlike a slightly organized offense, a street ball type offense can be ramped up in the playoffs. And they may think that a street ball type offense is exactly what you want to try to offset the ramped up defenses you see in the playoffs.<br />
<br />
All of these suppositions are false to one extent or another. First, street ball type offenses work less well in the playoffs against ramped up defenses than they do in the regular season against lesser defenses. Second, you cannot substantially ramp up any type of offense in the playoffs, including the street ball type. For offense, more so than for defense, it is crucial that in the regular season you are playing in a way that will allow you to win in the playoffs. For defense it is strongly recommended, but not required, that you play in the regular season in a way that will allow you to win in the playoffs. Third, ramped up defenses are relatively more effective against street ball type offenses than they are against slightly organized offenses.<br />
<br />
<span style="color: #ff6600;">PAINT DEFENSE</span><br />
The second of the four smaller factors is paint defense. Paint defense is measured by how many points are scored by opponents from within the painted area. When defense is ramped up in the playoffs, it is ramped up even more so “in the paint” than outside it. In the playoffs defense in general is more important than in the regular season. At the same time, paint defense in particular is more important than perimeter defense relative to how important both were in the regular season.<br />
<br />
<span style="color: #ff6600;">QUALITY OF OFFENSE</span><br />
Quest for the Ring is in the process of developing innovative quality of offense performance measures. Since these are not yet finalized, and since even if they were there would be no existing data bank for them, the Quality of Offense sub rating currently uses a sophisticated performance measure that fortunately is available on the Internet: the percentage of field goals that are assisted, which should closely track our custom designed measures and serve as a rough summary of them. For the sake of efficiency, we may use this measure indefinitely if we continue to find that it reliably tracks what we are identifying. <br />
<br />
Quality of offense is the newest factor and is being introduced as of the first RTR of 2011. <br />
<br />
Numerically, quality of offense is equal to paint defense. So you can think of those two as the offensive and defensive “extra factors” which determine who wins playoff series and the Championship. Note that even though these two factors are offsetting as far as defense versus offense is concerned, the general defense factor (which remember is separate from the paint defense factor) maintains the reality that defense is a little more important than offense is in the playoffs. (There is no general offense factor to offset the general defense factor.)<br />
<br />
The rest of the discussion of this factor refers to the QFTR development project in this area. <br />
<br />
Both overall assists and who makes those assists make up this important factor.<br />
<br />
From the early days of QFTR we have innovated in developing models and formulas for quality of offense. Look for the “rule of 10” in various future reports. (The rule of 10 actually appears in previous reports but not identified by that title). In 2009 QFTR formally introduced three quality of offense measures: Playmaking Identity, Playmaking Quality, and Playmaking Power. See this reference article for a full explanation of these.<br />
<br />
Because neither the general public nor even “advanced” statistics sites recognize the concept of “Quality of offense,” this subject can be thought of as the frontier of basketball. Right now it seems to be QFTR or nothing for this area, and unfortunately, we have not so far had all the resources necessary to make huge progress in this area. To be blunt, we’ve had too many other things to do.<br />
<br />
For one thing, since no one else even recognizes the concept (let alone considers it important) we can’t rely on existing databases in any way, shape, or form for our quality of offense models and formulas. Instead, a lot of manual database construction is needed. This in turn means that, unfortunately, we have not been able to find all the time necessary for developing this area.<br />
<br />
The new factor appearing in Real Team Ratings is only a start, and more work needs to be done. But this new factor, called Quality of Offense, is a very important effort to bring quality of offense the attention it deserves.<br />
<br />
<span style="color: #ff6600;">PACE</span><br />
The fourth of the four smaller (and overall the seventh of the seven) factors is pace (or the pace adjustment). Pace is the number of possessions per game, the total for both teams combined. (You can theoretically calculate offensive and defensive possessions separately, but this would be largely meaningless and I have never seen anyone calculate or use this breakdown.)<br />
<br />
Teams that run a lot of fast breaks and/or take shots early in the 24 second shot clocks have fast paces, and vice versa.<br />
<br />
The pace adjustment is a small but valid adjustment that slightly modifies the ratings of teams according to the effects of pace on ability to win playoff games. The best pace is a little below the League average pace.<br />
<br />
Strictly speaking pace is neither an offensive nor a defensive factor; it is both. Depending on what the pace is, it can be more or less of an offensive factor and at the same time more or less of a defensive factor. In other words, how pace affects offense and defense is actually a rather complicated topic. Although the details may be complicated, we can determine which pace is the best pace for winning playoff games and the Championship.<br />
<br />
The reason for the pace adjustment is that there is a relatively small but definite correlation between slower pace and winning playoff series. It is a little more difficult, on average, for fast pace teams to win playoff series than it is for slow pace teams to win them. Therefore, a small adjustment called the pace overweight adjustment is factored in to RTR.<br />
<br />
Why exactly do teams with an average or slightly slower than average pace have a slightly easier job winning playoff series? Consider the Denver Nuggets. They are usually one of the fastest paced teams in the NBA during the regular season. If you just look at the efficiency measures, the Nuggets might appear to be almost identical to another, much slower team. But these two teams would be very different when you look at efficiency and pace together. In theory, slower paced teams can more reliably reproduce their nice regular season net efficiency in the playoffs than can faster paced teams, mostly because the playoffs feature a higher defensive intensity and aggressiveness, which automatically slows down the pace.<br />
<br />
Suppose that in the playoffs, the fast paced Nuggets and a slow paced team play. Each team had almost exactly the same offensive, defensive, and net efficiency numbers during the regular season. By playing extra hard on defense, the slow pace team can automatically slow down the game to some degree, which will disrupt the offensive (and possibly the defensive) efficiency of the Nuggets, the team that was fast pace in the regular season. In other words, there will be fewer possessions for the fast pace team in the playoff games than it typically had in the regular season. This in turn means that the fast pace team will be disrupted from what they did during the regular season to one extent or another.<br />
<br />
This means that for the fast pace team, both the offensive and the defensive efficiency could change in the playoffs from what it was in the regular season, due to all of the changes forced on the fast pace team by the change of pace. Both the offensive and the defensive efficiency might change, and each change could be either for the better or for the worse, but by far the most likely changes would be that the offense would be substantially less efficient, while the defense would not be changed much. A much less efficient offense, but about the same defense, is exactly what we have seen from the Nuggets in their numerous playoff series losses in recent years.<br />
<br />
In extreme cases, such as the fastest pace team being slowed down dramatically in the playoffs by an extremely slow team, the pace adjustment may be inadequate, so that there may still be some forecast error even after everything we have done.<br />
<br />
The bottom line is that in all known cases, faster paced teams do not do as well in the playoffs as they do in the regular season, all other things equal. If a fast paced team wants to win in the playoffs, it would be wise to do some things better in the playoffs than they did those things in the regular season, in order to compensate for being forced to operate at a slower pace.<br />
<br />
But in 2010 it was realized that pace can be too slow also. Now we know that the optimal pace is a little slower than the League average. RTR awards the highest pace adjustment to the team with the 20th fastest pace (the one with the 11th slowest pace) out of the 30 teams. The team with the 20th fastest pace has the best possible pace for winning playoff games. The further from that a team is, the lower the pace sub rating. The lowest rating possible is for the team with the fastest pace. The slowest team in the League has a moderate pace sub rating rather than the highest rating as in earlier versions of RTR.<br />
<br />
See the technical section (immediately following) for more details on how the pace factor is calculated.<br />
<br />
<span style="color: #ff6600;">=====SECTION THREE: TECHNICAL DISCUSSION OF THE SEVEN FACTORS=====</span><br />
<br />
This section takes each factor and first explains in words how that factor is calculated. And then the formulas are given.<br />
<br />
After all the factors are technically explained, the overall or primary RTR formula (that combines all the factors) is shown.<br />
<br />
The basic rationales behind the calculations are covered above in the Factor Discussion Section. But where appropriate, the more technical reasons behind calculations are included in this Technical Section.<br />
<br />
The seven factors are discussed in order of how much weight they have, on average, toward RTR. However, for many teams the order of the factors by importance would differ from the average order. Also, the weights (in other words, the importance) of the last four factors are extremely similar, so the order among the last four has very little significance.<br />
<br />
<span style="color: #ff6600;">1. NET EFFICIENCY</span><br />
Offensive efficiency minus defensive efficiency equals net efficiency. Offensive efficiency is points scored per 100 possessions. Defensive efficiency is points allowed per 100 possessions. In the RTR formula, a weight of 3.0 is applied to net efficiency. This large weight reflects how crucial this factor is and correctly calibrates this factor with the others.<br />
<br />
As an example, if a team has an offensive efficiency of 107.0 and a defensive efficiency of 104.6, net efficiency is 107.0 minus 104.6, which equals 2.4. The net efficiency factor is then three times 2.4, which is 7.2.<br />
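As a quick sketch of the arithmetic above (the 3.0 weight is from the text; the function name is ours):<br />

```python
def net_efficiency_factor(off_eff, def_eff, weight=3.0):
    """Net efficiency (offensive minus defensive efficiency) times the RTR weight."""
    return weight * (off_eff - def_eff)

# The worked example from the text: 107.0 offense, 104.6 defense
factor = net_efficiency_factor(107.0, 104.6)  # about 7.2
```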
<br />
<span style="color: #ff6600;">2. WINS OVER AND LOSSES TO PLAYOFF TEAMS</span><br />
Each team's win-loss record is accessed for games it played against the top sixteen teams and, separately, for games it played against the top ten teams. These two records are added together, which has the effect of double weighting wins and losses versus top ten teams while leaving wins and losses versus the 11th through the 16th best teams single weighted. So obviously the idea here is to look very, very closely at how well the team does against the teams that are the contenders to reach the playoffs, especially the Conference Finals and the Championship.<br />
<br />
Next the winning percentage of the combined wins and losses as just explained is calculated to three decimal places (for example, .550). Next the difference between each team’s winning percentage and a base of .333 is calculated, and then this difference is multiplied by 70. This process accurately reflects how important this factor is and correctly calibrates this factor with all of the others.<br />
<br />
As an example, for the team with a winning percentage of .550, the factor added to RTR is (.550-.333) X 70 = .217 X 70 = 15.19.<br />
<br />
Note that the base of .333 is approximately the actual threshold between playoff and non-playoff teams.<br />
<br />
Note also that the use of the winning percentage as opposed to raw wins and losses almost completely corrects for different number of games played by teams against top teams.<br />
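The double weighting and scaling described above can be sketched as follows; the function name and the example record are ours, while the .333 base and the 70 multiplier are from the text:<br />

```python
def playoff_performance_factor(w_top10, l_top10, w_11_16, l_11_16):
    """Add the top-10 record in twice (double weight), compute the winning
    percentage to three decimal places, then scale (pct - .333) by 70."""
    wins = 2 * w_top10 + w_11_16
    losses = 2 * l_top10 + l_11_16
    pct = round(wins / (wins + losses), 3)
    return (pct - 0.333) * 70

# A hypothetical record whose combined winning percentage works out to .550,
# matching the worked example above (about 15.19)
factor = playoff_performance_factor(4, 3, 3, 3)
```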
<br />
This factor, wins over and losses to playoff teams, was the key 2009 improvement over the very early versions of RTR and helped to clearly establish Real Team Ratings as the most accurate playoff predictor possible. By counting in the overall formula actual wins and losses in games between the likely playoff teams, you have gone in a straight line directly to evidence for the question we are out to answer: how good are the teams really going to be in the playoffs, according to everything known now?<br />
<br />
<span style="color: #ff6600;">3. RECENT WINS AND LOSSES</span><br />
The Real Team Ratings system was substantially improved in April 2010 with the arrival of a new factor that reflects recent performance (in about the last two months). The calculation here uses the win-loss record from the last 25 games. The rating is simply the difference between wins and losses. Although in the future this raw difference may be modified with a factor and/or the winning percentage may be used, as of this date the calibration of the other factors is such that the straight up difference between the wins and losses in the last 25 games is accurate and correct.<br />
<br />
For example, if in the last 25 games a team is 15-10, the Recent Wins and Losses factor is 5.0.<br />
<br />
The last five games of the regular season are ignored since sometimes in those games when playoff positioning is set, lineups and playing times are distorted.<br />
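A minimal sketch of this calculation, assuming game results are available in order for a full 82-game season (the list format and function name are ours):<br />

```python
def recent_record_factor(results):
    """results: list of 'W'/'L' for an 82-game season, in order.
    Use games 53 through 77 (25 games), ignoring the final five."""
    window = results[52:77]  # zero-based slice covering games 53..77
    return window.count('W') - window.count('L')
```

For a team that went 15-10 in that window, the factor is 5, matching the example above.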
<br />
<span style="color: #b45f06;"><span style="color: #ff6600;">4. GENERAL DEFENSE OVERWEIGHT ADJUSTMENT</span> </span><br />
As discussed previously this is the way that RTR reflects what most people already know, that defense is more important in the playoffs than it is in the regular season. In summary, this adjustment gives an increase or a decrease in every team's rating in accordance with how each team ranks in defensive efficiency in the NBA.<br />
<br />
Defensive efficiency is the number of points given up per 100 possessions (on average). First the teams are sorted by defensive efficiency. Then, using a range from 5.8 to -5.8, points are assigned, in equal increments of 0.4, to each team in order of how it ranks in defensive efficiency. Specifically, the team with the best defensive efficiency (fewest points allowed per 100 possessions) is given 5.8 points, the second most defensively efficient team gets 5.4 points, and the third most defensively efficient team gets 5.0 points, and so on, until the least defensively efficient team gets minus 5.8 points.<br />
<br />
Here are some example scores according to how teams rank on defensive efficiency:<br />
<br />
1st best 5.8<br />
5th best 4.2<br />
10th best 2.2<br />
15th best 0.2<br />
20th best -1.8<br />
25th best -3.8<br />
30th best -5.8<br />
<br />
Note that any team with a better defense than average gets a positive score and that any team with a worse defense than average gets a negative score.<br />
<br />
The amount of the adjustment is carefully calibrated to be sufficient without being excessive. Since for one thing almost all teams ramp up their defense in the playoffs to one extent or another (which means less relative advantage than if only the good defensive teams ramped up) you have to be careful here to avoid getting carried away and putting in adjustments that are too large.<br />
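The 5.8-to-minus-5.8 ladder described above reduces to a single expression; the function name is ours:<br />

```python
def rank_points(rank):
    """Points for a defensive-efficiency rank (1 = best, 30 = worst):
    5.8 for 1st place, decreasing by 0.4 per rank, down to -5.8 for 30th."""
    return 5.8 - (rank - 1) * 0.4
```

This reproduces the example table above: rank 1 gets 5.8, rank 15 gets 0.2, and rank 30 gets minus 5.8.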
<br />
<span style="color: #ff6600;">5. PAINT DEFENSE ADJUSTMENT</span><br />
This factor is sort of an adjustment of an adjustment, namely of the general defense over weight adjustment. In the playoffs, defense in general is more important than it is in the regular season. But the importance of paint defense goes up by more than the importance of defense in general, so that is why this is a valuable and important factor for RTR.<br />
<br />
Paint defense is number of points given up in the paint per game (on average). First the teams are sorted by paint points surrendered per game. Then, using a range from 5.8 to -5.8, points are assigned, in equal increments of 0.4, to each team in order of how it ranks in paint defending. Specifically, the team with the best paint defense (fewest points allowed in the paint per game) is given 5.8 points, the second best paint defending team gets 5.4 points, and the third best paint defending team gets 5.0 points, and so on, until the team with the worst paint defense gets minus 5.8 points.<br />
<br />
Here are some example scores according to how teams rank on paint defending:<br />
<br />
1st best 5.8<br />
5th best 4.2<br />
10th best 2.2<br />
15th best 0.2<br />
20th best -1.8<br />
25th best -3.8<br />
30th best -5.8<br />
<br />
Note that any team with a better paint defense than average gets a positive score and that any team with a worse paint defense than average gets a negative score.<br />
<br />
Note also that numerically the paint defense adjustment is equal to the general defense adjustment. This means in reality that for the playoffs, paint defense is understood to be about twice as important as defense in general.<br />
<br />
Finally, note that this factor is biased against teams with a fast pace (because their opponents have more possessions and get more points per game). This is one of the reasons why fast paced teams are at a disadvantage in the playoffs (and to a lesser extent in the regular season). It is often more difficult for fast paced teams to defend the paint. The last factor (#7 which is below) is pace. One of the reasons why that factor is kept small is that the paint defending factor already reflects it to some degree.<br />
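A sketch of how the paint defense ladder might be applied in practice, assuming a table of average paint points allowed per game (the team labels and function name are illustrative, not from the published ratings):<br />

```python
def paint_defense_points(paint_pts_allowed):
    """paint_pts_allowed: mapping of team -> average paint points allowed
    per game. Fewest allowed ranks best; assign 5.8 down in 0.4 steps."""
    ranked = sorted(paint_pts_allowed, key=paint_pts_allowed.get)
    return {team: 5.8 - i * 0.4 for i, team in enumerate(ranked)}

# Hypothetical three-team example: fewest paint points allowed scores highest
pts = paint_defense_points({"A": 38.0, "B": 44.5, "C": 41.2})
```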
<br />
<span style="color: #ff6600;">6. QUALITY OF OFFENSE ADJUSTMENT</span><br />
As discussed above, this is a uniquely innovative concept introduced and used by QFTR. It is “out on the frontier”. Here we show you how we calculate this. However, be aware that we would calculate it a little differently if resources permitted. Also be aware that future tweaking will probably slightly change this methodology.<br />
<br />
Right now we are using a measure that closely tracks, and is a rough summary of, what the future QFTR quality of offense measure(s) will be: the percentage of field goals that are assisted. For example, if on average there is an assist on six out of every ten field goals a team makes, it has a percentage of field goals assisted of .600, or 60%. <br />
<br />
Although we could calculate that ourselves, we don't even have to, since it is available on the Internet.<br />
<br />
Every year most teams will be in the normal range for the percentage of field goals that are assisted. The normal range runs from .520 or 52% to .640 or 64%. Teams that are at 60% or higher have high quality offenses that are difficult to defend in the playoffs. Teams that are below 56% have low quality offenses that are easy to defend in the playoffs. Teams that are higher than 56% but lower than 60% are in the mid-range.<br />
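The percentage and the bands above can be sketched as follows (the function names, and treating exactly 56% as mid-range, are our reading of the text):<br />

```python
def assisted_fg_pct(assisted_fg, total_fg):
    """Share of made field goals that were assisted, e.g. 6 of 10 -> 0.600."""
    return assisted_fg / total_fg

def offense_band(pct):
    """Classify per the ranges above: 60%+ high, below 56% low, otherwise mid."""
    if pct >= 0.60:
        return "high"
    if pct < 0.56:
        return "low"
    return "mid"
```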
<br />
The team with the highest percentage of field goals assisted gets 5.8 points. The team with the second highest percentage of field goals assisted gets 5.4 points. The team with the third highest percentage gets 5.0 points. Each subsequent team gets 0.4 points less until the team with the lowest percentage of field goals assisted gets minus 5.8 points.<br />
<br />
Here are some example scores according to how teams rank on quality of offense:<br />
<br />
1st best 5.8<br />
5th best 4.2<br />
10th best 2.2<br />
15th best 0.2<br />
20th best -1.8<br />
25th best -3.8<br />
30th best -5.8<br />
<br />
Here is a discussion of the nature of and the importance of this factor. <br />
<br />
In basketball in general, but especially in the playoffs, how well organized a team’s offense is matters more than many people, including many basketball coaches, think it does. It turns out that the ability of a team to fall back on tried and true offensive plays that they know like the backs of their hands is more important than the element of surprise that comes from being very unpredictable. In other words, if a team is extremely good at running the plays that make up its offensive organization, the fact that other teams know it is going to run them, and even know exactly when during a game it is going to run them, does not prevent the team from using those plays to win playoff games. If you think the element of surprise (being unpredictable) will be enough to win you a lot of playoff games, you are wrong.<br />
<br />
Number of assists is by far the most important factor showing offensive organization or lack of it. Seemingly very small differences in assists reflect big differences in how organized an offense is.<br />
<br />
Whether a team can be too organized, and how big a risk that would be, is something being carefully investigated these days. However, at this time there is no evidence at all that in real life a team can be &#8220;too organized&#8221;. What happens in real life is that when a team is playing &#8220;too organized&#8221; it automatically (and probably unconsciously) realizes this and becomes less organized. All of the risk seems to run in the other direction: teams that are not organized enough. And teams can easily lose in the playoffs, especially in the Conference Finals or the NBA Championship, because their offense is not organized enough.<br />
<br />
<span style="color: #ff6600;">7. PACE ADJUSTMENT</span><br />
As previously stated in the Discussion of Factors Section, pace is a small but important factor to include, even though it is already partly reflected in some other factors. For a complete discussion of why this factor is important for RTR, see that Section above.<br />
<br />
Pace for each team is the average number of possessions per game for that team's regular season games. The first step in the calculation of this factor is that all the team paces are obtained and then the teams are sorted by pace.<br />
<br />
Now points are awarded according to how the teams rank. The state of the art as of 2011 is that we now know that the optimal pace is a little slower than the League average. RTR awards the highest pace adjustment to the team with the 20th fastest pace (the 11th slowest) out of the 30 teams. The team with this pace has the best possible pace for winning playoff games.<br />
<br />
Teams even slower than this get progressively lower adjustments. The 25th fastest (5th slowest) team still has a decent, positive pace adjustment. Teams slower than this, though, get very little, and the slowest pace team gets a pace adjustment of about zero. In other words, the slowest pace team in the NBA has a pace that is neither an advantage nor a disadvantage in the playoffs.<br />
<br />
Teams faster than the 20th fastest in the NBA get progressively lower adjustments. The ten fastest teams are at a disadvantage in the playoffs and they appropriately get negative pace adjustments, but only the fastest six teams or so get relatively large negative adjustments. The bottom line is that fast pace teams, and especially the fastest pace teams, do not do as well in the playoffs as they do in the regular season unless they have other things going for them to make up for the disadvantage of a fast pace.<br />
<br />
Here are some example scores according to how teams rank on pace:<br />
<br />
1st fastest -3.7<br />
5th fastest -2.1<br />
10th fastest -0.1<br />
15th fastest 1.9<br />
20th fastest 3.9<br />
25th fastest 1.9<br />
30th fastest -0.1<br />
<br />
Note again that the 20th fastest team gets the highest score while the fastest team gets the lowest score.<br />
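The adjustment implied by the example scale is piecewise linear, peaking at the 20th fastest rank and falling off by 0.4 per rank in both directions. Here is a rough Python sketch that reproduces the scale (the exact breakpoints are inferred from the examples above, and the function name is mine):<br />

```python
def pace_adjustment(pace_rank, teams=30):
    """Pace adjustment from pace rank (1 = fastest of 30 teams).

    Peaks at +3.9 for the 20th fastest team, per the example scale.
    """
    if not 1 <= pace_rank <= teams:
        raise ValueError("pace_rank must be between 1 and %d" % teams)
    if pace_rank <= 20:
        # fastest (rank 1) gets -3.7, rising by 0.4 per rank to +3.9 at rank 20
        return round(-3.7 + 0.4 * (pace_rank - 1), 1)
    # slower than the optimum: falls from +3.9 back to about zero at rank 30
    return round(3.9 - 0.4 * (pace_rank - 20), 1)
```

For example, the 10th fastest team gets -0.1 while the 30th fastest (slowest) team also gets -0.1, matching the scale above.<br />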
<br />
<span style="color: #ff6600;">CALCULATION OF REAL TEAM RATINGS USING THE SEVEN FACTORS: THE PRIMARY FORMULA</span><br />
The easiest way to describe the final calculation of RTR is to give you the formula.<br />
<br />
REAL Team Rating =<br />
Net Efficiency X 5.0<br />
<br />
Plus<br />
<br />
(Winning percentage versus the top 16 and versus the top ten teams combined minus .333) X 70<br />
<br />
Plus<br />
<br />
The difference between wins and losses in the last 25 games (with the last five games of the regular season ignored)<br />
<br />
Plus<br />
<br />
The general defense overweight adjustment (from +5.8 to -5.8 according to defensive efficiency rank)<br />
<br />
Plus<br />
<br />
The paint overweight adjustment (from +5.8 to -5.8 according to points surrendered in the paint rank)<br />
<br />
Plus<br />
<br />
The quality of offense adjustment (from +5.8 to -5.8 according to the formula detailed in the technical section of this guide above)<br />
<br />
Plus<br />
<br />
The pace overweight adjustment (from +3.9 to -3.7 according to the formula detailed in the technical section of this guide above)<br />
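The primary formula can be written out directly. Here is a sketch in Python, assuming the seven inputs have already been computed as described in the sections above (the parameter names are mine):<br />

```python
def real_team_rating(net_efficiency, win_pct_vs_top, last25_wins, last25_losses,
                     defense_adj, paint_adj, offense_quality_adj, pace_adj):
    """Real Team Rating, combining the seven factors per the primary formula.

    win_pct_vs_top is the combined winning percentage versus the top-16
    and top-10 teams; the four *_adj inputs are the rank-based overweight
    adjustments described above.
    """
    return (net_efficiency * 5.0
            + (win_pct_vs_top - 0.333) * 70
            + (last25_wins - last25_losses)
            + defense_adj
            + paint_adj
            + offense_quality_adj
            + pace_adj)
```

For example, a team with a net efficiency of +5.0, a .633 winning percentage against top teams, an 18-7 record in its last 25 counted games, and adjustments of +5.8, +5.0, +4.2, and +3.9 would rate 25 + 21 + 11 + 18.9 = 75.9.<br />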
<br />
<br />
<span style="color: #b45f06;">INTRODUCTION TO SECTIONS FOUR, FIVE, AND SIX: HOW AND WHEN TO USE THE RATINGS AND THESE SECTIONS TO ACCURATELY PREDICT PLAYOFF SERIES</span><br />
To predict the playoffs you need to interpret differences in ratings. Section Four is the primary section for interpreting ratings. But there are common circumstances where the Ratings and Section Four combined will NOT be enough for you or anyone else to be able to correctly predict a series. One or more injuries is the most common circumstance we are talking about here. <br />
<br />
<span style="color: #ff6600;">SECTION FOUR DOES NOT RELIABLY APPLY WHEN THERE ARE RECENT INJURIES</span><br />
Section Four shown below fully applies only if there are no injuries that are not reflected in the Ratings. Early season injuries are mostly reflected in the Ratings. But the more recent the injury and the more important the player injured the less this injury will be reflected in Real Team Ratings. Whenever there is an injury (or trade) after about February 1 to an average or especially to an above average player, use of Real Team Ratings and Section Four alone is definitely not recommended. You can still however start with the Ratings and with Section Four and then make adjustments as discussed in Section Five and especially in Section Six. <br />
<br />
The same warning applies when players, especially above average players, have been traded away from or on to a team.<br />
<br />
In other words the warning to not rely only on the Ratings and Section Four applies whenever an average or above average player was available for much of the regular season but is not available for the playoffs. And it applies on the flip side: the warning applies whenever there is a new average or above average player available for the playoffs who was not available for much of the regular season. <br />
<br />
Whenever the warning applies you either have to quit trying to predict the series or you have to use Section Five and/or Section Six along with Section Four (and the Ratings). That is, when the warning applies, you start with the Ratings and Section Four and then you make any adjustments called for in Sections Five and Six.<br />
<br />
Besides injuries there are two other factors that can not be included in Real Team Ratings that are sometimes involved in a series and that sometimes cause an upset to occur. These three factors leading to upsets are summarized and broken down as follows: <br />
<br />
<span style="color: #ff6600;">NBA PLAYOFF SERIES UPSETS BROKEN DOWN</span><br />
--Total of all Upsets: 24.6% of all playoff games and series are upsets <br />
--Upsets Due to Injuries: 48.7% of all upsets which is 12.0% of all playoff games and series <br />
--Upsets Due to Coaching: 35.1% of all upsets which is 8.6% of all playoff games and series <br />
--Upsets Due to Players: 16.2% of all upsets which is 4.0% of all playoff games and series.<br />
<br />
Section Five of the User Guide (NOT shown below) concentrates on these three factors that cause upsets that are not now and may never ever be includable in Real Team Ratings. Section Six covers in detail how you adjust ratings for injuries, which as you can see is the biggest factor that is not included in the Ratings. There is a full adjustment procedure and a newer, quick adjustment procedure that takes just a few minutes to do. <br />
<br />
As for adjusting for coaching and player performance that is NOT already reflected in Real Team Ratings, QFTR does this in text reports since this is at the very heart of the mission of QFTR. We quantify everything that can be quantified but for some things that can not be quantified the only way we can get at them and the only way you can know about them is to read QFTR Reports. <br />
<br />
Although complete quantification of the coaching factor (and, for that matter, of the player motivation and specific performance factor) is not now and will probably never be possible, Real Coach Ratings have been developed to the point where, used in conjunction with Real Team Ratings, you can come very close to full quantification. However, there is as yet no formal, quantifiable way to combine Real Coach Ratings with Real Team Ratings, so for now any user, including QFTR itself, must use a combination system that he or she creates and thinks is reasonable. <br />
<br />
Upsets occur in about 1/4 or about 25% of all series. So the Ratings and Section Four that follows will accurately explain how to predict about 3/4 or about 75% of all playoff series. For the other 1/4 or 25%, Sections Five and Six of this User Guide are very useful and are often but not always enough for correctly predicting series. <br />
<br />
<span style="color: #ff6600;">INJURIES ARE LIKE WARNING FLAGS FOR UPSETS TO COME</span><br />
If you know for sure that NO recent injuries (recent roughly meaning during or after January) to average or above average players are affecting a particular series, it is much more likely, though still not guaranteed, that the combination of the Ratings and Section Four will correctly predict the series. If on the other hand one or more substantial injuries are involved, especially ones occurring during or after January, the Ratings and Section Four become much less useful and in general can no longer be used to correctly predict the series.<br />
<br />
The probabilities here in Section Four below are hedges for coaching factors NOT included in Real Team Ratings and for player performance in the playoffs above or below "what it should be." These probabilities in no way shape or form take into account injuries. In other words, Section Four and the probabilities in Section Four assume no injuries to average or above average players. To be more precise, they assume that every player who was available for much of the regular season is still available for the playoffs. <br />
<br />
<span style="color: orange;">======== SECTION FOUR: INTERPRETATION OF RATINGS AND PREDICTING PLAYOFF SERIES ========</span><br />
RTR can obviously be used to see exactly how well or poorly teams are set up for the NBA playoffs. Note that teams with negative RTRs are roughly the same teams that do not qualify for the playoffs. Beyond this, using RTR to predict particular playoff series is very useful. When you see playoff series turning out in accordance with RTR, you will see that RTR is valid. The best way to use RTR to predict playoff series is as follows.<br />
<br />
You start with Real Team Ratings (RTR) as reported here at QFTR and the first thing you do next is to add seven points to the ratings of the teams with home court advantage. You can stop right there and by using the Interpretation scales (just below) you will already have very good predictions for series where no major injuries are involved.<br />
<br />
Were it not for injuries, Real Team Ratings alone would correctly predict the outcome of most playoff series (about 88% of them). But if one or more significant injuries are involved, RTR alone becomes much less valuable for predicting results. If you have the time and you want to be more accurate, you need to do the full-method manual injury adjustments shown in Section Six of this Guide as needed. There is also a new shortcut manual injury adjustment which does not take much time at all. See the final section of this Guide, Section Six: Manual Injury Adjustments.<br />
<br />
After you have adjusted the RTRs for home court and for injuries, you then compare them for the two teams playing and find out what the difference is. Finally you can now use either the "quick prediction scale" just below and/or you can use the descriptions in the "detailed guide" that you will see below the quick prediction scale.<br />
<br />
<span style="color: #ff6600;">QUICK PREDICTION SCALE FOR PLAYOFF SERIES</span><br />
0 to 6.9 Complete toss-up: flip a coin<br />
7 to 13.9 Roughly 60% chance the higher team will win<br />
14 to 20.9 Roughly 70% chance the higher team will win<br />
21 to 27.9 Roughly 79% chance the higher team will win<br />
28 to 34.9 Roughly 87% chance the higher team will win<br />
35 to 41.9 Roughly 94% chance the higher team will win<br />
42 to 48.9 Roughly 97% chance the higher team will win<br />
49 to 55.9 Roughly 99% chance the higher team will win<br />
56 or more Roughly 100% chance the higher team will win<br />
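After adding the home court points, the quick prediction scale amounts to a simple threshold lookup. Here is a sketch in Python (the function name is mine):<br />

```python
def series_win_probability(rating_diff):
    """Rough chance the higher-rated team wins a series, per the quick scale.

    rating_diff is the difference in (home-court-adjusted) Real Team
    Ratings; 0-6.9 is a coin flip, 56 or more is a near certainty.
    """
    scale = [(56, 1.00), (49, 0.99), (42, 0.97), (35, 0.94),
             (28, 0.87), (21, 0.79), (14, 0.70), (7, 0.60)]
    diff = abs(rating_diff)
    for threshold, prob in scale:
        if diff >= threshold:
            return prob
    return 0.50  # complete toss-up: flip a coin
```

For example, a 10-point difference returns 0.60, i.e. roughly a 60% chance the higher team wins.<br />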
<br />
<span style="color: #ff6600;">DETAILED GUIDE TO INTERPRETATION OF DIFFERENCES BETWEEN TEAMS IN REAL TEAM RATINGS</span>In the detailed interpretation guide that follows, the word "roughly" is repeatedly used in front of the probability numbers as a reminder of the small amount of unavoidable statistical error, and to emphasize that unknown factors, including injuries (especially injuries for which no manual adjustment has been made), will in some cases result in substantially different actual probabilities.<br />
<br />
Whether or not you are doing manual injury adjustments, do not forget to add seven points to the RTRs of the teams that have home court advantage. Injury adjustments are highly recommended unless neither of the teams has significant injuries.<br />
<br />
The probability percentages in both the quick chart above and in the descriptions below are based on historical results in the NBA.<br />
<br />
<span style="color: #ff6600;">DIFFERENCE IN RATINGS IS BETWEEN 0 AND 6.9</span><br />
The series is a complete toss-up when statistical error is considered. There is a strong possibility of a 7 game series. The higher team has a 50% to 55% chance of winning, depending on what exactly the difference is. These probabilities are too low for anyone to have any confidence in using RTR to say who will win. All series of this type are decided quite simply by who plays better, by who coaches better, or both.<br />
<br />
<span style="color: #ff6600;">DIFFERENCE IN RATINGS IS BETWEEN 7.0 AND 13.9</span><br />
The series can easily go either way, although the higher team has a small edge, with between a 55% and a 65% chance of winning, depending on where in the range the difference is. There is a very substantial chance of a 7-game series. If the lower team wins, it is a small upset. Slight differences in the quality of coaching, certain players playing a little better or a little worse than they did in the regular season, or both could be responsible for an upset at this level.<br />
<br />
<span style="color: #ff6600;">DIFFERENCE IN RATINGS IS BETWEEN 14.0 AND 20.9</span><br />
The series can go either way and this type of difference gives a significant chance for a 7-game series. But the higher team has a clear edge. The higher team has between a 65% and a 75% probability of winning, depending on where in the range the difference is. If the lower team wins, it is a moderate upset. Slight differences in the quality of coaching, certain players playing a little better or a little worse than they did in the regular season, or both could be responsible for an upset at this level.<br />
<br />
<span style="color: #ff6600;">DIFFERENCE IN RATINGS IS BETWEEN 21.0 AND 27.9</span><br />
The higher team has roughly between a 75% and an 85% probability of winning, depending on where in the range the difference is. There is a chance, but only a small one, for a 7-game series. If the lower team wins, it is a fairly big upset. Coaches, certain players, or both could be responsible for an upset at this level.<br />
<br />
<span style="color: #ff6600;">DIFFERENCE IN RATINGS IS BETWEEN 28.0 AND 34.9</span><br />
The higher team has roughly between an 85% and a 93% probability of winning, depending on where in the range the difference is. In this kind of series, often the only way the lower team can win the series is by extending the series out to 7 games and then somehow winning the 7th game, thus taking the series 4 games to 3. However, it is not uncommon, assuming there is an upset in this type of series, for the lower team to so severely disrupt the favored team that the lower team upsets the higher, favored team 4 games to 2. Whichever way it does it, if the lower team does win coming in down by this amount, it should be considered a major upset. In many such cases, the coaching would have to be very wrong and/or negligent.<br />
<br />
<span style="color: #ff6600;">DIFFERENCE IN RATINGS IS BETWEEN 35.0 AND 41.9</span><br />
The higher team has roughly between a 93% and a 97% probability of winning depending on where in the range the difference is. In this kind of series, often the only way the lower team can win the series is by taking the series 7 games and winning the 7th game, thus taking the series 4 games to 3. However, there have been a tiny number of series where a team with this amount of a RTR deficit has won the series by so severely disrupting the favored team that it is able to win the series 4 games to 2. In the vast majority of such cases, the coaching for the higher team was severely wrong and/or negligent. Whether accomplished in 6 games or 7, the lower team winning despite being this far behind in RTR is extremely rare, and would be considered a very major and very surprising upset.<br />
<br />
<span style="color: #ff6600;">DIFFERENCE IN RATINGS IS BETWEEN 42.0 AND 48.9</span><br />
The higher team has roughly between a 97% and a 99% probability of winning, depending on where in the range the difference is. Obviously, an upset would be extremely rare, shocking, and historical. It would in most cases be caused substantially by incompetent and/or severely negligent coaching or by one or more major injuries. With this amount of difference, any upset would almost certainly have to be with the series going all seven games.<br />
<br />
<span style="color: #ff6600;">DIFFERENCE IN RATINGS IS BETWEEN 49.0 AND 55.9</span><br />
The higher team has a roughly 99% probability of winning the series. Obviously, an upset would be extremely rare, shocking, and historical. It would in most cases be caused substantially by incompetent and/or severely negligent coaching or by one or more major injuries. With this amount of difference, any upset would almost certainly have to be with the series going all seven games.<br />
<br />
<span style="color: #ff6600;">DIFFERENCE IN RATINGS IS 56.0 OR MORE</span>It is close to a 100% certainty that the higher team will win the series. Obviously, an upset would be extremely rare, shocking, and historical. It would in the vast majority of cases be caused substantially by incompetent and/or severely negligent coaching. With this amount of difference, any upset would almost certainly have to be with the series going all seven games.<br />
<br />
<br />
<span style="color: #ff6600;">======== SECTION FIVE: CAUTIONS ========</span><br />
<br />
Although Real Team Ratings is a state of the art system strongly believed to be the best basketball playoffs model in existence, it is not without its limitations. In this section, these imperfections are discussed along with some solutions to them.<br />
<br />
<span style="color: #ff6600;">BASE STATISTICAL ERROR</span><br />
Due to a small amount of unavoidable statistical error in RTR, there has to be about a five point difference between teams before you can start to have any confidence at all that the higher team will defeat the lower in a playoff series. The base statistical error for the final, end of season RTRs is about 3 points.<br />
<br />
Statistical error is of course greater with less data, which means that the earlier that Real Team Ratings come out during a season, the higher the base statistical error. The first RTR Report is scheduled to come out in the last week of December. The base statistical error at that point is about eight points. Aside from statistical error, of course there is the much larger fact that a lot can change between the end of December and late April that has nothing to do with statistical error.<br />
<br />
<span style="color: #ff6600;">COMPARABILITY AND EVOLUTION</span><br />
Unlike academic statistical sites that use basketball data (and may appear to the casual observer to be basketball sites) QFTR makes major changes to its formulae and models over time. Therefore, the actual numerical ratings from RTR Reports between 2007 and 2011 are not comparable. Also, the RTR ratings from 2010 on are more reliable than those from prior to 2010.<br />
<br />
On the other hand, starting with 2011 and going forward, RTR ratings are likely to be comparable from one year to the next, because it will most likely not be possible to substantially improve RTR from the evolved 2011 version of it. Eventually, once it is certain that the ratings have reached near perfection, an evaluation scale will be produced.<br />
<br />
<span style="color: #ff6600;">KEY FACTORS THAT CAN UNFORTUNATELY NOT BE COMPLETELY INCLUDED IN REAL TEAM RATINGS</span><br />
RTR can be used to approximately predict who will win playoff series. However, some factors are not included in RTR because their impact can not be known until the playoffs are played and/or because calculating those factors is extremely difficult (and has never been done by anyone anywhere). The RTR system is the best playoff prediction scheme that can be done during the regular season, but some factors that help determine playoff series still can not be included in RTR itself. To get better accuracy than base RTR, you have to know exactly what the injury situation is at the time playoff games are played. You need to know who has home court advantage. And you would also want to know how specific coaching tactics in particular playoff series will work out (which technically is not completely possible, because until the series happens you don&#8217;t know for sure what the coaches&#8217; tactics are going to be).<br />
<br />
One factor not included can sometimes be huge and can easily flip a series: injuries late in the regular season or during the playoffs. Among factors not included in RTR that often impact winning playoff games and series, recent injuries are by far the biggest. Injuries do not automatically change who wins playoff series, but unfortunately they often do.<br />
<br />
However, the great news is that the recent games factor partially but substantially accounts for injuries that occurred in the most recent two months. Injuries occurring more than two months ago were already and remain largely covered by RTR simply because the effects of long term injuries inevitably show up very well in the biggest factor: net efficiency. The remaining problem (and it can be a whopper) involves injuries that have occurred within the last few weeks.<br />
<br />
Another factor not directly included in RTR is coaching. The really good news is that coaching is reflected in every factor of RTR (some more than others) and it is believed that coaching is substantially already included in RTR as a whole. In fact, you can look at RTR as a sort of rough but important coaching guide for coaching in the NBA playoffs.<br />
<br />
However, the bad news is that there are some aspects of coaching, including ones Quest for the Ring likes to cover in Reports, which are not completely included in RTR. For example, if the coach switches lineups and/or playing times in the playoffs from what he had in the regular season (as bad playoff coaches who get nervous or upset about playoff coaching do from time to time), this would not show up in RTR. Similarly, if the coach is not a good morale booster or motivator, this would be much more of a disadvantage in the playoffs than in the regular season, but RTR would not be able to pick this up and show it in the ratings. There are any number of other playoff coaching details that can not be fully reflected in RTR.<br />
<br />
Obviously, home court advantage can not be included in RTR before it is known which team has that advantage. But it is very easy to include once you are looking at a particular series and know which team has home court advantage. See Section Four: Interpretation of Ratings (above) for details.<br />
<br />
In summary, the three main factors in basketball not fully covered by RTR are injuries, coaching, and home court advantage.<br />
<br />
Of these three, manual adjustments are available for two: injuries and home court advantage. When playoff time comes, here is how to adjust RTR for home court advantage and for the injury situation. Adjusting for home court advantage is extremely simple, but adjusting for injuries is much more complicated, and there is a separate section (#6) for it.<br />
<br />
Note that there is no manual adjustment factor for coaching factors not included in RTR. However, many QFTR Reports cover in great detail these little known factors that sometimes decide playoff series and Championships. Precise quantification of these coaching details remains elusive.<br />
<br />
<span style="color: #ff6600;">COACHING IN THE PLAYOFFS VERSUS COACHING IN THE REGULAR SEASON</span><br />
Certain coaches deploy offensive and/or defensive strategies in the regular season that do not work as well in the playoffs as they do in the regular season. A team using this kind of strategy makes the playoffs but sooner or later gets bounced in the playoffs by a team using one or more strategies rewarded the most by basketball.<br />
<br />
In other words, and more broadly, how a team is coached, including what schemes it uses on offense and defense, can have a different impact in the playoffs than it had in the regular season. This would not be picked up by RTR.<br />
<br />
The negative impact on RTR of coaching that works better in the regular season than in the playoffs is at this time believed to be between small and not so small, up to an absolute maximum of about 20 RTR points. But a 15-20 point hit would be plenty big enough to swing any close series. Coaches who coach well in the regular season but not in the playoffs will cost their teams playoff series they probably could have won, although this will not happen in every series. It will happen mostly in series where the RTR differential is between 5 and 25 points.<br />
<br />
This type of coaching will certainly be in the long run ruinous to the objective of going as far as possible in the playoffs, simply because in every playoff run any playoff team will sooner or later face teams with similar base RTR ratings. In fact, the deeper in the playoffs, the closer the ratings of the teams playing. Often, RTR differences are extremely small in the Conference Championships and in the NBA Championship.<br />
<br />
One of the primary objectives of the Quest for the Ring is to identify and explain offensive and defensive strategies that work better in the regular season than they do in the playoffs, and vice versa.<br />
<br />
Unfortunately, we don&#8217;t yet have any scheme, manual or otherwise, for quantifying coaching that is better or worse in the playoffs versus the regular season. However, we are working on it, and there is a proposal to add a factor for this to RTR itself. If and when that happens, no manual adjustment for coaching would ever be necessary.<br />
<br />
<span style="color: #ff6600;">MANUAL ADJUSTMENT FOR HOME COURT ADVANTAGE</span><br />
It is usually impossible to know who will have home court advantage in all of the round one playoff series until after the entire regular season is over.<br />
<br />
Home court advantage is estimated to be worth between 6 and 8 points. You should generally add seven points to the team that has home court advantage, although you can add as few as six or as many as eight if you know for sure that the home court advantage is much less or much more important than usual.<br />
<br />
There is an important exception. Due to the unusual format of the NBA Championship, the team with the home court advantage should receive only 4 or 5 points.<br />
<br />
<span style="color: #ff6600;">MANUAL ADJUSTMENT FOR PLAYERS UNAVAILABLE (OR PLAYING POORLY) DUE TO INJURIES</span><br />
This is by far the more important of the two manual adjustments to RTR that are needed to arrive at an almost perfect prediction of who will win playoff series. The injury adjustment can very easily be a much bigger adjustment than the home court advantage adjustment. Note that although injuries are by far the most common reason why players are not available, you can use this manual adjustment any time a player is not available for any reason.<br />
<br />
With the advent of the &#8220;most recent developments&#8221; factor (aka the &#8220;last 25 games&#8221; factor), manual injury adjustments are now easier, smaller, and more statistically valid than before. As a result, manual injury adjustments can now be highly recommended. On the other hand, they are still not especially easy or quick to do. An entire section (#6) just below is devoted to manual injury adjustments.<br />
<br />
<span style="color: #ff6600;">POSSIBLE FUTURE TWEAKS TO REAL TEAM RATINGS</span><br />
RTR will be tweaked further in the future as necessary, although we think the new 2011 version is &#8220;almost perfect&#8221;. As of 2011, RTR has already reached the point where most possible improvements would cost more than they benefit. The existing factors already cover all large and intermediate factors, and those factors encompass virtually all small factors that you could identify.<br />
<br />
Having said that, there is a proposal to (perhaps in 2012) include a small adjustment for coaching, based on the annual Real Coach Ratings, which themselves were substantially improved in 2009 and were very substantially improved again in 2010. Any new coaching adjustment will be small since coaching is already reflected in all of the other factors, but a small adjustment to reflect playoff experience and playoff performance of coaches appears to be warranted and is on the drawing board.<br />
<br />
Most of the future RTR tweaks will involve perfecting existing factors rather than introducing new ones.<br />
<br />
<br />
<span style="color: #ff6600;">=====SECTION SIX: MANUAL INJURY ADJUSTMENTS=====</span><br />
<br />
<span style="color: #ff6600;">SHORTCUT MANUAL INJURY ADJUSTMENTS</span><br />
Beginning in 2011 we present this shortcut method, which is a rough but reasonable approximation of correct manual injury adjustments. The mechanics of the shortcut method are followed by those of the regular, full method. The full method is always recommended, especially for series where the teams are close and for Conference Finals and NBA Finals.<br />
<br />
The obvious advantage of the shortcut method is that it can be done in less than five minutes, whereas the full method could take up to about half an hour. The disadvantage of the shortcut method is that it could lead to incorrect predictions of who is going to win series. The statistical error when using the shortcut method is usually small but in some cases it is not small. For very close series using the shortcut method could easily lead to the wrong team being predicted to win the series.<br />
<br />
As you might suspect, the shortcut method is rather simple.<br />
<br />
<span style="color: #ff6600;">SHORTCUT METHOD STEP ONE</span><br />
Determine who is injured and not available. You should consider all players listed as out and all those listed as doubtful as unavailable. You should consider all players listed as probable (and all those not on the injury list) as available. You will have to use your best judgment regarding players listed as questionable.<br />
<br />
<span style="color: #ff6600;">SHORTCUT METHOD STEP TWO</span><br />
Using Real Player Ratings at QFTR, determine the evaluation level of each player who is unavailable.<br />
<br />
<span style="color: #ff6600;">SHORTCUT METHOD STEP THREE</span><br />
Count unavailable players as follows:<br />
<br />
Major Historical Superstars 20<br />
Historical Superstars 16<br />
Superstar Players 13<br />
Star Players 11<br />
Very Good Players 9<br />
Major Role Players 7<br />
Good Role Players 5<br />
Satisfactory Role Players 3<br />
Marginal Role Players 1<br />
Poor Players 0<br />
Very Poor Players 0<br />
Extremely Poor Players 0<br />
<br />
<span style="color: #ff6600;">SHORTCUT METHOD STEP FOUR</span><br />
Add up the hits for all the unavailable players to get the total adjustment. Subtract this from the team's Real Team Rating to get the adjusted Real Team Rating.<br />
<br />
Follow this same shortcut process for the opponent. Now you can compare the two teams with the players not available taken into consideration.<br />
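For readers who prefer to automate this, the four shortcut steps can be sketched in a few lines of Python. This is an illustrative sketch, not an official QFTR tool; the hit values come from the table in Step Three above, and the team in the usage line is hypothetical.

```python
# Shortcut manual injury adjustment: hit values per evaluation level,
# taken from the table in Step Three.
SHORTCUT_HITS = {
    "Major Historical Superstars": 20,
    "Historical Superstars": 16,
    "Superstar Players": 13,
    "Star Players": 11,
    "Very Good Players": 9,
    "Major Role Players": 7,
    "Good Role Players": 5,
    "Satisfactory Role Players": 3,
    "Marginal Role Players": 1,
    "Poor Players": 0,
    "Very Poor Players": 0,
    "Extremely Poor Players": 0,
}

def shortcut_adjusted_rtr(rtr, unavailable_levels):
    """Subtract the summed hits for all unavailable players from the team's RTR."""
    total_hit = sum(SHORTCUT_HITS[level] for level in unavailable_levels)
    return rtr - total_hit

# Hypothetical team: RTR of 40.0, with a superstar and a good role player out.
print(shortcut_adjusted_rtr(40.0, ["Superstar Players", "Good Role Players"]))  # 22.0
```

Run the same function for the opponent's roster and compare the two adjusted RTRs, exactly as Step Four describes.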
<br />
<span style="color: #ff6600;">FULL SCALE MANUAL INJURY ADJUSTMENTS (RECOMMENDED)</span><br />
Use the following instructions to adjust RTRs of teams for situations where players are not available due to injuries (or rarely, for other reasons) in the playoffs. The best manual injury adjustments can not be done until at least a day or two before a playoff series starts. In fact, due to the big and inherent uncertainty regarding injuries, manual injury adjustments often will need to be updated during or after game one of a series. This is because one or more of the players you thought would not play have played and / or one or more players you thought would play have not played due to injuries.<br />
<br />
There are many complications involving the impact of injuries on who is going to win playoff games. I'll mention a few of them. One big complication is that the injury situation changes more rapidly than any of the other factors. Another complication is that early season injuries, even if the player never comes back, are not as bad for the playoffs as are late season injuries. Yet another complication is that there is very often conflicting information out there about just how bad different injuries are. For example, one source may say a player is probable (75-85% chance of playing) while another says the player is questionable (40-50% chance) while still another says doubtful (20-30% chance).<br />
<br />
The overall magnitude of the injury adjustment will range from zero to 40 points for most NBA playoff teams, but it is theoretically possible for there to be as much as a 75 points downward adjustment for a totally devastated team. Many first round playoff series are nothing more than injury washouts, where teams heavily damaged or devastated by injuries are basically automatically defeated.<br />
<br />
Among the most important variables regarding players who can’t play in the playoffs are:<br />
<br />
-How good are the injured players? The QFTR Real Player Rating system is a perfect way to find out.<br />
<br />
-To what extent are other players able to step up and replace the injured player or players? This depends mostly on how good the replacement(s) is or are and on how good the coaches are.<br />
<br />
-For how long was the player injured? For players who never played at all, no adjustment in base RTR at all is necessary. The more the player played during the regular season, the GREATER the adjustment necessary.<br />
<br />
Players who were injured the entire season are irrelevant, except of course they are very relevant in the hypothetical sense of how the season could have been different. Players who were injured relatively early in the regular season, in November or December, are only slightly relevant, and the loss of them would be a much smaller number of reduced RTR points than when the loss is later. Players who were injured late in the season, from mid-February to mid-April, have the most relevancy to whether playoff series can be won or lost, and the manual injury downward adjustment to RTR for them is much higher.<br />
<br />
<span style="color: #ff6600;">MECHANICS OF THE INJURY ADJUSTMENT</span><br />
The first thing to do of course is to find out which players are injured. For best results, use the <a href="http://thequestfortheringinjuries.blogspot.com/">Quest for the Ring injury page </a>to get the latest information and to review by far the most sources of injury information.<br />
<br />
<span style="color: #ff6600;">1. MANUAL INJURY ADJUSTMENT BASE</span><br />
The base or starting point is the quality of the player, as shown by his Real Player Rating (including the hidden defending adjustment). The base adjustment is the Real Player Rating of the player minus .500, times 25. For example, if the injured player has an RPR of .700, the base manual injury adjustment is (.700 - .500) X 25 = .200 X 25 = 5. As another example, if the player is a superstar and has a Real Player Rating of .950, the base manual injury adjustment is (.950 - .500) X 25 = .450 X 25 = 11.25.<br />
<br />
.500 is subtracted from the ratings because players whose ratings are below .500 are virtually worthless in the playoffs. If such players are not available but would play if they were available, it would be an advantage rather than a disadvantage not to have them.<br />
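Expressed as code, the base computation is a one-liner. A minimal Python sketch: the .500 subtraction and the 25 multiplier come from the paragraphs above, while clamping sub-.500 players to a zero adjustment is my reading of the preceding paragraph.

```python
def base_injury_adjustment(rpr):
    """Factor 1: (Real Player Rating - .500) x 25.
    Sub-.500 players are clamped to zero rather than given a negative hit."""
    return max(0.0, (rpr - 0.500) * 25)

# The two examples from the text: an RPR of .700 gives a base of 5,
# and a .950 superstar gives a base of 11.25.
print(round(base_injury_adjustment(0.700), 2))  # 5.0
print(round(base_injury_adjustment(0.950), 2))  # 11.25
```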
<br />
For each injured player we now take the base and adjust it for variables regarding the injury. There are five variables, numbered 2, 3, 4A, 4B, and 4C. (The base was numbered “1”.)<br />
<br />
<span style="color: #ff6600;">2. STATUS (PROBABILITY PLAYER WILL PLAY) ADJUSTMENT</span><br />
This adjustment is for manual injury adjustments to RTR when it is uncertain whether the player will be able to play in the game or not. Unfortunately uncertainty is the norm, not the exception.<br />
<br />
Also unfortunately, sources of injury information sometimes conflict. When they do, you have to use your judgment as to which source is most correct, or else you can average out the designations.<br />
<br />
The following tells you what to multiply the base manual injury adjustment by, based on the injury designation being reported.<br />
<br />
Probable (There is about an 80% chance the player will play): multiply the base by .30<br />
<br />
Game Time Decision (There is about a 60% chance the player will play): multiply the base by .55<br />
<br />
Questionable (There is about a 45% chance the player will play): multiply the base by .75<br />
<br />
Doubtful (There is about a 25% chance the player will play): multiply the base by .90<br />
<br />
Out (There is about a 0% chance the player will play): multiply the base by 1.0<br />
<br />
The status designations can be used not only as probabilities players will play but as rough but valid approximations of the severity of injuries, which in turn reflects the impact on the playoff series even if the player plays. Players who play slightly injured are seldom if ever going to be as good as they were with no injury at all. Therefore, the above adjustment factors not only reflect the probabilities the player will play but also the reality that even if the player plays, the team will be harmed by the injury situation.<br />
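In code, this step is a simple lookup table. A Python sketch with the multipliers from the list above; the lowercase status strings are my own labels.

```python
# Factor 2: status multipliers, keyed by the reported injury designation.
STATUS_FACTOR = {
    "probable": 0.30,
    "game time decision": 0.55,
    "questionable": 0.75,
    "doubtful": 0.90,
    "out": 1.00,
}

def apply_status(base, status):
    """Scale the base adjustment by the probability-of-playing multiplier."""
    return base * STATUS_FACTOR[status]

# A 12.6-point base for a questionable player: 12.6 x .75 = 9.45,
# which the Guide rounds to 9.5 (see the Boozer example later on).
hit = apply_status(12.6, "questionable")
```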
<br />
<span style="color: #ff6600;">3. WHEN IN THE SEASON THE PLAYER WAS LOST</span><br />
Find out when the player became injured by checking game logs which are part of most statistical data sets for NBA players at most major sites including ESPN.<br />
<br />
If the player has been unavailable on an on-and-off basis, assume the player was not available for the entire range of time, unless he was available for at least 75% of the games within the range, in which case use the most recent date he became unavailable.<br />
<br />
Use the following factors:<br />
November .10<br />
December .30<br />
January .50<br />
February .70<br />
March .90<br />
April 1.0<br />
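As a lookup table in Python (values taken directly from the list above):

```python
# Factor 3: when-in-the-season factors, keyed by the month the player was lost.
MONTH_FACTOR = {
    "November": 0.10,
    "December": 0.30,
    "January": 0.50,
    "February": 0.70,
    "March": 0.90,
    "April": 1.00,
}

# A player lost in March keeps 90% of his injury hit; a November loss
# barely matters by playoff time.
print(MONTH_FACTOR["March"])  # 0.9
```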
<br />
<span style="color: #ff6600;">4. IMPORTANCE OF PLAYER TO THE TEAM</span><br />
We actually have three separate adjustments which together show the importance of the player to the team.<br />
<span style="color: #cc6600;"></span><br />
<span style="color: #ff6600;">4A MINUTES PER GAME OF THE INJURED PLAYER</span><br />
At ESPN or another good site, find the minutes for the player for the current season. Be careful not to use minutes per game from any other season. Use the following adjustment factors:<br />
<br />
30 mpg and more: 1.0<br />
27 to 29.9: .9<br />
24 to 26.9: .8<br />
21 to 23.9: .7<br />
18 to 20.9: .6<br />
15 to 17.9: .5<br />
12 to 14.9: .4<br />
9 to 11.9: .3<br />
6 to 8.9: .2<br />
3 to 5.9: .1<br />
Less than 3: 0<br />
<br />
This factor indirectly gets at the extent to which other players can make up for the player who is not available due to injury.<br />
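Because the table steps down by .1 for every 3-minute band, factor 4A collapses to a one-line formula. This Python sketch is my own closed form of the table above; it assumes the 3-to-5.9 row is .1, continuing the pattern of the other rows.

```python
def mpg_factor(mpg):
    """Factor 4A: .1 per 3-minute band starting at 3 mpg, capped at 1.0
    for 30 mpg and more, and 0 below 3 mpg."""
    return min(int(mpg // 3), 10) / 10

print(mpg_factor(34.5))  # 1.0
print(mpg_factor(29.0))  # 0.9
print(mpg_factor(2.0))   # 0.0
```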
<br />
<span style="color: #ff6600;">4B OVERALL DEPTH OF THE TEAM</span><br />
Go to the latest Real Player Ratings Report for the team. Such Reports are posted at The Quest for the Ring for most or all playoff teams in late March or early April. Near the very beginning of such Reports you will see all the key players listed by category. Count the number of players according to category as follows, but DO NOT COUNT any players who are not available due to injuries or for any other reason.<br />
<br />
Specifically, for purposes of this factor:<br />
<br />
-Players listed as out should not be counted<br />
-Players listed as doubtful should not be counted<br />
-Players listed as questionable should be counted at 1/2<br />
-Players listed as game time decision should be counted<br />
-Players listed as probable should be counted<br />
<br />
Note that in some cases you will be counting players as available even though you are calculating an injury hit on the team for them. This is paradoxical in the narrow sense but is part of a valid overall calculation.<br />
<br />
Here are the team depth count factors:<br />
<br />
Major Historical Superstars: Multiply the number of them by 10.<br />
Historical Superstars: Multiply the number of them by 8.5.<br />
Superstars: Multiply the number of them by 7.<br />
Stars: Multiply the number of them by 6.<br />
Very Good / Solid Starters: Multiply the number of them by 5<br />
Major Role Players / Good Enough to start: Multiply the number of them by 4<br />
<br />
Add it all up and then apply the following factors to the manual injury adjustment base:<br />
<br />
50 and more: 0<br />
49: .1<br />
48: .2<br />
47: .3<br />
46: .4<br />
45: .5<br />
44: .6<br />
43: .7<br />
42: .8<br />
41: .9<br />
40 and less: 1.0<br />
<br />
What this means is that if a team is so loaded that its remaining, available players add up to 50 or more points, then it can completely make up for the injured player. If the sum of the remaining players is 40 or less, the team most likely can not make up for the injury at all.<br />
<br />
In practice you will find that this test will often spit out the 1.0 factor since, unfortunately, few teams have enough good and great players to make an injury even partially irrelevant.<br />
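The 4B bookkeeping is easy to script. An illustrative Python sketch: the weights and factor table come from above, while the category and status labels are my own shorthand.

```python
# Factor 4B: depth-count weights per player category, from the table above.
DEPTH_POINTS = {
    "Major Historical Superstar": 10.0,
    "Historical Superstar": 8.5,
    "Superstar": 7.0,
    "Star": 6.0,
    "Very Good / Solid Starter": 5.0,
    "Major Role Player / Good Enough to Start": 4.0,
}

# Availability weights: out and doubtful players don't count, questionable
# players count as half, everyone else counts fully.
AVAILABILITY = {"out": 0.0, "doubtful": 0.0, "questionable": 0.5,
                "game time decision": 1.0, "probable": 1.0, "healthy": 1.0}

def depth_count(roster):
    """roster is a list of (category, status) pairs for the key players."""
    return sum(DEPTH_POINTS[cat] * AVAILABILITY[status] for cat, status in roster)

def depth_factor(count):
    """Map the depth count to the 4B factor: 1.0 at 40 points or less,
    0 at 50 or more, linear in between."""
    return min(1.0, max(0.0, (50 - count) / 10))

# The 2010 Jazz roster from the example later in this Guide: five available
# players worth 31 points, plus a questionable Historical Superstar at half weight.
jazz = depth_count([("Major Historical Superstar", "healthy"),
                    ("Historical Superstar", "questionable"),
                    ("Star", "healthy"), ("Star", "healthy"),
                    ("Very Good / Solid Starter", "healthy"),
                    ("Major Role Player / Good Enough to Start", "healthy")])
print(jazz, depth_factor(jazz))  # 35.25 1.0
```

The Guide's example rounds the half-weight superstar to 4.3 and gets 35.3; either way the count is below 40, so the factor is 1.0.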
<br />
<span style="color: #ff6600;">4C POSITION SHORTAGES</span><br />
This factor is unique in that it can result in an increase rather than a decrease in the base manual injury adjustment factor. If you don’t know already, find out which position the injured player plays. Then check the depth chart for the team at ESPN or perhaps CBS Sports or Yahoo Sports. Find out how many available players there are at the injured player’s position.<br />
<br />
The minimum reasonable number of players for each position for a completely healthy team is two and the maximum is four. A team impacted by one or more injuries at a position will have between zero and three players at the position following the injury. Use the following factors:<br />
<br />
3 Players Still Available at the Position: .8<br />
2 Players Still Available at the Position: 1.0<br />
1 Player Still Available at the Position: 1.2<br />
0 Players Still Available at the Position: 1.5<br />
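And the last lookup table, sketched in Python from the 4C list above; note that this is the only factor that can exceed 1.0.

```python
# Factor 4C: position-shortage multipliers, keyed by how many players remain
# available at the injured player's position.
POSITION_FACTOR = {3: 0.8, 2: 1.0, 1: 1.2, 0: 1.5}

def apply_position(adjustment, players_left):
    """Scale the running adjustment by the position-shortage factor."""
    return adjustment * POSITION_FACTOR[players_left]

# One remaining power forward, as in the Boozer example below:
# 9.5 x 1.2 = 11.4.
hit = apply_position(9.5, 1)
```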
<br />
<span style="color: #ff6600;">AN EXAMPLE: THE 2010 UTAH JAZZ</span><br />
OK, now let's consider an example to see exactly how this manual injury adjustment works.<br />
<br />
<span style="color: #ff6600;">EXAMPLE STEP ONE</span><br />
Find out who is and who may be injured.<br />
<br />
Carlos Boozer and Andrei Kirilenko, the second and third best players on the Utah Jazz (who are playing the Denver Nuggets in the first round), are affected by injuries. Mehmet Okur may possibly be affected as well. According to two well regarded sources, this was the situation the day before the playoff series began (April 16, 2010):<br />
<br />
-Carlos Boozer, power forward, is questionable for Saturday's game against Denver due to a strained right oblique/rib cage.<br />
<br />
-Andrei Kirilenko, small forward, will miss at least the first round of the playoffs due to a strained left calf.<br />
<br />
-Mehmet Okur, center, is probable with a strained left Achilles tendon.<br />
<br />
In all of the calculations that follow, we round to the nearest tenth of a point; there is very little need to be more exact than that.<br />
<br />
<span style="color: #ff6600;">EXAMPLE STEP TWO</span><br />
Obtain Real Player Ratings from QFTR for the injured players. Hopefully QFTR has published the current year ratings for the players you are investigating. If not, you can use last year’s ratings which in most cases are good approximations of this year’s.<br />
<br />
<span style="color: #ff6600;">EXAMPLE STEP THREE</span><br />
Compute the base manual injury adjustments in accordance with (1) above:<br />
<br />
Boozer: (1.005 - .500) X 25 = .505 X 25 = 12.6<br />
Kirilenko: (.970 - .500) X 25 = .470 X 25 = 11.8<br />
Okur: (.806 - .500) X 25 = .306 X 25 = 7.7<br />
<br />
<span style="color: #ff6600;">EXAMPLE STEP FOUR</span><br />
Adjust for the status (probability the player will play) factor in accordance with (2) above:<br />
<br />
Boozer is “questionable” so the factor to use is .75:<br />
12.6 X .75 = 9.5<br />
<br />
Kirilenko is “out” so the factor to use is 1.0:<br />
11.8 X 1.0 = 11.8<br />
<br />
Okur is “probable” so the factor to use is .3:<br />
7.7 X .3 = 2.3<br />
<br />
Note that although Okur is actually very likely to play, the Jazz will be at least slightly harmed by his minor injury whether or not he plays, so the small hit they will take on their Real Team Rating due to the minor injury for Okur is justified.<br />
<br />
<span style="color: #ff6600;">EXAMPLE STEP FIVE</span><br />
Using the method described at (3) above, find out when in the season the player was lost (or mostly lost).<br />
<br />
The Boozer situation just developed in April; the factor for April is 1.0, so the Boozer number remains 9.5.<br />
<br />
The Kirilenko situation developed in March and the factor for March is .90. So for Kirilenko:<br />
<br />
11.8 X .9 = 10.6<br />
<br />
The Okur situation just developed in April and the factor for April is 1.0. So the Okur number remains 2.3.<br />
<br />
<span style="color: #ff6600;">EXAMPLE STEP SIX</span><br />
Adjust for minutes per game of each player affected by injuries as shown in (4A) above.<br />
<br />
Boozer’s minutes per game are 34.5 and the factor to use is 1.0 so Boozer’s number remains 9.5.<br />
<br />
Kirilenko’s minutes per game are 29 and the factor to use is .9:<br />
10.6 X .9 = 9.5<br />
<br />
Okur’s minutes per game are 29.4 and the factor to use is .9:<br />
2.3 X .9 = 2.1<br />
<br />
<span style="color: #ff6600;">EXAMPLE STEP SEVEN</span><br />
Find the overall depth of the team not counting injured players.<br />
<br />
Following the rules described (at 4B) above, Kirilenko is removed from the roster and we are left with:<br />
<br />
-Deron Williams: major historical superstar, worth 10 points<br />
-Carlos Boozer: historical superstar, worth 8.5 points<br />
-Kyle Korver: star, worth 6 points<br />
-Paul Millsap: star, worth 6 points<br />
-Mehmet Okur: very good / solid starter, worth 5 points<br />
-Ronnie Price: major role player / good enough to start, worth 4 points<br />
<br />
Williams, Korver, Millsap, Okur, and Price are all available and they total 31 points. Boozer is questionable and he is a historical superstar, so he counts as 1/2 X 8.5 = 4.3. The Jazz depth count is therefore 35.3. According to the table above, the factor to use (for all three of the Jazz players with injury situations) is 1.0, so the numbers of all three carry forward from the preceding step: Boozer: 9.5, Kirilenko: 9.5 and Okur: 2.1.<br />
<br />
<span style="color: #ff6600;">EXAMPLE STEP EIGHT</span><br />
Check for position shortages as shown in (4C) above:<br />
<br />
Boozer is a power forward and without him the Jazz have just one power forward so the factor to use is 1.2:<br />
<br />
9.5 X 1.2 = 11.4<br />
<br />
Kirilenko is a small forward and without him the Jazz have just one small forward so the factor to use is 1.2:<br />
<br />
9.5 X 1.2 = 11.4<br />
<br />
Okur is a center and without him the Jazz have two centers so the factor to use is 1.0:<br />
<br />
2.1 X 1.0 = 2.1<br />
<br />
<span style="color: #ff6600;">EXAMPLE STEP NINE</span><br />
Add up the manual injury adjustments for the three Jazz players:<br />
<br />
11.4 + 11.4 + 2.1 = 24.9<br />
<br />
<span style="color: #ff6600;">EXAMPLE STEP TEN</span><br />
Subtract the manual injury adjustment from the Jazz’ Real Team Rating to get the RTR adjusted for injuries:<br />
<br />
39.6 – 24.9 = 14.7.<br />
<br />
So the Jazz Real Team Rating once injuries are accounted for is 14.7. Then if you do the same thing for the Nuggets, you can compare the two and find out who is probably going to win this series and what the probability is. Then in turn you can evaluate how well the teams do in the series given the situation. You can for example find out how much of an upset it would be if the Jazz beat the Nuggets (assuming their injuries make them underdogs as is apparently the case).<br />
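To tie the ten steps together, here is the whole Jazz calculation as one illustrative Python script. It is a sketch, not an official QFTR tool; it rounds each player's number once at the end rather than after every step as the Guide does, which happens to reproduce the same final numbers (11.4, 11.4, and 2.1, a 24.9-point hit, and an adjusted RTR of 14.7).

```python
from decimal import Decimal, ROUND_HALF_UP

# Lookup tables from factors 2, 3 and 4C of this Guide.
STATUS = {"probable": 0.30, "game time decision": 0.55,
          "questionable": 0.75, "doubtful": 0.90, "out": 1.00}
MONTH = {"November": 0.10, "December": 0.30, "January": 0.50,
         "February": 0.70, "March": 0.90, "April": 1.00}
POSITION = {3: 0.8, 2: 1.0, 1: 1.2, 0: 1.5}

def round1(x):
    """Round to the nearest tenth, half away from zero, as the Guide does."""
    return float(Decimal(str(x)).quantize(Decimal("0.1"), rounding=ROUND_HALF_UP))

def mpg_factor(mpg):
    """Factor 4A: .1 per 3-minute band from 3 mpg up, capped at 1.0."""
    return min(int(mpg // 3), 10) / 10

def injury_hit(rpr, status, month, mpg, depth_factor, players_left):
    """Chain factors 1, 2, 3, 4A, 4B and 4C for one injured player."""
    base = (rpr - 0.500) * 25
    return round1(base * STATUS[status] * MONTH[month] * mpg_factor(mpg)
                  * depth_factor * POSITION[players_left])

# The three 2010 Jazz players; the 4B depth factor is 1.0 for all of them.
boozer    = injury_hit(1.005, "questionable", "April", 34.5, 1.0, 1)
kirilenko = injury_hit(0.970, "out",          "March", 29.0, 1.0, 1)
okur      = injury_hit(0.806, "probable",     "April", 29.4, 1.0, 2)

total = round1(boozer + kirilenko + okur)
print(boozer, kirilenko, okur)  # 11.4 11.4 2.1
print(total)                    # 24.9
print(round1(39.6 - total))     # 14.7
```

Repeat the same script with the Nuggets' injured players to get their adjusted RTR, and the series comparison falls right out.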
<br />
As you can see, the manual injury adjustment is not a quick or easy thing. But once you have done a few you can almost always do them for a team in less than thirty minutes and usually you can do them for a team in less than 20 minutes.<br /><br />User Guide for the Real Player Rating Calculators on Quest for the Ring Toolbox, September 2010 (posted September 5, 2010)<br /><br />Welcome. Exactly how good players are does not have to be a mystery anymore! <a href="http://www.thequestfortheringtoolbox.blogspot.com/">The Quest for the Ring Toolbox</a> is the only known place on the Internet where anyone can find out almost exactly how good basketball players are. The Toolbox enables you to rate players by entering game or season performance measurements. The most important rating calculated is called the Real Player Rating (RPR). That and the three other ratings that you can calculate on Toolbox are explained in detail at the User Guide for Real Player Ratings on the <a href="http://nuggets1reference.blogspot.com/">Quest Reference page.</a> That Guide and almost all User Guides are periodically updated and only the latest versions are kept. Look for and find the latest version there.<br /><br />The Toolbox is ahead of its time in many ways, in part because, as of 2010, even most document sites on the Internet including Google Documents do not allow for interactive spreadsheets to be placed on Internet pages. But we found a site which does provide that capability. Due to how tricky it is to use that source and for other reasons, it took a while to get all of this perfected. For a while there was an error on Toolbox we were blissfully unaware of. 
But as of summer 2010 we are sure we have finally achieved the capability to provide full spreadsheet interactivity on web pages and we are sure Toolbox is working perfectly.<br /><br />Most of what you can do with any Excel file you can do on the calculator that appears in the embedded Excel at the Quest for the Ring Toolbox site. For example and to the point, you can quickly calculate player ratings right on the Quest Toolbox Web page.<br /><br />As of January 2010 there were two calculators on Toolbox which are almost identical. One of them included the Hidden Defending Adjustment (HDA) and the other one did not. As of September 2010 having two was regarded as more confusing than it was worth and so now there is just one Real Player Rating (RPR) calculator on Toolbox.<br /><br />Real Player Ratings with no HDAs are relatively crude (but still valuable). HDA makes RPRs state of the art and world class. HDA requires a large separate section in this User Guide. The HDA section will follow the following section, which gives you basic instructions on how to use the calculator on <a href="http://www.thequestfortheringtoolbox.blogspot.com/">the Toolbox</a>.<br /><br /><span style="color:#ff6600;">========== BASIC TOOLBOX CALCULATOR INSTRUCTIONS ========== </span><br /><span style="color:#ff6600;"></span><br /><span style="color:#ff6600;">SIMPLY REFRESH THE PAGE TO START OVER</span><br />Sometimes with Excel, the mouse will "do something" unintended and will foul up a cell. It’s as if you made a mistake even though you really didn’t make a mistake. Sometimes in other words you may lose control over what the Excel worksheet is doing. If you can not correct the malfunction any other way, you can refresh the entire Toolbox page and start over. 
So let’s start by saying that if you ever make a mistake and you don't know how to reverse what you did using Excel, you can simply refresh the entire Toolbox page with your browser and start over.<br /><br />How to use Excel at a high level is beyond the scope of this Guide. But even if you know nothing about Excel, you should be able to nevertheless calculate Real Player Ratings and the associated measures using the Toolbox page. You definitely do not need to know much of anything about Excel to be able to calculate Real Player Ratings using the Toolbox Internet page.<br /><br />On the other hand, if you are well versed in Excel, you can make changes because the spreadsheet is fully (or almost fully) interactive. Specifically for example, you can change the formula used for calculating Real Player Ratings to one you for whatever reason think is more appropriate.<br /><br /><span style="color:#ff6600;">HOW TO USE THE REAL PLAYER RATING CALCULATOR</span><br />IMPORTANT FIRST STEP: Before you start entering points, rebounds, and so on, you must click "click to edit" at the very top of the calculator (which is a spreadsheet.) The spreadsheet will not be interactive until you click this.<br /><br />You need the items shown on the calculator to find out what the Real Player Rating is for one or more players for a single game or for multiple games. Specifically, you need:<br /><br />-Minutes<br />-Points<br />-3-Point Shots Made<br />-3-Point Shots Attempted<br />-2-Point Shots Made<br />-2-Point Shots Attempted<br />-Free Throws Made<br />-Free Throws Attempted<br />-Offensive Rebounds<br />-Defensive Rebounds<br />-Assists<br />-Steals<br />-Blocks<br />-Turnovers<br />-Personal Fouls<br />-Hidden Defending Adjustment (HDA)<br /><br />The last item, HDA, is very recommended but not required. How to enter Hidden Defending is explained in great detail shortly. 
If you are skipping HDA then simply leave the cells for it blank.<br /><br />Simply enter all of the above items in any order you wish to enter them in the cells. When calculating RPR for multiple games you enter the combined totals for all games for each item. When calculating RPR for a single game you enter the counts for that single game for each item and for each player.<br /><br />If you make a mistake in any of the item cells, simply click the cell and then click delete and enter the correct or revised data.<br /><br />Type the first name initial and the last name of the player(s) you are rating just above where you enter the counts, where it says "Name of Player >>>>>". Very long names will not entirely fit in the cell but presumably you will know who it is from just most of the name.<br /><br />Below where you enter the items you see the performance measures starting with Real Player Rating itself. Stay clear of this area with your mouse, do not click any of these cells, and definitely DO NOT enter anything into any of the cells corresponding to these performance measures. These cells are formatted to show you the ratings based on what you enter in the items above them. The whole point of this tool is that it will calculate these things for you based on the counts for the basic basketball actions entered. If you enter anything in any of the four performance measure rows, the spreadsheet will no longer calculate that item in that cell and you then might have to refresh Toolbox and start all over. 
At the least, you will have “lost” that column.<br /><br />When all the items above have been entered for all players the following will be automatically calculated for you:<br /><br />-Real Player Rating<br />-Real Player Production<br />-Offensive Sub Rating<br />-Defensive Sub Rating<br /><br />Complete explanations of these four ratings are at the User Guide for Real Player Ratings on the <a href="http://nuggets1reference.blogspot.com/">Quest for the Ring Reference Page</a>.<br /><br />The calculator on Toolbox is set up to allow for as many as twenty players to be calculated at a time.<br /><br />High level evaluation of ratings requires knowledge and experience. See the evaluation section in this Guide, which is one of the later sections below, and you may also want to see the evaluation section of the overall User Guide to Real Player Ratings.<br /><br /><span style="color:#ff6600;">YOU CAN USE THE CALCULATOR FOR ANY TIME FRAME YOU NEED</span><br />Provided you have the correct statistics, you can look at a player's performance for an individual game, for his or her entire career, or for anything in between, such as a season.<br /><br /><span style="color:#ff6600;">YOU CAN USE THE CALCULATOR TO COMPARE TEAMS</span><br />You can also use the tool to rate and compare entire teams, simply by using the combined measures for all the players. Suppose you have two teams in a League that were considered extremely close, and they play in the Championship, and the Championship is decided in overtime. In such a case you might not be convinced that the team that won the Championship was really the better team. 
To investigate, you could compare the team RPRs of the two teams to try to get at which was really and truly the better team.<br /><br />One interesting idea for Team RPR is to use combined team RPR (the sum of the player RPRs) to compare the same team from one year to another, which would go a long way towards answering a question that everyone asks all the time but that often no one ever has a very good answer for: which team was better: last year's or this year's?<br /><br /><span style="color:#ff6600;">CUSTOMIZED RATING</span><br />What if you have a formula you want to use instead of ours? If you know Excel well you can simply change the formula in the interactive spreadsheet. Or, you can request a customized calculator by emailing thequestforthering1 at Gmail.<br /><br /><span style="color:#ff6600;">========== THE HIDDEN DEFENDING ADJUSTMENT ==========</span><br />The following instructions are for how you supply a Hidden Defending Adjustment (HDA) so that you will have an overall rating very close to perfect. If you are opting to skip the HDA, though, you can simply leave the cell(s) where the HDA is supposed to go blank.<br /><br />HDA is basically what is left out from the everyday scorekeeper counts of points, assists, blocks and so on. Unfortunately what is left out by scorekeepers is very important. Scorekeepers can not possibly calculate HDA during a game so you can not blame this situation on them or on those who manage them or on the League commissioner or on anyone else.<br /><br />For its regular NBA coverage, Quest for the Ring (QFTR) uses a multi-step, statistically valid process to fairly and competitively rate NBA players on their “hidden defending,” which is all of the actions NOT recorded by scorekeepers that succeed at preventing scores by the opponent. 
Here are many of the things that HDA measures:<br /><br />--effective man to man defending<br />--effective rotation / switching on defense, especially off screens and picks<br />--effective pick and roll defense<br />--effective defensive recognition<br />--quickness of defensive reaction<br />--energy and hustle on defense<br />--effective taking of charges (causing a driving offensive player to be called for an offensive foul)<br />--effective hustling after loose balls<br />--effective calling of time-outs, for example, to avoid a jump ball being called<br /><br />These things would be counted by scorekeepers if it were possible. Not only can these things individually not be counted exactly, but also there is in general no way to know exactly how many shots a defender has changed from being a score to a miss. But you can indirectly and relatively find out and we have a way to do that.<br /><br />In this Guide we are giving you many but not all details about HDA. See the HDA section of the User Guide to Real Player Ratings for full details about the HDA and about RPRs.<br /><br /><span style="color:#ff6600;">TO USE OR NOT TO USE THE HDA, THAT WAS AND IS THE QUESTION<br /></span>Back in 2008 and 2009, the accepted doctrine was that HDAs would be used only when more than 300 minutes of playing time data was available. This is because HDA uses the basic sampling theory of statistics and a 300 minute sample is the minimum needed for high statistical validity.<br /><br />For more about exactly how HDA is calculated, see the full Real Player Rating User Guide on the Quest for the Ring Reference page.<br /><br />Since 300 minutes obviously covers multiple games, the conception was that HDA would be associated with and also be mandatory for partial or full season RPR calculations. 
Therefore, RPR with HDA included could not be calculated for full teams until roughly mid January because it would take until then before all of the main reserves had played 300 minutes or more. RPR for single games (and technically for whenever less than 300 minutes of playing time data is available) would be without HDA. RPR without HDA is generally called Base RPR.<br /><br />The problem is that Base RPR may be a valuable thing but it is not quite an extremely valuable thing. HDA on average constitutes about one fifth of a player’s RPR. For the defensive specialists, HDA can constitute as much as two fifths (40 percent) of the RPR. So HDA is so important that leaving it out makes reporting RPRs for playoff games limited in value. In general, without the HDA included, Real Player Ratings are not a complete and totally accurate representation of basketball players.<br /><br />QFTR is striving to make every single Report we do very or extremely high value so we decided in the spring of 2010 to somehow bring HDA into RPR calculations for single NBA playoff games. This is not yet accepted procedure for regular season single games; for them Base RPR is still the by the book way. For regular season games we will probably be supplementing Base RPR with a separate reporting of players’ HDAs from the prior or possibly the current season.<br /><br />But for NBA playoff games the "HDA doctrine" was modified as of Spring 2010. It was decided that HDA would be included in RPR calculations for single playoff games.<br /><br />But how did we do it, given that by our own admission HDA can not be validly calculated for a single game? (In fact, not only can it not validly be calculated but calculating it at all for a single game apparently requires a large investment of time at a little known advanced basketball site and we are not totally sure it can be done at all.)<br /><br />We had to compromise so we did. 
For the NBA playoff games, we decided to use HDAs from the full regular season just prior, which are of course statistically valid for that season.<br /><br />In most cases, the value of a player's defending in the playoffs will be close to the value of his defending in the regular season. But not in all cases, so unfortunately in some cases a player's RPR for a playoff game will be either too high or too low. There will sometimes be players who do not defend quite as well in the playoffs as they did in the regular season, and there will sometimes be players who defend a little better in the playoffs than they did in the regular season. Worse still (and I say worse because the magnitude of this problem will often be greater than the other problem I just mentioned), in a particular game a player might defend much worse, or much better, than he did on average in the regular season.<br /><br />So in summary there are two problems with transferring regular season HDAs to playoff game RPRs. The first problem is that players will sometimes in general and overall be better or worse in the playoffs compared with the regular season. The other problem is that in individual games players will sometimes be much better or much worse defensively than they were on average in the regular season. Therefore, including HDAs in playoff RPRs is controversial.<br /><br />However, not including HDA at all is worse than including it knowing that in some cases it is inaccurate. If you don't include it at all then obviously all the good defensive players come up looking worse (less valuable) than they are, and vice versa. Also, it needs to be noted that HDA is only about one fifth of the average player's overall RPR, so if it is wrong for a particular game it is not going to mean that the overall RPR is wildly inaccurate. In general it would be very rare for the RPR to be distorted up or down by more than .100. 
For comparison, the average RPR is about .700.<br /><br /><span style="color:#ff6600;">YOU ALMOST CERTAINLY CAN NOT DO HDA THE WAY WE DO</span><br />That extended excerpt from the full User Guide for Real Player Ratings was provided mainly to impress on you the importance of the HDA. Exactly how we validly calculate HDA for NBA teams is explained in that full Guide.<br /><br />Unfortunately the method we use for the NBA can not be used by you because most likely the data needed is not available to you. The data needed is how many points opponents score while the player is on the court, over at least 300 minutes of playing time. There is no known place to find this data for any League other than the NBA. And we are lucky, actually, to have the needed data for the NBA. The needed data is only available from 2004-05 on.<br /><br />Even that data is not enough because then you would also have to be able to translate that data into a valid HDA. QFTR uses several sophisticated Excel worksheets which contain numerous formulae to do this. This is very high technology and is not as yet completely explained in total detail even in the full Guide. The bottom line of this discussion is that you need HDAs to make your Toolbox calculations high value, but it is completely unrealistic to think that you can calculate HDAs the way we do it for the NBA.<br /><br />But does that mean you should get out the white flag? No it does not. Just as QFTR compromised a little when it started including HDA in single playoff games, we are going to instruct you to compromise a little statistical validity so that you can have high statistical value. 
We are instructing you to not let the perfect be the enemy of the good.<br /><br /><span style="color:#ff6600;">HOW TO CORRECTLY ESTIMATE HIDDEN DEFENDING ADJUSTMENTS</span><br />First let's look at the actual final product of the QFTR HDA, and eventually we will end up giving you exact instructions on how you can include HDAs in your calculations.<br /><br />The Quest for the Ring Hidden Defending Rating has a scale running from 0 to .330. The ratings more or less follow a "bell curve" statistically. The vast majority of NBA players have ratings between .030 and .290. Only about the top 1% of all defenders have hidden defending ratings higher than .300. Only about the bottom 1% of all defenders have hidden defending ratings lower than .020. At least 95% (19 out of 20) of basketball players have hidden defending ratings between about .040 and .275. The average hidden defending rating is about .140, which is about 20% of the average overall RPR, which is about .700.<br /><br />In order to incorporate hidden defending into Real Player Ratings (and into defensive sub ratings) you should use your knowledge of how well the player stops scores using hidden defending actions, which include the following:<br /><br />--effective man to man defending<br />--effective rotation / switching on defense, especially off screens and picks<br />--effective pick and roll defense<br />--effective defensive recognition<br />--quickness of defensive reaction<br />--energy and hustle on defense<br />--effective taking of charges (causing a driving offensive player to be called for an offensive foul)<br />--effective hustling after loose balls<br />--effective calling of time-outs, for example, to avoid a jump ball being called<br /><br />You need to make the most reasonable statistical estimate you can make even though you lack hard data. 
So you simply look at any player you are rating and ask yourself: how good is that player, compared with other players, in the above (and perhaps a small number of other related) actions that prevent the other team from scoring points it would have scored.<br /><br />Notice I said “compared with other players”. This is very important. You are making relative statistical estimations. In order to give any player an HDA which is a good estimate, you need to be aware of how good that player is compared with as many other players as possible.<br /><br />In fact, the best way to do this (at least the first time you do it) is to estimate HDAs for many players simultaneously and then bring those HDAs to the calculator. When you do this you want to keep changing your HDA estimations until they all “fit together,” until in other words they make as much sense as possible and seem to be as close to perfect as possible.<br /><br />After you have experience you will not necessarily have to do it this way; once you instinctively know the scale and once you are extremely familiar with how players stack up defensively, you can instantly rate one player alone without doing a lot of HDA estimations and corrections beforehand.<br /><br /><span style="color:#ff6600;">THINGS YOU MUST NOT CONSIDER WHEN YOU DO YOUR HIDDEN DEFENDING ESTIMATES<br /></span>This is very, very important. When correctly estimating HDA you MUST avoid bringing in things that are not part of HDA.<br /><br />Be very careful not to simply rate a player’s defensive or overall style: this is a relatively common mistake that many basketball fans and sometimes coaches make. Managers, though, seldom consider a player’s style when deciding on acquisitions and contracts and that is one of the reasons they are managers.<br /><br />For about the same reason, be careful not to consider a player’s personality when you estimate his hidden defending. 
Remember, styles and personalities are completely irrelevant: the only thing ultimately relevant is whether and to what extent what the player does on defense prevents what would have been scores from being scores.<br /><br />You also must NOT include tracked defensive actions in your estimations:<br /><br />--Defensive Rebounds<br />--Steals<br />--Blocks<br />--Personal Fouls<br /><br />You must DISREGARD all of these while estimating hidden defending. It is crucial that these things not be thought of or considered in your estimates, because these things are already included in the calculator (outside of HDA). Be warned that there are some players who get a lot of the above but are actually not very good hidden defenders, and vice versa: there are some players who don't make many defensive rebounds, steals, or blocks but are actually very good as far as hidden defending is concerned. There is some correlation between HDA and those four items, but less than you might think, and for some players there is virtually no correlation at all.<br /><br />To emphasize, when you estimate how good a player's hidden defending is, do not be biased either for or against players who make a lot of defensive rebounds, blocks, and/or steals.<br /><br />In fact, players who make a large number of defensive rebounds and blocks often have lower hidden defending ratings than do "defensive specialists" who do not make a truly large number of defensive rebounds and blocks. This makes sense insofar as it is not automatic or all that easy for players to be extremely good at rebounding and blocking and at, for example, man to man defending at the same time. To some extent with defending, it is an either/or proposition. Great defenders can be either great rebounders and blockers, or alternatively they can be great man to man defenders and defensive recognizers and rotators. 
Only a small number of great defenders are great at both tracked and hidden defending.<br /><br />There can be any number of combinations. For example, there will also be players who are average in rebounding and a little above average in man to man defending. It's just that it would be rare for a player to be an outstanding rebounder, blocker, and man to man defender all at the same time.<br /><br />And obviously, you should avoid bias for or against good offensive players. Or for or against bad offensive players. Quite honestly, how good or how bad a specific player is on offense has almost nothing to do with how good or bad that player is on defense, although broadly speaking across the whole universe of players there is a limited degree of correlation.<br /><br /><span style="color:#cc9933;"><span style="color:#ff6600;">NOW THAT YOU UNDERSTAND EXACTLY WHAT YOU ARE DOING IN THEORY, THIS IS HOW TO PROCEED</span><br /></span>What you want is your best estimate of the combined effect of the quantity and the quality of the player's hidden defending actions. Both the quantity and the quality must be considered, not just one or the other. The best defenders use high quality hidden defending most of the time. Some defenders who are just "ok" will be high quality hidden defenders, but will be too lazy (or whatever) to show the high quality very often. Other defenders who are just "ok" will be players who try hard most of the time but simply don't at this time have the skills needed for high quality hidden defending. The higher the quality of the defending, the more often it will turn what would have been scores into stops.<br /><br />The most important thing, of course, is to be objective and fair, which is really saying about the same thing with two different words. 
To sum this up in one sentence, you have to judge how good a player is, relative to other players, in terms of the quantity and the quality of his hidden defending.<br /><br />Once you have in your head how good the player is relative to all other players, use the following to give that player a hidden defending rating. The percentage shown on each of the following lines is how the player stacks up compared to all other players with respect to hidden defending:<br /><br /><span style="color:#ff6600;">HIDDEN DEFENDING ESTIMATION SCALE</span><br />1% &gt; better than 99% of other players: about .320<br />2% &gt; better than 98% of other players: about .310<br />5% &gt; better than 95% of other players: about .295<br />10% &gt; better than 90% of other players: about .275<br />20% &gt; better than 80% of other players: about .250<br />30% &gt; better than 70% of other players: about .220<br />40% &gt; better than 60% of other players: about .180<br />50% &gt; better than 50% of other players: about .140<br />60% &gt; better than 40% of other players: about .110<br />70% &gt; better than 30% of other players: about .085<br />80% &gt; better than 20% of other players: about .065<br />90% &gt; better than 10% of other players: about .045<br />95% &gt; better than 5% of other players: about .030<br />98% &gt; better than 2% of other players: about .020<br />99% &gt; better than 1% of other players: about .010<br /><br />If you are estimating more than one player, then when you are done (if you have not already done so, as recommended above) review all your estimates by making sure that your players correctly rank according to who really is better and who is worse with respect to hidden defending.<br /><br /><span style="color:#ff6600;">VERY HIGH, VERY LOW, AND VERY AVERAGE RATINGS</span><br />Theoretically, a player who never changes any shots from makes to misses would have a hidden defending rating of as low as .000. 
But even most of the bad defensive players, in terms of "made them miss" defending via untracked actions, will generally have hidden defending ratings of between about .040 and .060. Players exactly in the middle in terms of hidden defending will have hidden defending ratings of between .130 and .150. And the best defensive players in terms of hidden defending will generally have hidden defending ratings of between .250 and .280, although the absolute best such player in your League can theoretically deserve a rating of up to an absolute maximum of .330.<br /><br /><span style="color:#ff6600;">========== EVALUATION OF CALCULATED RATINGS ==========</span><br />The following evaluation scales are as of 2010 the same ones used for the high level professional players of the NBA. Since obviously the players in your League might not be as great, you may want to adjust the scales (unless you want to compare them relative to NBA players). You will need to compare and contrast many players at the level you are looking at in order to come up with a completely valid evaluation scale that will be customized to the level of players you are looking at. To make things easier, if and when you construct your own scale, you can keep the descriptions and change only the numbers. Of course, you will probably be lowering the numbers (thus making it easier for players to reach categories).<br /><br />Every Quest for the Ring Evaluation Scale uses terms that the vast majority of basketball fans, coaches, and managers understand as important descriptions of just how valuable players are to the team and also as explaining the usual role of players.<br /><br />At one time there was just one QFTR evaluation scale but now there are more than a dozen. 
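The hidden defending estimation scale given in the previous section can be written down as a simple percentile lookup. This Python sketch is ours, not QFTR's; the approximate values follow the 0-to-.330 scale described in this guide.

```python
# The hidden defending estimation scale as a lookup: given your judgment
# of the share of players a defender is better than, return the
# approximate HDA on the 0-to-.330 scale described in this guide.

HDA_SCALE = [  # (better than this % of other players, approximate HDA)
    (99, 0.320), (98, 0.310), (95, 0.295), (90, 0.275),
    (80, 0.250), (70, 0.220), (60, 0.180), (50, 0.140),
    (40, 0.110), (30, 0.085), (20, 0.065), (10, 0.045),
    (5, 0.030), (2, 0.020), (1, 0.010),
]

def estimate_hda(better_than_pct):
    """Return the scale value at or just below the given percentile."""
    for pct, hda in HDA_SCALE:
        if better_than_pct >= pct:
            return hda
    return 0.010  # bottom of the scale

print(estimate_hda(90))   # a top-10% hidden defender: 0.275
print(estimate_hda(50))   # an exactly average defender: 0.14
```

As the guide recommends, estimate percentiles for many players at once and keep adjusting them until the resulting HDAs rank everyone sensibly.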
In giving you the following scales, we will keep things as simple as possible without sacrificing high value and quality.<br /><br /><span style="color:#ff6600;">EVALUATING A SINGLE GAME OR A SMALL NUMBER OF GAMES<br /></span>Use the following scale if:<br />--You are using HDA<br />--You want to rate players in general, without regard to position<br />--You are rating a single game or more than a game but less than 300 minutes of playing time<br /><br />Perfect Player for all Practical Purposes / Major Historic Super Star 1.200 and more<br />Historic Super Star 1.080 1.199<br />Super Star 0.960 1.079<br />A Star Player / A well above normal starter 0.860 0.959<br />Very Good Player / A solid starter 0.780 0.859<br />Major Role Player / Good enough to start 0.700 0.779<br />Good Role Player / Often a good 6th man, can possibly start 0.620 0.699<br />Satisfactory Role Player / Generally should not start 0.540 0.619<br />Marginal Role Player / Should not start except in an emergency 0.460 0.539<br />Poor Player / Should never start 0.380 0.459<br />Very Poor Player 0.300 0.379<br />Extremely Poor Player 0.299 and less<br /><br />Use the following scale if:<br />--You are NOT using HDA even though it is very recommended<br />--You want to rate players in general, without regard to position<br />--You are rating a single game or more than a game but less than 300 minutes of playing time<br /><br />Perfect Player for all Practical Purposes / Major Historic Super Star 1.060 and more<br />Historic Super Star 0.940 1.059<br />Super Star 0.820 0.939<br />A Star Player / A well above normal starter 0.720 0.819<br />Very Good Player / A solid starter 0.640 0.719<br />Major Role Player / Good enough to start 0.560 0.639<br />Good Role Player / Often a good 6th man, can possibly start 0.480 0.559<br />Satisfactory Role Player / Generally should not start 0.400 0.479<br />Marginal Role Player / Should not start except in an emergency 0.320 0.399<br />Poor Player / Should never start 
0.240 0.319<br />Very Poor Player 0.160 0.239<br />Extremely Poor Player 0.159 and less<br /><br /><span style="color:#ff6600;">EVALUATING A SEASON OR AT LEAST MANY GAMES</span><br />Use the following scale if:<br />--You are using HDA<br />--You want to rate players in general, without regard to position<br />--You are rating at least 300 minutes of playing time up to an entire season. But if you are rating a player for more than a season (for two seasons or for a career, for example) then do not use this scale; there is a better one below to use.<br /><br />Perfect Player for all Practical Purposes / Major Historic Super Star 1.100 and more<br />Historic Super Star 1.000 1.099<br />Super Star 0.900 0.999<br />A Star Player / A well above normal starter 0.820 0.899<br />Very Good Player / A solid starter 0.760 0.819<br />Major Role Player / Good enough to start 0.700 0.759<br />Good Role Player / Often a good 6th man, can possibly start 0.640 0.699<br />Satisfactory Role Player / Generally should not start 0.580 0.639<br />Marginal Role Player / Should not start except in an emergency 0.520 0.579<br />Poor Player / Should never start 0.460 0.519<br />Very Poor Player 0.400 0.459<br />Extremely Poor Player 0.399 and less<br /><br />Use the following scale if:<br />--You are NOT using HDA even though it is very recommended<br />--You want to rate players in general, without regard to position<br />--You are rating at least 300 minutes of playing time up to an entire season. 
But if you are rating a player for more than a season (for two seasons or for a career, for example) then do not use this scale; there is a better one below to use.<br /><br />Perfect Player for all Practical Purposes / Major Historic Super Star 0.960 and more<br />Historic Super Star 0.860 0.959<br />Super Star 0.760 0.859<br />A Star Player / A well above normal starter 0.680 0.759<br />Very Good Player / A solid starter 0.620 0.679<br />Major Role Player / Good enough to start 0.560 0.619<br />Good Role Player / Often a good 6th man, can possibly start 0.500 0.559<br />Satisfactory Role Player / Generally should not start 0.440 0.499<br />Marginal Role Player / Should not start except in an emergency 0.380 0.439<br />Poor Player / Should never start 0.320 0.379<br />Very Poor Player 0.260 0.319<br />Extremely Poor Player 0.259 and less<br /><br /><span style="color:#ff6600;">EVALUATING MULTIPLE SEASONS AND CAREERS</span><br />Use the following scale if:<br />--You are using HDA<br />--You want to rate players in general, without regard to position<br />--You are rating more than a season (generally two or more seasons, up to and including a career).<br />--Note, HDA is considered mandatory for multiple season and career evaluations; therefore, there is no scale shown here for HDA not being used.<br /><br />Perfect Player for all Practical Purposes / Major Historic Super Star 1.000 and more<br />Historic Super Star 0.925 0.999<br />Super Star 0.860 0.924<br />A Star Player / A well above normal starter 0.800 0.859<br />Very Good Player / A solid starter 0.750 0.799<br />Major Role Player / Good enough to start 0.700 0.749<br />Good Role Player / Often a good 6th man, can possibly start 0.650 0.699<br />Satisfactory Role Player / Generally should not start 0.600 0.649<br />Marginal Role Player / Should not start except in an emergency 0.550 0.599<br />Poor Player / Should never start 0.500 0.549<br />Very Poor Player 0.450 0.499<br />Extremely Poor Player 0.449 and less<br /><br /><span style="color:#ff6600;">ADJUSTING FOR POSITIONS: HOW TO "WASH OUT" POSITION BIASES WHEN EVALUATING PLAYERS</span><br />Not all positions are created equal. These are the average ratings by position among all NBA players who play 300 minutes or more. There are very few small forwards and shooting guards who are superstars. Most (but definitely not all) superstars are players who can play point guard, power forward, or center.<br /><br />Point Guard .750<br />Shooting Guard .640<br />Small Forward .640<br />Power Forward .720<br />Center .750<br />All Positions / All Players (NBA Overall Average) .700<br /><br />As you can see, point guards and centers on average have RPRs about .050 higher than the NBA average. Power forwards average out to about .020 higher than the NBA average. Shooting guards and small forwards average out to about .060 below the NBA average.<br /><br />What if you want to evaluate players after taking out the position advantages and disadvantages shown just above? What if you want to, in other words, compare all players at all positions on a completely even plane? When you do this, you will be adjusting reality a little for the sake of getting a direct, fair comparison of all players.<br /><br />If you want to rate your players after removing any advantage or disadvantage they get from their position, you could adjust the scales above by the difference between the average for the position and the overall NBA average. Quest for the Ring of course has these position-specific evaluation scales.<br /><br />But you don't need them; you can accomplish the same thing by changing the calculated Ratings themselves. Then you can use the same scales above with your new, adjusted Ratings. 
To do it this way, add or subtract the following from your players’ ratings:<br /><br />Point Guard: Subtract .050; for example, a .750 becomes a .700<br />Shooting Guard: Add .060; for example, a .640 becomes a .700<br />Small Forward: Add .060; for example, a .640 becomes a .700<br />Power Forward: Subtract .020; for example, a .720 becomes a .700<br />Center: Subtract .050; for example, a .750 becomes a .700<br /><br />Now you can in effect compare all of your players without regard to position. For example, now you can fairly compare a shooting guard with a center.<br /><br /><span style="color:#ff6600;">SAVING DATA TO YOUR OWN COMPUTER</span><br />You can save your data (your ratings) all you wish but the calculator is copyrighted and it is illegal to place a copy of the calculator on any website.<br /><br /><span style="color:#ff6600;">CAUTIONS ABOUT REAL PLAYER RATINGS</span><br />See the <a href="http://nuggets1reference.blogspot.com/">User Guide for Real Player Ratings</a> for more detailed information about how to evaluate the ratings, and also for cautions about using the Ratings. The latest Guide will be found on the page that the above link leads to.<br /><br />As the main User Guide will inform you, although Real Player Ratings are very valid and extremely valuable, there are nevertheless reasons why they are not absolutely perfect and why they can not be the absolute final word on basketball players. See the cautions section of the <a href="http://nuggets1reference.blogspot.com/">User Guide</a> for complete details on this subject.Unknownnoreply@blogger.comtag:blogger.com,1999:blog-5772221547364193097.post-68648509673426127732010-05-23T14:59:00.001-07:002011-07-02T05:17:29.095-07:00User Guide for Real Player Rating Reports, May 2010<span style="color:#ff6600;">REAL PLAYER RATINGS BY TEAM USER GUIDE</span><br />
<span style="color:#cc9933;">SECTIONS UPDATED WITH THIS UPDATE</span><br />
--Strategically Using RPR (Most of the Evaluation scales were slightly improved.)<br />
--Defensive and Offensive Sub Ratings Section (The procedure for determining accurate and unbiased Hidden Defending Ratings has been extensively improved.)<br />
<br />
<span style="color:#cc9933;">SECTIONS<br />
</span>This guide has the following main sections, with sub sections as highlighted within each section.<br />
<br />
Introduction Section<br />
Cautions Section<br />
Strategically Using RPR Section<br />
Mechanics of Real Player Ratings and Real Player Production Section<br />
Defensive and Offensive Sub Ratings Section<br />
Summary of Primary Formulas Section<br />
<br />
<span style="color:#ff6600;">========== INTRODUCTION SECTION ==========</span><br />
<br />
<span style="color:#cc9933;"><span style="color:#cc9933;">INTRODUCTION TO THE CONCEPT OF REAL PLAYER RATINGS</span> </span><br />
The Real Player Rating (RPR) is a very carefully constructed, all inclusive performance measure. Most things of value that a basketball player can do are carefully recorded by official NBA scorekeepers, who sit right along the edge of the court at mid-court and who are trained to observe and record everything that happens in a game.<br />
<br />
Since these days all of these counts are immediately input into continually updated public databases online, such as at ESPN, it is possible to combine everything together in real time into an overall performance measure for each player that is intended to evaluate how valuable each player is toward winning games. This is what the RPR does.<br />
<br />
Real Player Rating or RPR is everything tracked by scorekeepers that a player does, good and bad, added and subtracted (with negative things such as turnovers and missed shots being subtracted). Very carefully calibrated factors, or weights, are applied to the different elements.<br />
<br />
The calibration, as you would expect, is done to reflect the different value toward winning games that different actions on the court have. These factors are subject to very small annual adjustments as knowledge about how games are won and lost is fine tuned.<br />
<br />
Then, all of the good and bad combined together is divided by minutes, yielding RPR, which is really the rate per unit of time of the good minus the rate per unit of time of the bad. This is what we need to determine the overall quality or value of the player toward the objective of winning basketball games.<br />
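As a sketch of that arithmetic, here is a minimal Python illustration: weighted good actions minus weighted bad actions, divided by minutes. The weights below are placeholders invented for this example; QFTR's actual calibrated factors are not published in this excerpt.

```python
# Minimal sketch of the RPR arithmetic: weighted good actions minus
# weighted bad actions, divided by minutes played. The weights are
# PLACEHOLDERS for illustration -- not QFTR's calibrated factors.

GOOD_WEIGHTS = {"points": 1.0, "rebounds": 0.7, "assists": 0.7,
                "steals": 1.0, "blocks": 0.7}
BAD_WEIGHTS = {"turnovers": 1.0, "missed_shots": 0.7,
               "personal_fouls": 0.4}

def real_player_rating(stats, minutes):
    """Net weighted production per minute of playing time."""
    good = sum(w * stats.get(k, 0) for k, w in GOOD_WEIGHTS.items())
    bad = sum(w * stats.get(k, 0) for k, w in BAD_WEIGHTS.items())
    return (good - bad) / minutes

game = {"points": 28, "rebounds": 8, "assists": 6, "steals": 2,
        "blocks": 1, "turnovers": 3, "missed_shots": 10,
        "personal_fouls": 2}
print(round(real_player_rating(game, 36), 3))  # 0.825
```

The important structural point is the division by minutes at the end: RPR is a rate, not a total.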
<br />
<span style="color:#cc9933;">QUALITY (RPR) AND QUANTITY (RPP) SUMMARIZED</span><br />
RPR reports show for each player the RPR (Real Player Rating), which tells you how well a player did (all the good things minus all the bad things) out on the court per unit of time. The RPP (Real Player Production) report tells you how much in total (the sum of the good things minus the sum of the bad things) a player did out on the court, without regard to playing time.<br />
<br />
Many and maybe most sports watchers, and an unknown but probably disturbingly large number of sports managers, make the mistake of exaggerating the importance of quantity and, to some extent, overlooking quality. These reports allow you to expand your horizons. These reports put quantity and quality side by side, which is extremely valuable, because both are roughly equally important in explaining accurately why and how the team is playing the way it is.<br />
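The quality-versus-quantity distinction can be illustrated with a toy Python example (hypothetical players and numbers, with a single "net production" number standing in for the full good-minus-bad calculation): a low-minutes reserve can lead in quality (RPR) while the heavy-minutes starter leads in quantity (RPP).

```python
# Quality vs. quantity: RPR is net production per minute, RPP is total
# net production regardless of minutes. "net_production" stands in for
# the full good-minus-bad calculation; the numbers are hypothetical.

def rpr(net_production, minutes):
    return net_production / minutes  # quality: rate per minute

def rpp(net_production):
    return net_production  # quantity: the total itself

reserve_net, starter_net = 16.0, 28.0
print(rpr(reserve_net, 20), rpp(reserve_net))  # 0.8 16.0
print(rpr(starter_net, 40), rpp(starter_net))  # 0.7 28.0
```

The reserve grades higher per minute even though the starter produced more in total, which is exactly why the two numbers belong side by side.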
<br />
<span style="color:#cc9933;"><span style="color:#cc9933;">SIMPLICITY, RELIABILITY, TRANSPARENCY, AND FOCUS ONLY ON "WINNING POWER"</span><br />
</span>Like everything statistical we do at Quest, we have kept this process as simple and reliable as possible, while at the same time spending as much time as necessary on design, quality control and performance evaluation. Unlike some other practitioners, we avoid what you might call layered complexity, which leads to formulas which can not be understood without studying them and which high traffic sites will not show on any of their web pages for fear that the public will rebel against the statistic. At Quest, we think that our rating systems can be understood and evaluated by most high school graduates, and we keep everything out in the open through User Guides such as this one.<br />
<br />
Basketball statistical gurus frequently forget that no matter how intricate their formulas are, they are very heavily manipulating process items such as assists and rebounds while most likely spending very little time on how these things fit together to produce wins and losses. We think that they are making the mistake, whether or not they are aware of it, of injecting value adjustments regarding how they think the game should be played and value adjustments about which playing styles are better than others.<br />
<br />
By contrast, the primary objective of the relative simplicity (a small number of formulas, to be more precise) of the Quest RPR is to avoid all value judgments about how the game should be played and how players should play. We don't care about the styles, only about the results. The RPR is concerned first and in fact exclusively with the impact each player has on the potential for winning games.<br />
<br />
Quest thinks it makes more sense to minimize the manipulation of process items, and to focus much more on coming up with the best possible estimation of how the process items impact points for and points against in games, which in turn of course determines wins. Whereas other "advanced statistics" might give you more depth and flavor regarding how a particular player plays (his style) the Quest RPR is a way for the reader to, in a very quick and easy way, determine what the overall value of the player is with respect to producing wins or losses.<br />
<br />
In other words, the foundation of RPR is and will always be measurement of a player's power to help win basketball games, whereas the foundation of other, more complicated statistics may include preferences about how the game should be played and about the style of players, with winning power measured less accurately as a result of those focuses.<br />
<br />
<span style="color:#cc9933;">IMPORTANCE OF PER UNIT OF TIME</span><br />
Because it is per time, RPR is immediately in the running to be the best possible measure of the net quality of a basketball player, or simply "how good" the player is (on average) for each minute of playing time. All per game statistics are inferior to any reasonably good per unit of playing time measure. For example, points per minute (or per 40 minutes or any number of minutes) is a much better thing to look at than points per game.<br />
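A quick hypothetical in Python shows why per-minute beats per-game (the players and numbers are invented for illustration):

```python
# Per-game statistics can mislead: player A leads in points per game,
# but player B scores more per minute of playing time. Hypothetical data.

players = {
    "A": {"points_per_game": 20, "minutes_per_game": 40},
    "B": {"points_per_game": 15, "minutes_per_game": 25},
}

for name, p in players.items():
    print(name, p["points_per_game"] / p["minutes_per_game"])
# A 0.5  (the per-game leader)
# B 0.6  (the per-minute leader)
```

Player A's 20 points per game look better than B's 15 until playing time is taken into account, at which point B is clearly the more efficient scorer.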
<br />
<span style="color:#cc9933;">REAL PLAYER RATING REPORTS CAN BE FOR THE WHOLE NBA, FOR A TEAM, FOR A GAME, OR FOR A CAREER</span><br />
With a Real Player Ratings Report for the entire NBA, you can see very rapidly who the best players in the NBA have been during the course of the season.<br />
<br />
With a Real Player Ratings Report for a Team for the Regular Season, you can see very rapidly who the best players on the team have been during the course of the season. You can use this information to investigate the possibility that the coach is not perfect. Well, we know that no coach is perfect. So really, with the benefit of 20/20 hindsight, we can investigate and determine what mistakes the coach has apparently made with regard to rotations and playing times. Furthermore, by using the Ratings, basketball knowledge, a little creativity, and logical deduction, we can also investigate and perhaps determine whether the coach has made incorrect decisions regarding which strategies and plays are best for his team's offense and defense.<br />
<br />
Real Player Ratings for games are a major part of Reports called Ultimate Game Breakdowns.<br />
<br />
Real Player Ratings for a player's career, year by year and in total, are obviously very valuable looks at how the player changed over the years. Of course, most players get better from how they started in their rookie years.<br />
<br />
[End of Introduction Section.]<br />
<span style="color:#ff6600;"></span><br />
<span style="color:#ff6600;">============ CAUTIONS SECTION ============</span><br />
<br />
To be completely honest and clear, although it is the best possible overall real life measure, RPR is still not a perfect or absolute, "final word" measure on any player. In general, you must remember that all performance measures including this one for the NBA are relative rather than absolute measures. The ratings are relative to the team context. Players do not exist in a vacuum, especially in basketball.<br />
<br />
Several specific cautions will now be described.<br />
<br />
<span style="color:#cc9933;">RPRs ARE RELATIVE TO TEAMS, AND ARE SUBJECT TO THE CROWDING OUT EFFECT</span><br />
Because basketball is a team game, more so than most other sports, players who are on really good teams might have their performances "crowded out" to some extent by teammates who are just as good, and especially by teammates who are even better. So, paradoxically, players at all rating levels who are on better teams will generally have slightly lower ratings than they would have on a weaker team. Conversely, players at all rating levels who are on bad teams will have slightly higher ratings than they would have on a better team. Numerically, a player on the best NBA team could easily have an RPR that is 20% less than what it would be if he were on the worst NBA team.<br />
<br />
Always remember this important point, which we restate for emphasis. If a good player is on a good team where there are a number of players as good as or even better than he is, then his RPR will likely be lower than it would be if he were on a weaker team. <br />
<br />
Position in the team context can impact RPR as well. If a good player plays a position for which his team has an even better player, then the better player will probably crowd out the lesser player to one extent or another, so that the lesser player's RPR will be lower than it would be if he were the best player at the position on the team. Conversely, the best player at a position on a bad team can have an RPR that is higher than it would be on many other teams.<br />
<br />
<span style="color:#cc9933;">ACTUAL RPR DIFFERENCES BETWEEN TEAMS ARE GREATER THAN APPARENT DIFFERENCES</span><br />
An important implication of crowding out and relativity is that the average RPR among the best five, six, or seven players of the best teams will in most cases understate the real "potential RPR" of those players, where potential RPR is RPR with the least possible crowding out. In other words, the potential RPRs of players on the best teams are higher than their actual RPRs. Conversely, the long-run, true potential RPRs of the apparently better players on bad teams are actually lower than their actual RPRs.<br />
<br />
This plays out at the team level in a very important way. Always remember this: the actual underlying gap in the real quality of the players between good and bad teams is greater than the actual RPRs are indicating. The true RPR differential between the best and the worst NBA team could easily be 20-30% greater than the apparent differential. In other words, team RPR averages understate real quality differences between teams.<br />
<br />
<span style="color:#cc9933;">PLAYERS NEED THE BALL FOR HIGHER RPRs</span><br />
Players need not only playing time but possession of the ball in order to produce many of the things that count in the ratings. So if, for whatever reason, a player does not get the ball as often as he would on a different team, or with a different coach, or with whatever other circumstances you can dream of, then his RPR will be lower than what it could or would be.<br />
<br />
<span style="color:#cc9933;">DO NOT FORGET WHAT THE RATINGS YOU ARE LOOKING AT ARE MEASURING</span><br />
Many ratings that you see on Quest are only for the current season. It has recently been discovered that many players' ratings often change up or down by 10% from one year to the next even on the same team, and changes of about 15% up or down are not unusual. Moreover, over the course of a player's entire career, RPR ratings by year can and often do vary by 50% or even more when you compare the highest year or two to the lowest year or two. Although there are a fair number of exceptions, many NBA players have much lower RPRs in their first year or two in the NBA than they will eventually average.<br />
<br />
<span style="color:#cc9933;">INJURIES AND RECOVERIES FROM INJURIES</span><br />
Players often play with minor injuries. They also often start playing again before they are 100% recovered from an injury. They sometimes even postpone surgery that has become necessary due to injury until the off-season, and play with some type of impairment in the meantime. In all of these situations, RPR will be lower than it would be were the player not dealing with any injury.<br />
<br />
<span style="color:#cc9933;">MAGNITUDE OF THE ADJUSTMENT FOR HIDDEN DEFENDING</span><br />
Those who think defense in basketball is much more important than offense will consider the magnitude of the defensive adjustment to be inadequate. They will contend that defensive specialists who are poor offensive players should have a higher rating.<br />
<br />
While we realized that we needed to adjust the ratings for defending not tracked by NBA scorekeepers, and while we put in a huge effort to come up with a valid adjustment system, we continue to believe that players who are great defensive specialists but poor or undeveloped offensive players should in most cases rank no higher than the Major Role Player/Good Enough to Start level, which is the level just below the Solid Starter level. In a few relatively rare cases, defensive specialists who have decent offensive games will be ranked as Solid Starters.<br />
<br />
None of this is to say that having a "defensive specialist" is a disqualifier to winning the Quest. It is merely a caution that coaches often make the mistake of giving them too much playing time.<br />
<br />
<span style="color:#cc9933;"><span style="color:#cc9933;">AVOID BEING CONFUSED BETWEEN RPR AND RPP AND DO NOT MINIMIZE THE IMPORTANCE OF RPP</span><br />
</span>Do not forget that RPR is a per time measure. RPP and not RPR measures total impact of a player. RPR measures how valuable a player has been toward winning basketball games, per unit of time.<br />
<br />
Do not make the mistake of ignoring the importance of RPP, now improved to TRPP. Players with the highest TRPP are showing they have the stamina, knowledge, and trust of the coaching staff to be able to get all the playing time needed to produce that. So even if their RPRs are a little lower than you might expect, players with the highest TRPPs should still be considered as extremely important and valuable players.<br />
<br />
Having said that, one of the most important objectives for any top Coach must be to make sure that his highest RPR players are also found at or close to the top of the TRPP list.<br />
<br />
<span style="color:#cc9933;">THE CLASSIFICATION SCHEME IS RELATIVE TOO</span><br />
The classification scheme, like the ratings, is relative. A role player on a really good team might be a solid starter on a bad team. A star on a bad team might be just a major role player on a really good team. And so on and so forth. A player is a star, a role player, or whatever only in the contexts of the particular season and the particular team involved. If he was on a different team, or if it was a different year, his classification could easily be different.<br />
<br />
So to conclude the Cautions section of this guide, don't think of RPR as the ultimate gospel or bible on how good players are. But do think of it as an extremely accurate and reliable summary of how good the players actually have been in real life in the specific time (season or playoffs) and place (team) involved.<br />
<br />
[end of cautions section]<br />
<br />
<span style="color:#ff6600;">===========STRATEGICALLY USING RPR SECTION============</span><br />
<br />
<span style="color:#cc9933;">RELATIVITY ADJUSTMENT FOR PROJECTED RPR FOR PLAYERS CHANGING TEAMS</span><br />
When you are trying to judge how good a player might be if he were on another team, then due to the relativity factor discussed previously, you need to adjust the expected RPR upward if the player is moving to a lower quality team and downward if he is moving to a higher quality team. The maximum such adjustment necessary is believed to be about 20%, with that full amount applied only when the player is moving from one of the very worst one or two teams to one of the very best one or two teams, or vice versa.<br />
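As a rough illustration of this relativity adjustment (the function name, the linear scaling, and the team-rank inputs below are our own assumptions for the sketch, not part of the official RPR methodology), projected RPR can be scaled by up to the 20% maximum in proportion to how far the player moves between team quality ranks:<br />

```python
def project_rpr(current_rpr, old_team_rank, new_team_rank,
                n_teams=30, max_adj=0.20):
    """Illustrative relativity adjustment for a player changing teams.

    Teams are ranked 1 (best) through n_teams (worst). Moving to a
    worse team raises projected RPR and moving to a better team lowers
    it, with the full 20% applied only for a worst-to-best move or
    vice versa.
    """
    # Fraction of the maximum possible quality gap actually traversed
    gap = (new_team_rank - old_team_rank) / (n_teams - 1)
    return current_rpr * (1 + max_adj * gap)

# A .750 player moving from the best team (rank 1) to the worst
# (rank 30) projects to about .900; the reverse move projects to
# about .600, and no team change leaves the projection unchanged.
```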
<br />
For players changing teams, RPR changes in the 5-15% range will be much more common than changes of about 20%, simply because most teams are neither among the very worst nor among the very best teams.<br />
<br />
On top of RPR changes due to different teams, remember that RPRs often change by 10-15% from one year to the next regardless of team. The combined RPR change for a player changing teams could therefore be as much as about 35%. This would be the case if a player was intrinsically 15% better in the same year and also moved from one of the very best teams to one of the very worst teams.<br />
<br />
<span style="color:#cc9933;">GREAT VARIABILITY OF PLAYER RPRs FOR INDIVIDUAL GAMES</span><br />
Fewer individual game ratings will closely track the players' overall season averages than you might think. This is because one of the interesting things about basketball, and something that makes it different from most other sports, is that "how good" a player is varies radically from game to game. The best players sometimes have terrible games where they do almost nothing, while players who normally do not do much can once in a while have outstanding games, at least when measured per minute on the court. If you looked only at actual production, and never at a reserve player's Real Player Rating, you would hardly notice any of his unusually outstanding games, since players who normally do not do much will normally not get much playing time.<br />
<br />
<span style="color:#cc9933;">INTERACTIONS BETWEEN PLAYING TIMES, RPRs, TRPPs, AND THE NEEDS OF TEAMS</span><br />
There are certain things that only certain players can do very well, and if those things are crucial for the team, then those players will have to play more minutes than they might otherwise play. The extra minutes might tend to reduce the player's Real Player Rating, while his total production will rise with the additional minutes. So to fairly and completely evaluate any player, you must always look at both the Real Player Rating (RPR) and the Real Player Production (RPP).<br />
<br />
Furthermore, it is strongly suspected that, in order to compete in the playoffs, a team must have as many players of as high a quality (RPR) as possible, while at the same time having at least one or two players whose actual production is among the highest in the NBA regardless of exactly how high the RPRs happen to be. (All high RPP players will be relatively high RPR players; some will be higher than others.) Specifically for example, LeBron James' actual massive amount of production is most likely just as important to the Cleveland Cavaliers as is his RPR or, in other words, as is his rate of production. Similarly, Kobe Bryant's quantity is probably at least as important to the Lakers as is his quality.<br />
<br />
By contrast, teams such as the Denver Nuggets, who have instructed a potentially huge producer, Carmelo Anthony, to "not worry about scoring," may have made a fatal mistake relative to the playoffs, because teams with no extremely high rate producers may be generally doomed to lose quickly in the playoffs even if they have an unusually large number of high quality players as shown by RPR. This is because extremely high RPP players can by themselves "dominate a game" to some extent, meaning they can by themselves possibly win the game for their team, without the complications that come into play when you need to coordinate several high RPR but ultimately limited RPP players.<br />
<br />
Players who over the course of a season appear to rank higher in RPR (quality) but lower in RPP (quantity) may not be getting enough playing time. Players who over the course of a season appear to rank lower in RPR (quality) but higher in RPP (quantity) may be getting too much playing time. But as alluded to earlier, you must not automatically conclude this, because some skills are needed out on the court most of the time, yet may be available from only a small number of players on the roster. Such players may have to get more playing time due to that critical skill in short supply, even if their overall quality does not seem to justify all of that playing time.<br />
<br />
A relatively common reason for unusual playing time will be players who are either truly outstanding defenders (who get extra playing time) or truly bad defenders (who get their playing time reduced).<br />
<br />
Another common reason for extra playing time will be if a team has a point guard who has many more turnovers than the average point guard has. Because the point guard is so important, a good coach has to play his best playmaking guard at the position for a full set of minutes every game, almost regardless of how many turnovers that player commits. If you take out your designated point guard due to "too many turnovers," it may end up rather like cutting off your foot because you have a bad case of athlete's foot!<br />
<br />
<span style="color:#cc6600;">EVALUATION OF REAL PLAYER RATINGS</span><br />
<br />
<span style="color:#cc9933;">EVALUATION SCALE FOR REGULAR SEASONS</span><br />
--Meaningful regular season ratings with high statistical validity are not possible until about Jan. 20 of each year.<br />
--The following scale assumes that the Hidden Defending Adjustments have been correctly done and included.<br />
<br />
Major Historic Super Star / "Perfect Player" 1.100 and more<br />
Historic Super Star 1.000 – 1.099<br />
Super Star 0.900 – 0.999<br />
A Star Player / A Well Above Normal Starter 0.820 – 0.899<br />
Very Good Player / A Solid Starter 0.760 – 0.819<br />
Major Role Player / Good Enough to Start 0.700 – 0.759<br />
Good Role Player / Often a Good 6th Man 0.640 – 0.699<br />
Satisfactory Role Player 0.580 – 0.639<br />
Marginal Role Player 0.520 – 0.579<br />
Poor Player 0.460 – 0.519<br />
Very Poor Player 0.400 – 0.459<br />
Extremely Poor Player 0.399 and less<br />
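The scale above amounts to a simple lookup. This sketch (the function name and list layout are our own) walks the tier floors from the top down and returns the first classification whose floor the rating reaches:<br />

```python
def classify_regular_season(rpr):
    """Map a regular season RPR (with the Hidden Defending Adjustment
    included) to its classification, per the scale above."""
    scale = [
        (1.100, 'Major Historic Super Star / "Perfect Player"'),
        (1.000, "Historic Super Star"),
        (0.900, "Super Star"),
        (0.820, "A Star Player / A Well Above Normal Starter"),
        (0.760, "Very Good Player / A Solid Starter"),
        (0.700, "Major Role Player / Good Enough to Start"),
        (0.640, "Good Role Player / Often a Good 6th Man"),
        (0.580, "Satisfactory Role Player"),
        (0.520, "Marginal Role Player"),
        (0.460, "Poor Player"),
        (0.400, "Very Poor Player"),
    ]
    # First tier whose floor the rating meets or exceeds wins
    for floor, label in scale:
        if rpr >= floor:
            return label
    return "Extremely Poor Player"
```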
<br />
<span style="color:#cc9933;">SHOULD PLAYERS WITH LOW RATINGS BE PLAYING IN THE PLAYOFFS?</span><br />
For the two teams that play in the Championship, players rated below about .560 are almost always a drag on the Championship run. However, such players sometimes get playing time based largely on factors outside of RPR, but valued by coaches and other players, such as:<br />
<br />
--Great energy, effort, and hustle<br />
--Toughness, such as diving after loose balls and taking charges<br />
--Leadership and/or knowledge, especially in the case of veterans.<br />
--Perceived potential for future improvement in terms of real basketball production, especially in the case of young players<br />
<br />
But keep in mind also that the value of these qualities may be, and often is, overestimated, particularly with respect to playoff games. In general we see that players below .560 often get too much playing time in playoff games.<br />
<br />
Many playoff teams are forced to play players with ratings below .560, especially shooting guards and small forwards, simply because otherwise they would not have anyone at the position, or would not have at least eight players available to play. The fewer players below .560 a team has to play, the better. One of the worst playoff mistakes a coach can make is to play a player whose rating is below .560 for more minutes at a position than another player at that position whose rating is above .560.<br />
<br />
The advice regarding players rated even lower is simple and clear. Players rated below .500 should not be playing at all in the playoffs (except in garbage time) for teams that are serious about winning the Quest for the Ring. Coaches who play players with ratings lower than .500 in the playoffs when any player was available at the position whose rating was higher than .500 should in most cases be fired.<br />
<br />
<span style="color:#cc9933;">EVALUATION SCALE FOR SINGLE GAMES</span><br />
There are two scales for a single game: one for when no hidden defending adjustment is included, and one for when the hidden defending adjustment for a playoff game (new as of June 2010) is included.<br />
<br />
<span style="color:#cc9933;">EVALUATION SCALE FOR BASIC REAL PLAYER RATINGS FOR A SINGLE GAME WITH NO HIDDEN DEFENDING ADJUSTMENT</span><br />
Major Historic Super Star / "Perfect Player" 1.060 and more<br />
Historic Super Star 0.940 – 1.059<br />
Super Star 0.820 – 0.939<br />
A Star Player / A Well Above Normal Starter 0.720 – 0.819<br />
Very Good Player / A Solid Starter 0.640 – 0.719<br />
Major Role Player / Good Enough to Start 0.560 – 0.639<br />
Good Role Player / Often a Good 6th Man 0.480 – 0.559<br />
Satisfactory Role Player 0.400 – 0.479<br />
Marginal Role Player 0.320 – 0.399<br />
Poor Player 0.240 – 0.319<br />
Very Poor Player 0.160 – 0.239<br />
Extremely Poor Player 0.159 and less<br />
<br />
<span style="color:#cc9933;">EVALUATION SCALE FOR REAL PLAYER RATINGS FOR A SINGLE GAME WITH THE SINGLE GAME HIDDEN DEFENDING ADJUSTMENT INCLUDED</span><br />
Major Historic Super Star / "Perfect Player" 1.200 and more<br />
Historic Super Star 1.080 – 1.199<br />
Super Star 0.960 – 1.079<br />
A Star Player / A Well Above Normal Starter 0.860 – 0.959<br />
Very Good Player / A Solid Starter 0.780 – 0.859<br />
Major Role Player / Good Enough to Start 0.700 – 0.779<br />
Good Role Player / Often a Good 6th Man 0.620 – 0.699<br />
Satisfactory Role Player 0.540 – 0.619<br />
Marginal Role Player 0.460 – 0.539<br />
Poor Player 0.380 – 0.459<br />
Very Poor Player 0.300 – 0.379<br />
Extremely Poor Player 0.299 and less<br />
<br />
<span style="color:#cc9933;">EVALUATION SCALE FOR A CAREER (OF A PLAYER)</span><br />
Remember that many players have lower ratings in their first one to three years than they will have ultimately. Remember also that players in their last season or two before they retire will generally have lower ratings than their career average.<br />
<br />
All Career Real Player Ratings require a Hidden Defending Adjustment (HDA). For active players, the adjustment is the average of the HDA adjustments from the two most recent seasons. For retired players, the adjustment is the average for the third-to-last and fourth-to-last seasons (in other words, the final two years of the player's career are skipped and the two years prior to those are used).<br />
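The averaging rule above is mechanical enough to sketch in code (the function name and the list-based representation of seasons are our own assumptions):<br />

```python
def career_hda(season_hdas, retired):
    """Career Hidden Defending Adjustment, per the rule above.

    season_hdas: per-season HDA values in chronological order.
    Active players: average the two most recent seasons.
    Retired players: skip the final two seasons and average the
    third-to-last and fourth-to-last seasons.
    """
    pair = season_hdas[-4:-2] if retired else season_hdas[-2:]
    return sum(pair) / len(pair)
```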
<br />
<span style="color:#cc9933;">EVALUATION SCALE FOR A CAREER (OF A PLAYER) </span><br />
Perfect for all Practical Purposes / Major Historic Super Star 1.000 and more<br />
Historic Super Star 0.940 – 0.999<br />
Super Star 0.870 – 0.939<br />
A Star Player / A Well Above Normal Starter 0.800 – 0.869<br />
Very Good Player / A Solid Starter 0.750 – 0.799<br />
Major Role Player / Good Enough to Start 0.700 – 0.749<br />
Good Role Player / Often a Good 6th Man 0.650 – 0.699<br />
Satisfactory Role Player 0.600 – 0.649<br />
Marginal Role Player 0.540 – 0.599<br />
Poor Player 0.480 – 0.539<br />
Very Poor Player 0.420 – 0.479<br />
Extremely Poor Player 0.419 and less<br />
<br />
<span style="color:#cc9933;">NOTE ABOUT LOW CAREER RATINGS</span><br />
Players rated below about .600 in their careers often get playing time based largely on factors outside of RPR, but valued by coaches and other players, such as:<br />
<br />
--Great energy and hustle<br />
--Toughness, such as diving after loose balls and taking charges<br />
--Leadership and/or knowledge, especially in the case of veterans<br />
--Perceived potential for future improvement in terms of real basketball production, especially in the case of young players<br />
--See also the User Guide section called "Cautions"<br />
<br />
[End of the Strategic Use of Ratings Section]<br />
<br />
<span style="color:#ff6600;">==========MECHANICS OF REAL PLAYER RATINGS AND REAL PLAYER PRODUCTION==========<br />
</span><br />
<span style="color:#cc9933;">MINIMUM PLAYING TIME RULES</span><br />
As explained further in the adjustment for defending section of this Guide, only players who have played at least 300 minutes can be given a hidden defending rating and an overall RPR. Due to this minimum sample size requirement, regular season ratings for NBA players cannot be meaningfully done until at least mid-January. Generally, we need at least three players to have played 1,500 minutes or more before we can or will rate a team's players.<br />
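A minimal sketch of these gating rules (the function name and the list-of-minutes input are our own assumptions): a team is not rated until at least three players have 1,500+ minutes, and even then only players with 300+ minutes receive ratings:<br />

```python
def ratable_players(player_minutes, player_min=300,
                    anchors_needed=3, anchor_minutes=1500):
    """Return the minutes totals of players eligible for an RPR,
    or an empty list if the team cannot be rated yet."""
    anchors = sum(1 for m in player_minutes if m >= anchor_minutes)
    if anchors < anchors_needed:
        return []  # too early in the season to rate this team
    return [m for m in player_minutes if m >= player_min]
```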
<br />
<span style="color:#cc9933;">REAL PLAYER PRODUCTION</span><br />
Of course, looking at actual total production is extremely important too. Total production (everything positive added together and everything negative subtracted out) is simply called Real Player Production, or RPP.<br />
<br />
<span style="color:#cc9933;">BASIC VERSUS TOTAL REAL PLAYER PRODUCTION</span><br />
Basic RPP does not include any estimate of how much value from hidden defending was done by the player. Starting from June 2009, there is an estimate made for the value of hidden defending of each player, calculated from the following formula:<br />
<br />
Hidden Defending Production = Total Scored Defensive Production * (Hidden Defending Rating / Total Scored Defensive Rating)<br />
<br />
The validity of this adjustment is somewhat less than the high validity of the defending adjustments for RPR in general. Therefore, the user is advised to not go overboard in using the results.<br />
<br />
Then of course Total Real Player Production is Basic Real Player Production plus Hidden Defending Production. Note: At this time, RPP still refers to basic RPP, and so TRPP is the adjusted version.<br />
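The formula above, and the TRPP total built from it, can be written directly (the function names are ours, and the numbers in the comment are purely hypothetical):<br />

```python
def hidden_defending_production(scored_def_production,
                                hidden_def_rating, scored_def_rating):
    """Hidden Defending Production, per the formula above: scored
    defensive production scaled by the ratio of the hidden defending
    rating to the total scored defensive rating."""
    return scored_def_production * (hidden_def_rating / scored_def_rating)

def total_rpp(basic_rpp, hdp):
    """Total Real Player Production = Basic RPP + Hidden Defending Production."""
    return basic_rpp + hdp

# Hypothetical example: 400 units of scored defensive production, with a
# hidden defending rating of .10 against a scored defensive rating of .40,
# yields 100 units of hidden production on top of a basic RPP of 900.
```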
<br />
<span style="color:#cc9933;">SOURCE OF TRACKED BASKETBALL COUNTS</span><br />
The sources for the raw counts of scores, rebounds, steals, turnovers, and so forth are ESPN.com and NBA.com. Other sites used as important data sources are Basketball-reference.com, Knickerblogger.net, and USAToday.com.<br />
<br />
<span style="color:#cc9933;">NOTES ON SOME OF THE TECHNOLOGIES USED</span><br />
Microsoft Excel is extensively used to accurately produce RPR reports. Hundreds of Internet sites have been used to one extent or another in the development and in the continuing production of RPR and related reports. A very small number of sites, however, are relied on for the raw data, especially ESPN.com and NBA.com.<br />
<br />
<span style="color:#cc9933;">THE BASIC FORMULA</span><br />
For 2009-10, the RPR formula has been very carefully and accurately tweaked again and is set to be as follows:<br />
<br />
<span style="color:#cc9933;">POSITIVE FACTORS<br />
</span>Points 1.00 (at par)<br />
Number of 3-Pt FGs Made 1.00<br />
Number of 2-Pt FGs Made 0.40<br />
Number of FTs Made 0 (no "bonus" for a made free throw; just the point itself goes into RPR)<br />
<br />
Assists 2.15<br />
<br />
Offensive Rebounds 1.43<br />
Defensive Rebounds 1.31<br />
Blocks 1.80<br />
Steals 2.30<br />
<br />
<span style="color:#cc9933;">NEGATIVE FACTORS</span><br />
3-Pt FGs Missed -1.00<br />
2-Pt FGs Missed -1.03<br />
FTs Missed -1.3256<br />
<br />
Turnovers -1.95<br />
Personal Fouls -1.00<br />
<br />
<span style="color:#cc9933;">ACTUAL COMBINED AWARD OR PENALTY BY TYPE OF SHOT</span><br />
3-Pointer Made 4.00<br />
2-Pointer Made 2.40<br />
Free Throw Made 1.00<br />
3-Pointer Missed -1.00<br />
2-Pointer Missed -1.03<br />
Free Throw Missed -1.3256<br />
<br />
<span style="color:#cc9933;">ZERO POINTS: PERCENTAGES BELOW WHICH THERE IS A NEGATIVE NET RESULT</span><br />
3-Pointer 0 score % 0.200<br />
2-Pointer 0 score % 0.300<br />
1-Pointer 0 score % 0.570<br />
<br />
This means that if a player shoots a lower percentage than the corresponding zero point above, then his RPR is lowered rather than raised as a result of his shooting that type of shot.<br />
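The zero points follow directly from the factor weights: a made shot is worth its points plus the make bonus, a miss costs the listed penalty, and the break-even percentage solves p × award − (1 − p) × penalty = 0. This sketch (the dictionary layout and names are our own) reproduces all three zero points:<br />

```python
# Net award for a make (points plus bonus) and penalty magnitude for
# a miss, per shot type, taken from the factor weights above.
MAKE_AWARD = {3: 3 + 1.00, 2: 2 + 0.40, 1: 1 + 0.00}
MISS_PENALTY = {3: 1.00, 2: 1.03, 1: 1.3256}

def break_even_pct(shot_type):
    """Shooting percentage at which the shot type has zero net RPR
    effect: solve p * award - (1 - p) * penalty = 0 for p."""
    award, penalty = MAKE_AWARD[shot_type], MISS_PENALTY[shot_type]
    return penalty / (award + penalty)

# break_even_pct(3) ≈ 0.200, break_even_pct(2) ≈ 0.300, and
# break_even_pct(1) ≈ 0.570, matching the zero points above.
```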
<br />
<span style="color:#cc9933;">ASSISTS VERSUS TURNOVERS ZERO POINT </span><br />
Assist/Turnover Ratio That Yields 0 Net Points: 0.908<br />
<br />
Assist/turnover ratios greater than .908 are positive with respect to RPR. Conversely, any player who has an assist/turnover ratio of less than .908 is losing RPR when assists and turnovers are considered. He would have to either increase his assists or reduce his turnovers to make the combined effect of assists and turnovers positive.<br />
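The .908 zero point is just the ratio of the two factor weights: assists earn 2.15 each and turnovers cost 1.95 each, so assists exactly offset turnovers when assists × 2.15 = turnovers × 1.95:<br />

```python
ASSIST_WEIGHT = 2.15
TURNOVER_PENALTY = 1.95  # magnitude of the -1.95 turnover factor

# Assist/turnover ratio at which assists exactly cancel turnovers in RPR
zero_point = TURNOVER_PENALTY / ASSIST_WEIGHT  # ≈ 0.907, i.e. about .908
```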
<br />
<span style="color:#cc9933;">HIDDEN DEFENDING RATINGS</span><br />
A quality of defending rating of between 0 and .324 is added to the base, or unadjusted, RPR. In most cases, the hidden defending rating is between .050 and .230. See the Hidden Defending Adjustment to Real Player Ratings subsection that follows just below for a very detailed explanation of how we determine players' hidden defending ratings and how we combine them with base RPR.<br />
<br />
[End of Mechanics of Real Player Ratings and Real Player Production Section.]<br />
<br />
<span style="color:#ff6600;">======== DEFENSIVE AND OFFENSIVE SUB RATINGS ====================== </span><br />
<br />
<span style="color:#cc9933;">DEFENDING SUB RATINGS</span><br />
<span style="color:#cc9933;">DEFENSIVE AND OFFENSIVE SPECIALISTS</span><br />
Defensive specialists will have a much higher percentage of their overall RPR determined by the defensive sub rating than the League average of 45%. At the extreme, defensive specialists who are power forwards or centers will have defensive sub ratings that constitute as much as about 75% of their overall ratings. Due to the team nature of basketball, it is not an automatic disqualifier for winning the Quest to have a player unbalanced in this way, provided that the unbalanced player is truly outstanding defensively.<br />
<br />
Offensive specialists will have a much higher percentage of their overall RPR determined by the offensive sub rating than the League average of 55%. At the extreme, offensive specialists who are guards will have offensive sub ratings that constitute as much as about 85% of their overall ratings. Due to the team nature of basketball, it is not an automatic disqualifier for winning the Quest to have a player unbalanced in this way, provided that the unbalanced player is truly outstanding offensively.<br />
<br />
<span style="color:#cc9933;">THE HIDDEN DEFENDING ADJUSTMENT (HDA) TO REAL PLAYER RATINGS</span><br />
The hidden defending adjustment is on average 20.25% of overall RPR. Players range widely, though: the hidden defending component can be as little as virtually 0% and as much as about 45% of a player's full RPR.<br />
<br />
Obviously, some valuable things that basketball players do are never counted by scorekeepers. Many of these uncounted things are defensive, insofar as they prevent scores or reduce the scoring opportunities of the opponent. These things would include chasing down loose balls, taking charges, and good or great man to man defending. Man to man defending that is good enough to prevent what would have been a score from actually being a score is the most common and important basketball action which cannot be, and is not, tracked by NBA scorekeepers.<br />
<br />
Man to man defending, however, although the most important, is by no means the only defensive element that cannot be tracked or scored. Broadly, what is missed or hidden is everything the player does to make the opposing team's possessions worthless, other than what is already counted (rebounds, steals, blocks, and personal fouls). These untracked or hidden actions include:<br />
<br />
<span style="color:#cc9933;">SOME BASKETBALL FACTORS ESTIMATED BY THE HIDDEN DEFENDING ADJUSTMENT TO RPR</span><br />
--effective man to man defending<br />
--effective rotation / switching on defense, especially off screens and picks<br />
--effective pick and roll defense<br />
--effective defensive recognition<br />
--quickness of defensive reaction<br />
--energy and hustle on defense<br />
--effective taking of charges (causing a driving offensive player to be called for an offensive foul)<br />
--effective hustling after loose balls<br />
--effective calling of time-outs, for example, to avoid a jump ball being called<br />
<br />
These things would be counted by scorekeepers if it were possible. But, for example, there is no way to know exactly how many shots a good (or any kind of) defender has changed from being a score to a miss.<br />
<br />
Quest for the Ring has developed a statistically valid way to accurately estimate the untracked or hidden aspects of defending. This is described in complete detail in the latter sections of this Guide.<br />
<br />
<span style="color:#cc9933;">HDA IS AN UPGRADE TO DEFENSIVE EFFICIENCY RATINGS OF PLAYERS SEEN ON OTHER SITES</span><br />
There are a small number of sites that show you each player's "defensive efficiency," which is the number of points allowed per 100 possessions. This sounds nice, but it actually is not all that valuable. The Hidden Defending Adjustment of RPR is an upgrade over this.<br />
<br />
Probably the most important improvement is that in HDA, players' defending is standardized for team defending. With the defensive efficiency on certain other sites, players who are on good defensive teams have elevated ratings simply because they are on those teams. But obviously, many of the players on a good defensive team are producing that good defense, not just any one of them. The Hidden Defensive Adjustment corrects for this quality of team defense bias, which enables players on different teams to be fairly compared with respect to hidden defending.<br />
<br />
<span style="color:#cc9933;">THE HIDDEN DEFENDING ADJUSTMENT EXPLAINED</span><br />
It took almost two years of hoping, searching, planning, and then developing, but the basic breakthrough in the objective of correctly evaluating defending was finally achieved. Now that the breakthrough has come, I am more certain than ever that RPR is the best overall rating system in existence, and that it is now roughly as good as it will ever or can ever be.<br />
<br />
HDA is a statistically valid way to rate the hidden defending of players, that is, what they do to prevent scores other than rebounding, blocks, steals, and fouls, which were always included in RPR. This would include man to man defending, zone defending, pick and roll defending, defensive recognition, and defensive rotation.<br />
<br />
Although the technique used had to be indirect and subject to a very small amount of statistical error, it validly awards the better defenders with bigger RPR bonuses. It has been validated by comparing results obtained with the player defensive efficiency ratings shown on three different "advanced basketball statistics" web sites. HDA results were shown to be highly correlated with those efficiency ratings.<br />
<br />
Where there are small differences, HDA is better: because of the correction for team defense bias, because HDA uses simple, bedrock statistical theory rather than involved formulas resting on assumptions, and for other lesser reasons.<br />
<br />
<span style="color:#cc9933;">USE OF BASIC STATISTICAL SAMPLING THEORY</span><br />
What we are doing is using an indirect and inexact, yet accurate and statistically valid, way to discover who the better defenders are. No two players are out on the court for exactly the same minutes. For any player, what the other players on the court do defensively while he is out there is a very large factor determining his points per minute allowed. But when you look at many, many hundreds of minutes, what the individual player himself does, or does not do, defensively will eventually show up in that particular player's points allowed per minute statistic.<br />
<br />
In other words, what any individual player does defensively has to show up, sooner or later, as a differentiation of his points allowed per minute from other players'. As the number of minutes rises above 500, then 1,000, and then, for many players, above 2,000 and even 3,000 for a regular season, what a particular player does or does not do defensively is shown more and more exactly by the points allowed per minute number. This is very basic statistical sampling theory in operation, and statistical sampling theory is an easy to understand, bedrock part of statistics.<br />
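To make the sampling-theory argument concrete, here is a minimal Python sketch showing how the standard error of a points-allowed-per-minute estimate shrinks as minutes accumulate. The per-minute spread value used is an assumed illustration, not a figure from this article.<br />

```python
import math

# Assumed per-minute standard deviation of opponent scoring outcomes.
# This is an illustrative number, not a figure from the article.
SIGMA_PER_MINUTE = 1.5

def per_minute_standard_error(sigma: float, minutes: int) -> float:
    """Standard error of a per-minute rate estimated from `minutes` of play.

    Basic sampling theory: the error shrinks with the square root of the
    sample size, which is why large minute totals make the points allowed
    per minute statistic converge on a player's true defending level.
    """
    return sigma / math.sqrt(minutes)

for minutes in (300, 750, 2000):
    print(minutes, round(per_minute_standard_error(SIGMA_PER_MINUTE, minutes), 4))
```

Whatever the true per-minute spread is, the square-root relationship is why the estimates get sharply more reliable between 300 and 750 minutes, as the article's thresholds reflect.<br />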
<br />
Due to the necessity of a large sample of minutes, we will not do defending estimates for any player who has played fewer than 300 minutes. Quality of defending estimates will be slightly less accurate for players who have played between 300 and about 600 minutes than for players who have played more than 600 minutes. We believe that the estimates are going to be extremely accurate for all players who have played 750 minutes or more. The idea is relatively simple: as the number of hundreds of minutes played goes up, the accuracy of this system improves, to the point where it gives you the same information you would have if you knew exactly how many possessions of the other team each player ruined with his defending.<br />
<br />
For your information, after adjustments for pace, all players allow between 1.87 and 2.26 points per minute; most allow between 1.96 and 2.17. The overall NBA average is about 2.06 points per minute allowed.<br />
<br />
<span style="color:#cc9933;">A REMINDER: NOTHING IS HIDDEN HERE</span><br />
Unlike most "advanced statistics" that are published on the internet or in print, we give you all the details about how we do ours, so that you can evaluate the evaluations, so to speak.<br />
<br />
<span style="color:#cc9933;">HOW TO REVEAL HIDDEN DEFENDING IN FOUR STEPS<br />
STEP ONE: CALCULATION OF RAW POINTS GIVEN UP PER MINUTE</span><br />
Where do we start to discover what is hidden? We keep it as simple and yet as accurate as possible. We use the most official and therefore presumably the most reliable data as the building blocks for rating the defense of NBA players. We start with the player minutes and points scored by the other team while the player was on the court that are shown in the plus/minus statistical section at NBA.com.<br />
<br />
There are no value judgments made regarding a player's defending style or, for that matter, regarding a team's defending style. We don't care about style. Using points allowed per minute is looking at results, nothing more and nothing less.<br />
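Step One can be sketched in a few lines of Python. The 300-minute cutoff is the one stated above, while the function name and the sample numbers are this sketch's own.<br />

```python
MINIMUM_MINUTES = 300  # below this the sample is too small to rate (per the guide)

def raw_points_allowed_per_minute(opponent_points: float, minutes: float) -> float:
    """Step One: points scored by the opponents while the player was on the
    court (from the plus/minus data at NBA.com), divided by his minutes."""
    if minutes < MINIMUM_MINUTES:
        raise ValueError("fewer than 300 minutes: no defending estimate is made")
    return opponent_points / minutes

# Hypothetical player: opponents scored 2,472 points over his 1,200 minutes.
rate = raw_points_allowed_per_minute(2472, 1200)
print(round(rate, 2))  # 2.06, right at the League average rate quoted above
```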
<br />
<span style="color:#cc9933;">STEP TWO: THE PACE ADJUSTMENT</span><br />
After simply dividing points allowed by minutes on the court, we adjust (or standardize, or correct) that rate for the relative pace of the team. Pace is the average number of possessions per game. The faster the pace, the greater the number of possessions per game. Relative pace is average League pace divided by the team's pace. For your information, the average League pace in 2009-10 was 92.7 possessions per game. Fast paced teams will have pace adjustments of slightly less than 1 and slow pace teams will have pace adjustments of slightly greater than 1.<br />
<br />
Then we simply multiply each player's raw points allowed per minute played by his team's pace adjustment.<br />
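A minimal sketch of Step Two, using the 92.7 possessions-per-game League average quoted above; the example raw rate and team pace of 96 are hypothetical.<br />

```python
LEAGUE_AVERAGE_PACE = 92.7  # 2009-10 League average possessions per game

def pace_adjusted_rate(raw_rate: float, team_pace: float) -> float:
    """Step Two: multiply the raw points allowed per minute by relative pace
    (League average pace divided by the team's pace). Fast paced teams get a
    factor slightly below 1, slow paced teams slightly above 1."""
    return raw_rate * (LEAGUE_AVERAGE_PACE / team_pace)

# A player allowing a raw 2.10 points per minute on a fast paced team
# (96 possessions per game, a hypothetical pace):
print(round(pace_adjusted_rate(2.10, 96.0), 3))  # 2.028
```

Note that a player on a team playing exactly League-average pace is left unchanged, which is the point of the standardization.<br />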
<br />
It would be grossly unfair to compare the rate of points allowed by a player on a fast paced team to that of a player on a slow paced team. The player on the fast paced team automatically gives up more points per minute because there are more possessions in a fast paced team's games and, therefore, more points scored by the opponents. In other words, players who are on teams with faster paces give up more points per minute through no fault of their own.<br />
<br />
Similarly, players who are on teams with less efficient defenses give up more points per minute, regardless of how well they defend, everything else held constant. You cannot fairly compare players on two or more teams with different paces and different team defense qualities unless you standardize, or in other words control, for those differences for all NBA players. The correction for pace has just been described. The correction for team defensive efficiency turned out to be a big problem that was not largely solved until May 2010. See below for how the correction for team defensive efficiency is made.<br />
<br />
<span style="color:#cc9933;">STEPS THREE AND FOUR: CONVERSION OF PACE ADJUSTED POINTS GIVEN UP PER MINUTE TO A SCALE APPROPRIATE FOR REAL PLAYER RATINGS WHILE SIMULTANEOUSLY CORRECTING FOR TEAM DEFENSE BIAS</span><br />
We need to translate the pace-adjusted points allowed per minute into numerical terms that are the most useful with respect to RPR. We also need to correct or standardize, as well as we can, for very different team defense qualities. Before we describe how we accomplish both of these objectives at the same time, let's backtrack a little for a brief history...<br />
<br />
<span style="color:#cc9933;">A BRIEF HISTORY OF THE HIDDEN DEFENDING ADJUSTMENT</span><br />
Beginning in January 2009, the Hidden Defending Adjustment (HDA) was included in Real Player Ratings after extensive research and development. However, the early versions of HDA did not correctly or completely solve the problem of comparability of ratings among players on different teams, so they were replaced in May 2010 by a version that appears to be accurate enough to be the permanent version, subject in the future to only relatively minor tweaking.<br />
<br />
HDA was apparently the first ever effort to rate the defensive efforts of players that remain hidden unless you watch all of a player's games, because they are not scored or tracked by scorekeepers. The basic problem, of course, is that much of what players do defensively cannot be and is not tracked by scorekeepers and box scores.<br />
<br />
As of June 8, 2009, the mechanics of the HDA were slightly changed to increase accuracy.<br />
<br />
As of November 14, 2009, the HDA was upgraded, on the average, from about 40% of the overall defensive sub rating of players to about 45%. Furthermore, as of November 14, 2009, the overall defensive sub rating was recalibrated so that it would now be about 45% of the overall RPR, versus about 42% in 2008-09. The offensive sub rating was recalibrated so that it would now on average constitute about 55% of the overall RPR, versus about 58% in 2008-09.<br />
<br />
<span style="color:#cc9933;">MAY 2010 REFORMULATION OF THE HIDDEN DEFENDING ADJUSTMENT</span><br />
A fairly major problem was discovered in March 2010: the HDA was substantially (though not overwhelmingly) biased against players playing for the best defensive teams, and it was similarly biased in favor of the players playing for the worst defensive teams. The problem was that the method at that time was to standardize for both pace and the relative team defensive efficiency by blunt multiplication of the raw points allowed per minute by those factors for teams. Team defensive efficiency is the number of points surrendered per 100 possessions, and relative team defensive efficiency is League average defensive efficiency divided by a team's defensive efficiency.<br />
<br />
What we used to do is multiply the relative defensive efficiency by the raw points scored by opponents when a player is on the court (which is the raw starting point for HDA). The objective of standardizing or correcting for team defense was to prevent poor defenders on good defensive teams from getting too high a rating and to prevent good defenders on bad defensive teams from getting too low a rating.<br />
<br />
When you do this full standardization for relative team defensive efficiency, however, it turns out that you "overshoot the mark": you unfairly and excessively shrink the HDAs of the better defenders on the better defending teams. And vice versa: you unfairly and excessively magnify the HDAs of the lesser defenders on the poor defending teams. So in solving one set of problems we created another set of problems.<br />
<br />
The solution was to modify the use of the (relative) team defensive efficiencies and to split the difference between the biases. In other words we are compromising between not adjusting for team defense at all and over adjusting. Very small biases remain for which there is no solution:<br />
<br />
--The best defenders on the best defensive teams have slightly lower HDAs than they deserve.<br />
<br />
--The worst defenders on the best defensive teams have slightly higher HDAs than they deserve.<br />
<br />
--The worst defenders on the worst defensive teams have slightly higher HDAs than they deserve.<br />
<br />
--The best defenders on the worst defensive teams have slightly lower HDAs than they deserve.<br />
<br />
The HDA as redesigned in Spring 2010 is considered rock solid because the biases that remain are very small, at the very most .025 in terms of Real Player Rating. In the vast majority of cases, the remaining bias is less than .010 in terms of RPR. For example, a player who has a RPR of .720 might really deserve only a .710 or as much as a .730 if a perfect HDA was possible.<br />
<br />
Instead of using the relative team defensive efficiencies "in full force" by directly multiplying the raw points per minute by the relative defensive efficiencies, we created a huge evaluation grid (chart) with team relative defensive efficiency on one axis, with Hidden Defending Ratings on the other axis, and with points per minute scored by opponents when the player is on the court (adjusted for team pace) arrayed throughout the interior of the chart. By doing this we can compromise between too much standardization for relative team defensive efficiency and no standardization at all.<br />
<br />
In effect we grade every player's hidden defending "on the curve". The better a player's team is defensively, the lower the points per minute the opponents score while the player is on the court for any given Hidden Defending Rating. For example, let's say a player gives up 2.01 points per minute (adjusted for team pace) while he is on the court. If that player is on one of the best defensive teams, the chart shows that his Hidden Defending Rating shall be .174. But if that player is on one of the worst defensive teams, the chart shows that his Hidden Defending Rating shall be .258. The player deserves and gets a higher HDA for the same points given up per minute if he is on a lousy defensive team.<br />
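Since the actual evaluation grid is not published, here is a rough Python sketch of one row of it, anchored to the two endpoints given in the example above (.174 on one of the best defensive teams, .258 on one of the worst, both at a pace-adjusted 2.01 points per minute). The straight-line interpolation for teams in between is purely this sketch's assumption.<br />

```python
def hidden_defending_rating_at_201(team_defense_rank: int, n_teams: int = 30) -> float:
    """One row of the evaluation grid: the Hidden Defending Rating for a
    pace-adjusted 2.01 points allowed per minute.

    The endpoints (.174 for one of the best defensive teams, .258 for one of
    the worst) come from the article's example; the straight-line values in
    between are this sketch's assumption, since the real chart is unpublished.
    """
    best_hdr, worst_hdr = 0.174, 0.258
    position = (team_defense_rank - 1) / (n_teams - 1)  # 0.0 = best defense, 1.0 = worst
    return round(best_hdr + position * (worst_hdr - best_hdr), 3)

print(hidden_defending_rating_at_201(1))   # best defensive team
print(hidden_defending_rating_at_201(30))  # worst defensive team
```

However the interior of the real chart is shaped, the design intent is the same: for identical points allowed per minute, the player on the worse defensive team gets the larger credit, but by less than full multiplication by relative defensive efficiency would give him.<br />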
<br />
But as previously noted the overshooting the mark problem is avoided through the use of the chart as opposed to bluntly multiplying by team relative defensive efficiencies.<br />
<br />
Remember that the chart simultaneously achieves two objectives. First, the very small differences in different players' points allowed per minute are translated into numerical terms that correlate to the role that HDA needs to play within overall RPR. Second, we adjust for the differences between teams' defenses without over adjusting, and we compromise as described above.<br />
<br />
<span style="color:#cc9933;">STEP FIVE: CALCULATION OF REAL PLAYER RATING (ADJUSTED FOR HIDDEN DEFENDING)<br />
</span>The final step is to simply add the hidden defending rating to the Base RPR to yield RPR (Real Player Rating).<br />
<br />
<span style="color:#cc9933;">GUARD AND FORWARD OUTLIER RULES ARE REPEALED</span><br />
With the May 2010 revamping, outlier rules are unnecessary and are repealed.<br />
<br />
<span style="color:#cc9933;">USE OF HIDDEN DEFENDING RATING</span><br />
We now have added in a reasonably good estimate of the value of actions of players that are not even kept track of by scorekeepers! Technically, you could call the final result "Adjusted RPR," but we are trying to avoid that terminology because of how important we think it is to include the hidden defending in the performance measure.<br />
<br />
<span style="color:#cc9933;">SIZE OF THE DEFENDING ADJUSTMENTS</span><br />
Base regular season RPRs for most NBA players range between .400 and 1.000. The total range of possible defending adjustments to the base RPRs is from 0 to .325. In most cases, however, the adjustment will be between .075 and .250.<br />
<br />
<span style="color:#cc9933;">THE DEFENDING SUB RATING: PUTTING THE HIDDEN AND THE UNHIDDEN TOGETHER</span><br />
Aside from the Hidden Defending Rating we can find out how well each player does in terms of unhidden or scored defending, can't we? Of course we can.<br />
<br />
Unhidden, or tracked, defending is defensive rebounding plus steals plus blocks minus personal fouls, calibrated according to the usual RPR factors. If we extract the combination of those four out of the same counts that underlie the RPR as a whole, and use the usual factors, we get what we are going to call the Scored Defending Production. This could also be thought of as Tracked Defending Production if you prefer. Then if we divide this by minutes, we have a Scored (or Tracked) Defending Rating.<br />
<br />
Finally, if we combine Hidden Defending Rating (HDR) with Scored Defending Rating (SDR) we can have an Overall Defending Rating (ODR).<br />
<br />
Obviously, the HDR scaling is designed to coordinate correctly with both SDR and with RPR as a whole. All of the coordinations reflect the latest understanding of how basketball games are won and lost. The HDR constitutes about 45% of ODR while SDR constitutes the other 55%. In other words, the value of hidden defending is perceived to be about 45% of the overall value of defending, while the value of scored (unhidden) defending is perceived to be about 55% of the overall value of defending.<br />
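Here is a sketch of how SDR and ODR might be computed. The real RPR weight factors are defined elsewhere in this guide, so the unit weights and the sample stat line below are placeholders.<br />

```python
def scored_defending_rating(def_rebounds: float, steals: float, blocks: float,
                            fouls: float, minutes: float,
                            weights=(1.0, 1.0, 1.0, 1.0)) -> float:
    """SDR: (weighted defensive rebounds + steals + blocks - personal fouls)
    divided by minutes. The real RPR weight factors are given in "The Formula"
    section; the unit weights here are placeholders."""
    w_reb, w_stl, w_blk, w_pf = weights
    production = w_reb * def_rebounds + w_stl * steals + w_blk * blocks - w_pf * fouls
    return production / minutes

def overall_defending_rating(hidden_dr: float, scored_dr: float) -> float:
    """ODR = HDR + SDR; the HDR scale is built so that, league-wide, hidden
    defending averages about 45% of the total and scored defending about 55%."""
    return hidden_dr + scored_dr

# Hypothetical season line: 400 defensive rebounds, 80 steals, 60 blocks,
# 150 fouls over 2,000 minutes, plus an HDR of .190 from the grid.
sdr = scored_defending_rating(400, 80, 60, 150, 2000)
print(round(overall_defending_rating(0.190, sdr), 3))  # 0.385
```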
<br />
There appear to be many coaches and not a few hardcore basketball fans who believe that hidden defending is actually more important than scored defending, but I am never going to agree with that. Although hidden defending is important, and plausibly almost as important as tracked defending, I think it cannot be more important than that. Hidden defending is like quicksand: a substantial minority of basketball people tend to get carried away when estimating its importance, and then become more and more trapped by their error in terms of how they look at basketball, or in terms of how they coach their team if they are coaching.<br />
<br />
<span style="color:#cc9933;">FORWARDS AND CENTERS WILL GENERALLY HAVE SUBSTANTIALLY HIGHER DEFENDING RATINGS</span><br />
Due to having primary responsibility for defense of the paint and for rebounding, centers and forwards are going to inevitably have higher defensive ratings than will guards. Along with much greater opportunity for rebounds and blocks, centers and forwards also have more opportunity for such hidden defending actions as good man to man defending and correct rotations than do guards. Guards out on the perimeter generally should not and do not man to man defend as closely as do interior defenders, due to the well known guideline that it is quite foolish to foul a jump shooter outside of the paint.<br />
<br />
<span style="color:#cc9933;">THE OFFENSIVE SUB RATING</span><br />
The Offensive Sub Rating is all tracked actions other than the defensive ones (defensive rebounding, steals, blocks, and personal fouls) combined together using the RPR weights, divided by minutes. In other words, it is Total Offensive Production divided by minutes. For the list of all tracked actions and the weight factors assigned to each, see the section titled "The Formula" above.<br />
<br />
<span style="color:#cc9933;">THE BEST GUARDS WILL HAVE THE HIGHEST OFFENSIVE SUB RATINGS</span><br />
The very best guards in basketball are ones who, although they are not afraid to drive to the hoop from time to time, are able to make outside shots at a good rate. Also, guards in general, and especially point guards, are usually primarily responsible for making assists. These two are among the several reasons why the better guards in pro basketball will have the highest offensive sub ratings in the League.<br />
<br />
On the other hand, some of the most valuable players in the NBA are centers and forwards who are great defenders and efficient inside scorers at the same time. Even more unusual and probably for that reason more valuable is a forward who is (a) a great inside defender (b) a great inside scorer and (c) someone who can hit jump shots, perhaps even including threes, from outside the paint. Lamar Odom is an example of this kind of extremely valuable player.<br />
<br />
Some of these big men will have offensive sub ratings that exceed those of the lesser skilled shooting guards and even those of some of the less skilled point guards.<br />
<br />
[End of Defensive and Offensive Sub Ratings Section.]<br />
<br />
<span style="color:#ff6600;">======== SUMMARY OF PRIMARY FORMULAS SECTION =================</span><br />
Real Player Rating or RPR = (All tracked or scored actions weighted according to best available analysis of importance / minutes) + Hidden Defending Rating<br />
<br />
Real Player Production or RPP = Total Offensive Production + Total Defensive Production. (All tracked or scored actions weighted according to best available analysis of importance.)<br />
<br />
Offensive Sub Rating = Total Scored or Tracked Offensive Production / Minutes<br />
<br />
Defensive Sub Rating = (Total Scored or Tracked Defensive Production / Minutes) + Hidden Defending Rating<br />
<br />
Step One for Hidden Defending Adjustment:<br />
Points Scored by Opponents While Player was on the Court / Minutes Played by Player<br />
<br />
Step Two for Hidden Defending Adjustment:<br />
Result of Step One * Relative Pace Adjustment (League Average Pace Divided by the Team's Pace)<br /><br />[Historical and Non-Current] User Guide for Real Coach Ratings, May 2010<br />IMPORTANT NOTICE: THIS IS A NON CURRENT, LEGACY USER GUIDE THAT WILL EVENTUALLY BE DELETED. <a href="http://nuggets1reference.blogspot.com/2011/01/user-guide-for-real-coach-ratings-as-of.html">AN UPDATED AND CURRENT GUIDE IS LOCATED HERE</a>.<br /><br /><span style="color:#cc6600;">INTRODUCTION</span><br />I am proud and pleased to present what is probably the world's first serious effort to accurately rate and rank all of the current NBA head coaches. The first edition of these annual ratings was published in October 2008. The second edition was published (slightly late) in early December 2009.<br /><br />Why should the coaches hide behind a black curtain? Concerning coaches, there is virtually a total lack of the kind of statistical comparing and contrasting that goes on with players 24/7. I for one think it is way overdue that coaches be fairly and systematically compared and contrasted.<br /><br />I can pretty much guarantee you that no one has ever, even with the capabilities created by the Internet age, put in as much effort and thought as I have into fairly comparing NBA coaches with widely different lengths of time spent in professional head coaching. And this system CAN be used in other Leagues, other countries, and on other planets. If there are any other basketball planets, that is!<br /><br />For convenience, this Guide is divided into main sections and subsections.
The main sections are:<br /><br />--Mechanics of Basic Real Coach Ratings<br />--Usage of Basic Real Coach Ratings<br />--Mechanics of Advanced Coach Ratings<br />--Usage of Advanced Coach Ratings<br />--Cautions Regarding Basic and Advanced Real Coach Ratings<br /><br />Within each section, subsections are in caps as shown.<br /><br /><span style="color:#cc6600;">======== MECHANICS OF BASIC REAL COACH RATINGS =========</span><br /><br /><span style="color:#cc6600;">POSITIVE FACTORS THAT AFFECT REAL COACH RATINGS</span><br />1. <em>Number of Regular Season Games Coached: The Experience Factor</em>:<br />One point is given for each regular season game coached up to 600 games, which is almost 7 1/2 seasons' worth of games. If a Coach has not learned just about everything he needs to by this point, he most likely never will, so the award for experience is sharply reduced for all games coached beyond 600. 0.25 points (1/4 of a point) are given for each of games 601 through 1,000. Nothing at all is given for any games coached beyond 1,000, because if a coach has not learned everything he can learn after 1,000 games, he is never going to learn it.<br /><br />What about rookie and near rookie coaches? Just because they have never coached in the NBA, should their experience rating be zero? No, I don't believe so. They either have substantial coaching experience in other Leagues, or they were extremely talented and/or intelligent players, or both, or else they would not have been hired to be a head Coach in the NBA. So any coach who has coached fewer than 200 NBA games is given exactly 200 points for experience. Rookie coaches therefore start out with Real Coach Ratings of 200 and go up or down from there.<br /><br />2. <em>Number of Playoff Season Games Coached: the Playoff Experience Factor</em>:<br />Four points are awarded for every playoff game coached (regardless of result). The limit is going to be 300 such games.
Probably no one will ever reach the limit except for Phil Jackson. He exactly reached 300 playoff games coached after he won his 10th Ring in June 2009. So Jackson will fail to get any more playoff experience points when he coaches more playoff games in 2010. Certainly by June of 2009, Phil Jackson already knew as much as he will ever know about winning NBA playoff games.<br /><br />3. <em>Number of Games Coached With Current Team</em>:<br />This is a supplementary experience score which most benefits coaches who have gone the longest without being fired by their current teams. The points given are 0.25 (1/4 of a point) for all games coached with the team the Coach is currently working for.<br /><br />The one side of the coin regarding this is that the coach must be doing what the organization wants to avoid being fired, and he can't be a total failure basketball wise, so starting with those things he deserves credit in proportion to how long he has kept his post. The other side of the coin is that the more experience a Coach has with a particular team, the more valuable he is to that franchise, because he knows everybody and everything concerned with the franchise better and better with each passing year. Generally speaking, the more successive games a Coach has coached with the same team, the more effectively and efficiently he can help the team squeeze out wins that would otherwise be losses.<br /><br />Jerry Sloan, who coming in to 2009-10 had coached a mind boggling 1,668 games for the Utah Jazz, is the ultimate example of a Coach who due to his many years with the same team is going to be more effective and efficient than he would be if he had just switched to a different team. Due partly to this factor, do not be surprised if the Jazz become a losing team shortly after Sloan finally retires.<br /><br />Another name for this factor might be "franchise specific experience." 
For 2009-10 the Washington Wizards hired a new head Coach, Flip Saunders, who has a lot of prior experience with other teams and has a relatively high rating. But he is brand new to the Wizards, so be careful not to expect miracles or even to assume that his coaching is going to be as good as it has been in the past from the get-go. Look instead for the Wizards to get a little better as the season goes along, and in the coming years if Saunders remains the coach, because Saunders needs time to merge his skills and abilities with the specific factors involved in making the Wizards a winning team.<br /><br />4. <em>Regular Season Wins</em><br />Four points are assigned per regular season win.<br /><br />5. <em>Playoff Wins</em>:<br />20 points are assigned per playoff win. Very slightly more than half the teams make the playoffs in the current NBA: 16 out of 30 teams. Theoretically, unless he is stuck with a truly lousy roster, any good coach can win a lot of regular season games and get his team into the playoffs. Plus, any coach at all, including a bad one, can squeak a very good or great team into the playoffs. For a good coach, merely getting into the playoffs is really not much of an accomplishment at all.<br /><br />Many, many owners, managers, and fans do not seem to understand this, but the only thing that really matters with regard to coaching is what happens in the playoffs. Only the truly good coaches can win in the playoffs. The playoffs are where the wheat is separated from the chaff. In the NBA, the regular season is quite honestly nothing more than the preseason for the "playoff season," which is the season that really matters when all is said and done.<br /><br />Playoff games are generally more intense in all respects: individual players' efforts, team play as a whole, and coaching efforts are all ramped up.<br /><br />For all of these reasons, it is necessary to factor playoff games as being worth five times as much as regular season games.
So for both wins and losses, playoff games count five times as much as regular season games do.<br /><br />6. <em>Championships</em><br />30 points are added for each winning Championship appearance. (That is 30 points regardless of how many games the Championship consisted of.) Since Championships average about 6 games, this is roughly equivalent to adding five experience points for each Championship game coached in the winning effort. Counting the four points every coach gets for experience for every playoff game, the total experience points for each Championship game (where the Championship is won) come to approximately nine.<br /><br />12 points are added for each Championship appearance where the Coach lost in the Championship. (That is 12 points regardless of how many games the Championship consisted of.) Counting the four points every coach gets for experience for every playoff game, the total experience points for each Championship game (losing effort) come to approximately six.<br /><br /><span style="color:#cc6600;">NEGATIVE FACTORS THAT AFFECT REAL COACH RATINGS</span><br />1. <em>Regular Season Losses:</em><br />5.75 points are charged for each regular season loss.<br /><br />2. <em>Playoff Losses:</em><br />28.75 points are charged for each playoff loss.<br /><br />Now there will be some who leap out of their seats and say "this guy is off his rocker" when they see that the penalty for losing a playoff game is 28.75 points while the award for winning a regular season game is four points. I can assure you, ye of little faith, that I know exactly what I am doing and that this is either precisely correct or, possibly, the playoff loss penalty should be even greater. I have already explained why playoff games must be valued at least five times the valuation put on regular season games.
A regular season loss is 5.75 points, and 5.75 times 5 is 28.75.<br /><br />Now consider the true underlying net positive and negative scores for the four types of games and results, which you get by combining the experience award and the winning or losing number:<br /><br /><span style="color:#cc6600;">TRUE NET SCORES COMBINING EXPERIENCE AND WIN / LOSS SCORES TOGETHER</span><br /><em>Regular Season Win True Net Score</em>: 5 Points: 4 points for the win and 1 point for the experience. But it is 4.25 points for coaches (for new games) who have between 600 and 1,000 games coached since they get only .25 points for experience. And it is just 4 points for coaches (for new games) with more than 1,000 games coached since they don't get any further points for experience.<br /><br /><em>Regular Season Loss True Net Score</em>: Minus 4.75 Points: minus 5.75 points for the loss plus 1 point for the experience. But it is minus 5.5 points (for new games) for coaches who have between 600 and 1,000 games coached since they get only .25 point for experience. And it is minus 5.75 points (for new games) for coaches with more than 1,000 games coached since they don't get any further points for experience.<br /><br />Can you see what I think is the genius of this system? The more experienced coaches get experience points that obviously are not available to less experienced coaches. To partially or in some cases completely offset what would otherwise be an unfair advantage in the rating system, the more experienced coaches are expected to do somewhat better in winning and losing in order to achieve a net positive from their winning and losing toward their ratings. This is a primary mechanism used here that tends to even the playing field between coaches of widely differing amounts of experience, without being unfair to any type of coach. 
This whole project would have been largely a waste of time if I didn't have a good and fair way of varying the treatment of coaches with radically different amounts of experience.<br /><br />Now here are the true net scores for playoff games:<br /><br /><em>Playoff Win True Net Score</em>: 24 points: 20 for the win and 4 for the experience.<br /><br /><em>Playoff Loss True Net Score</em>: Minus 24.75 points: minus 28.75 for the loss plus 4 for the experience.<br /><br /><span style="color:#cc6600;">PLAYOFF COACH SUB RATING</span><br />Mechanically, the playoff sub rating is simply the rating you get when you factor in only the playoffs-related factors. In the spreadsheet of the report, the Playoff Sub Ratings are just below the overall Ratings.<br /><br />Two of the three sub ratings from 2008 are discontinued beginning 2009. Readers can now scan the raw data and get at least as much information as they could from the discontinued sub ratings. The only sub rating we are still publishing is the playoffs sub rating. (Who would have thought we would key in on that one, laugh out loud.)<br /><br />In the December 2009 Ratings, George Karl is no longer at the very bottom of the playoffs sub ratings; he is ahead of Don Nelson thanks to Karl's Nuggets' 10-6 playoffs campaign in 2009. Golden State Warriors Coach Don Nelson is now dead last in the playoffs sub ratings. However, the deep hole that Karl dug in earlier years was so deep that the Nuggets' miraculous 2009 playoffs campaign was not enough to overall lift him very much in the playoffs sub rating. He is still showing up as a very, very poor playoffs coach, though Karl's rating is not as extremely poor as it was a year ago.<br /><br />As of May 2010 Karl has now won 74 playoff games and lost 93 of them. 
Prior to the 2009 playoffs, Karl had won just 62 playoff games and lost 83.<br /><br /><span style="color:#cc6600;">======= USAGE OF BASIC REAL COACH RATINGS ========</span><br /><br /><span style="color:#cc6600;">HOW TO INTERPRET DIFFERENCES IN RATINGS</span><br />We will use Phil Jackson versus George Karl from the 2009 Real Coach Ratings, published in early December 2009. You can see that the most cautious rating system we can produce without being laughed out of the room (the one most in George Karl’s favor) shows that Los Angeles Lakers Coach Phil Jackson has a rating about ten times that of Denver Nuggets Coach George Karl.<br /><br />You can interpret this in either of two ways. The first way to look at this is similar to the way that the Real Team Ratings are interpreted: it is about ten times more likely that Phil Jackson is the better coach and will defeat George Karl in a playoff series than the other way around, assuming the raw talent and injury situations of their teams are about the same. Given equal teams, Phil Jackson is going to beat George Karl unless something really rare is going on.<br /><br />The other way to interpret this is to think of the differential between the two ratings as an amount which translates into an actual real life coaching difference. Then you plug that difference in with the other differences that determine who wins a playoff series. If the coaching difference and/or the size of the coaching component is big enough, it will result in the lesser skilled team winning the series if they have the better coach.<br /><br />Even though we are unable at this time to properly estimate the actual size of the coaching factor in a playoff series, we know it is NOT negligible, trivial, or even very small.
Coaching may be a small rather than a "middle-sized" factor (we don't yet know exactly how big a factor it is). But if the players on the two teams are evenly matched, then even a small difference in coaching could determine the series, and a large difference between coaches would definitely determine who wins a series between teams with equal players.<br /><br />In any event, the difference between Phil Jackson and George Karl is so large that even if the coaching impact on playoff games is at the low end of the possible range, George Karl would still have to have a much better team to be able to defeat Phil Jackson in a playoff series.<br /><br />The same applies to Phil Jackson versus Boston Celtics Coach Doc Rivers. We think right now (November 2009) that the 2010 Championship is about a toss-up between the Celtics and the Lakers. Phil Jackson is such a great coach that he is clearly better than even good and very good coaches such as Doc Rivers. Were it not for the Lakers' coaching advantage over the Celtics, the Celtics would have to be favored to win the Ring in 2010 by maybe 4 games to 2, since the Celtics are clearly better than the Lakers in terms of raw skill and raw potential.<br /><br /><span style="color:#cc6600;">CERTAIN VETERAN PLAYERS CAN COACH THEMSELVES TO A LARGE EXTENT</span><br />Always keep in mind that older, more veteran teams can coach themselves to one extent or another, particularly if the roster is both highly skilled and highly experienced. It doesn't matter who comes up with the winning schemes and patterns; what matters is that someone does.
Younger teams, however, always need a good coaching staff to make headway in the playoffs.<br /><br />Quest for the Ring has gone on record claiming that the 2007-08 Champion Boston Celtics are a good example of a team that could coach itself well to a large extent.<br /><br /><span style="color:#cc6600;">COACH OBJECTIVE #1: TO AVOID BEING FIRED</span><br />Calculations indicate that the average Real Coach Rating is currently 639 and the median is about 200. So the objective of all rookie coaches must be to increase their starting rating of 200 toward the average of 639 as soon as they can do so. You can think of the range between 200 and 600 as "the proving ground" or even the "make it or break it range" for coaches. Most coaches who drop below zero instead of going up from 200 during their first 3-6 years will be bounced out of the NBA.<br /><br />Coaches who have ratings below 200 for more than about five straight years, and especially coaches who have ratings below zero for about five straight years, should be fired unless the managers and owners involved are sure that the coach has not had competitive players to work with, or are sure that the coach is getting better at his job, or unless there is some other unusual mitigating factor.<br /><br />Coaches who maintain their jobs with Real Coach Ratings below 200, and especially with Real Coach Ratings below zero, are frequently going to be men who have very cordial relations with the managers and owners. In other words, they are being kept on the payroll because the managers and/or the owners involved personally like the coach in question enough to brush aside any concerns about whether that coach is doing a good enough job for their team. These dubious coaches are given the benefit of the doubt, in other words, or sort of a free pass.<br /><br />It is also true that some managers and owners live in fear that they might go from bad to worse if they exchange one coach for another.
They simply do not have enough courage to strike out and try a rookie or a near-rookie coach, or to pick up a coach who has been fired by another team but who deserves a second chance.<br /><br />The key is balance. On the one hand you don't want to be stuck out of caution or fear with a veteran coach who is simply not among the best coaches. On the other hand, you can't just strike out and pick anyone who has never coached an NBA team before but seems like he might be a good coach. Rather, you have to do a lot of homework and research. You have to spend a lot of time and make every effort to find that one in a hundred candidate who will actually become one of the better and maybe even one of the best NBA coaches.<br /><br /><span style="color:#cc6600;">NEVER EVER HIRE A COACH WITH A POOR PLAYOFFS RECORD IF YOU WANT TO WIN A RING</span><br />The Nuggets hired Karl despite the fact that he had a poor playoffs record and rating. When the Nuggets hired Karl, his playoffs record was 59-67. While coaching the Nuggets, Karl's playoffs record is 15-26 as of May 2010. Percentage-wise, Karl's playoff record has gotten worse while he has coached the Nuggets, not better (despite 2009).<br /><br />The Nuggets were wrong to hire Karl, and they are also wrong not to fire him unless he wins the NBA Championship within the next year or two. The Nuggets, by the way, were talented enough in 2008, possibly in 2009, and again in 2010 to win a Championship if the coaching had been top notch. Coaches with losing playoff records are fired by all truly serious NBA franchises these days regardless of regular season records. Karl had a losing playoffs record when he was hired and it has only gotten worse since.<br /><br />Why did the Nuggets hire Karl? I can only speculate.
The Nuggets either knew in advance they would never win the Quest with Karl and hired him anyway, or they figured incorrectly that Karl's playoff record was trumped by better aspects of Karl's record, or they decided that Karl's playoff record could be excused for irrational reasons, or there was some other unknown, off the wall reason for hiring Mr. Karl.<br /><br />The most favored specific theory regarding why Karl was hired is that the Nuggets decided roughly in 2002 to go for a certain kind of player who can be a major bargain because other teams generally avoid that kind of player. The Nuggets decided to go for more volatile players who might need to be contained by a crack the whip type of coach so that they don't "fly off the reservation" and harm team cohesion and morale. Karl is in fact a good coach if you have a bunch of players more emotional and more volatile than average, because for one thing he will not hesitate to bench even players who get enraged about this, that, or the other thing. He will bench anyone at any time and for any reason, good or not.<br /><br />Whatever the Nuggets' management thought, they thought wrongly. If you are a team owner or manager, you can not afford to take any risk or to make any benign assumptions or weak rationalizations when you choose a head coach. If a coach has a poor playoffs record, you have no choice but to not hire that coach if you are serious about winning the Quest. There are going to be coaches who are good enough to do well in the regular season but not good enough to prevail in the playoffs. You should not be the goober who hires one of them, obviously. Let some other franchise/team get stuck in the mud with that type of coach.<br /><br />I have to be blunt here to make sure I am understood. You should never, ever do what the Nuggets did if you are serious about winning the Quest. Your coach should have a good record for BOTH regular season and playoffs. 
The playoff record is even more important than the regular season record.<br /><br />Finally, before leaving this crucial subject, I am going to state that given the choice between, on the one hand, a younger coach who is considered to be a good or great up and coming coach but who has no NBA playoff record at all and not much of a regular season one, and, on the other hand, a long-term veteran coach who has a decent, good, or even great regular season record but a poor, losing playoffs record, <em>you are better off choosing the young coach with no playoff record</em>.<br /><br />In point blank and clear summary, hiring a coach with a bad playoffs record is one of the worst things you can do if you want to win the Quest.<br /><br /><span style="color:#cc6600;">======= MECHANICS OF ADVANCED REAL COACH RATINGS =======</span><br /><br /><span style="color:#cc6600;">THE ADVANCED SYSTEM IMPROVES THE PLAYOFF SCORES OF THE BASIC SYSTEM</span><br />The Advanced system is added on to the basic system. All of the mechanics of the basic system shown above carry over to the advanced system with one exception: how playoff wins and playoff losses are dealt with. The advanced system replaces the playoff win and loss awards and penalties of the basic system with a more sophisticated system.<br /><br />In the advanced version, every playoff series is looked at as a unit. We start with four measures: the offensive efficiency of the two teams and the defensive efficiency of the two teams (from the regular season, of course). Efficiency is how many points scored or how many points given up per 100 possessions.
Over the course of the regular season, the thousands of possessions result in precise efficiency numbers where seemingly very small differences are actually big differences between teams.<br /><br />Then we subtract the defensive efficiency from the offensive efficiency to find the net efficiency for each team. Most but not all playoff teams have positive net efficiency numbers and most teams that do not make the playoffs have negative net efficiency numbers.<br /><br />Then we compare the two net efficiencies and whichever team is higher is the favorite. Of course this is true in real life: the team with the better net efficiency beats the other team the vast majority of the time, although when the differences are smaller this is not so certain.<br /><br />The exact difference between the two net efficiencies is crucial, because it determines the likelihood or probability of the favored team winning. The greater the difference in net efficiency, the closer to 100% the probability that the better team will win the series. We have a carefully constructed scale to translate differences in net efficiency to how many games the underdog should win on average in a best of seven game (and a best of five) series. For example, if the difference in net efficiency is 5.0, the underdog will on average win 2.3 games in a best of seven series (with the favored team winning 4 games).<br /><br />Then for each playoff series, we compare the number of games won and lost by the coach versus what the average or standard number of wins and losses are. 
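The net efficiency comparison above can be sketched in a few lines. The team numbers here are hypothetical; the only point on the expected-wins scale that this guide publishes is the one in the example above (a net efficiency difference of 5.0 means the underdog wins 2.3 games on average in a best-of-seven), so that is the only point the sketch uses.

```python
# Net efficiency comparison for a playoff series, per the steps above.
# Team efficiency numbers are hypothetical examples.

def net_efficiency(off_eff, def_eff):
    """Points scored minus points given up per 100 possessions."""
    return off_eff - def_eff

team_a = net_efficiency(110.2, 103.1)  # hypothetical favorite: about +7.1
team_b = net_efficiency(107.5, 105.4)  # hypothetical underdog: about +2.1

favorite = "A" if team_a > team_b else "B"
diff = abs(team_a - team_b)       # about 5.0 in this example
print(favorite, round(diff, 1))   # → A 5.0: underdog expected to win ~2.3 games
```

The full scale translating every net efficiency difference into expected underdog wins is not reproduced in this guide, so only the published 5.0 → 2.3 anchor point is cited here.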
So then the advanced version breaks down games within playoff series results as follows:<br /><br />Underdog team wins as expected 16<br />Underdog team unexpected playoff wins 76<br />Underdog team expected wins not achieved -84<br />Underdog team losses as expected -23<br /><br />Favored team losses as expected -23<br />Favored team unexpected losses -84<br />Favored team fewer losses than expected 76<br />Favored team wins as expected 16<br /><br />Wins by the favored team get 16 points (instead of the 20 they get in the basic system). But unexpected wins, which are extra wins by the underdog team or fewer losses by the favored team, get almost five times as many points: 76. Note that if a coach coaches his team to an upset playoff series win, his award is the difference between the 4 wins it takes to win the series and the number of wins he was “supposed to” get, times 76.<br /><br />Unexpected losses are minus 84 points each and consist of underdog teams winning even fewer games than they were supposed to (and still losing the series) and favored teams losing more games than they were supposed to (but still winning the series). If a favored team loses the whole series, then the penalty is the difference between the four wins the underdog team won and the number of wins the underdog team was supposed to win in the series, times 84.<br /><br />Unexpected wins and losses are rewarded and penalized heavily but not excessively. Unexpected playoff losses are one of the worst things that can happen to a team and a franchise, because they waste the owners’ money, because they partly waste the efforts of a lot of players and managers, and because they make the franchise less likely to attract top free agents. Unexpected playoff losses are a nightmare and the fewer of them you have the better.<br /><br />Note that unexpected playoff losses are in theory supposed to be largely offset by unexpected playoff wins.
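The upset award and upset penalty described above can be sketched as follows. The 2.3 expected-wins figure reuses the example from the net efficiency scale; applying the per-game point values this literally to fractional expected wins is my assumption, not something the guide spells out.

```python
# Sketch of the advanced system's series-level upset award and penalty.
# Assumption (mine): the point values apply literally to the gap
# between actual wins (4, for the series winner) and expected wins.

UNEXPECTED_WIN_POINTS = 76
UNEXPECTED_LOSS_POINTS = 84

def upset_award(expected_underdog_wins):
    """Award to an underdog coach who wins the series (4 wins)."""
    return (4 - expected_underdog_wins) * UNEXPECTED_WIN_POINTS

def upset_penalty(expected_underdog_wins):
    """Penalty to a favored coach whose team loses the whole series."""
    return (4 - expected_underdog_wins) * UNEXPECTED_LOSS_POINTS

# With the scale's example of 2.3 expected underdog wins:
print(round(upset_award(2.3), 1))    # → 129.2
print(round(upset_penalty(2.3), 1))  # → 142.8
```

Note how the penalty for the favored coach (84 per game) is deliberately heavier than the award to the underdog coach (76 per game), consistent with how badly unexpected playoff losses hurt a franchise.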
Most coaches are going to have a series once in a while where their team performs below standard, but these will be mostly offset by that coach’s unexpected playoff wins.<br /><br />This is the most crucial thing you have to keep in mind: the main purpose of the advanced system is, on the downside, to flush out and penalize coaches who have more unexpected playoff losses than unexpected playoff wins. On the upside, the primary purpose of the advanced system is to flush out and to award coaches who have more unexpected playoff wins than unexpected playoff losses.<br /><br />In other words, the main purpose of the Advanced Real Coach Rating system over and above the Basic system is to assign unexpected playoff wins and losses to coaches so that coaches whose methods work better in the playoffs than in the regular season are identified and so that coaches whose methods work worse in the playoffs than in the regular season are identified.<br /><br />Quest for the Ring already knows many of the basketball strategies and tactics that work better in the playoffs than in the regular season, and you do too if you read this site, because we review and illustrate most of them from time to time.<br /><br /><span style="color:#cc6600;">======= USAGE OF ADVANCED REAL COACH RATINGS =======<br /></span>When every playoff series that a coach has ever coached has been evaluated, we will be able to correctly assign that coach to one of the following categories:<br /><br /><span style="color:#cc6600;">FINAL CLASSIFICATION OF COACHES BASED ON ADVANCED REAL COACH RATINGS</span><br />A: 2,000 and more: An excellent, top of the line coach to have if you want to win the Quest for the Ring<br />B: 1,200 to 2,000: A good or maybe a very good coach to have if you want to win the Quest for the Ring<br />C: 500 to 1,200: A decent but probably just mediocre coach to have if you want to win the Quest for the Ring<br />D: 0 to 500: A poor to mediocre at best coach to have if you want to win the Quest for
the Ring<br />E: minus 500 to 0: A very poor coach to have if you want to win the Quest for the Ring; you have only a very, very small chance to win the Ring with this type of Coach.<br />F: minus 500 and less: A terrible, nightmare coach to have if you want to win the Quest for the Ring; you will definitely not win the Quest with a Coach this bad<br /><br />Once the system is fully operational, Quest for the Ring will guarantee that any coaches who are given an F will never, ever win the Quest. If an F coach ever wins the Quest, we will shut down this site and apologize for being grossly wrong, but trust me, that will never happen. Whether we will issue the absolute guarantee for E coaches is under review; suffice it to say for now that E coaches have only a trivial chance of ever winning the Quest.<br /><br />In general, as you might already realize, the lower the grade of the coach, the better the players have to be to win the Quest for the Ring...<br /><br />Coach is an A: Players need to be at least very good<br />Coach is a B: Players need to be at least very, very good<br />Coach is a C: Players need to be extremely good<br />Coach is a D: Players need to be historically good, one of the best teams of all time<br />Coach is an E: Players need to be about the best team of all time.<br />Coach is an F: There is no possible way any set of players can possibly win the Quest<br /><br /><span style="color:#cc6600;">======= CAUTIONS REGARDING BASIC AND ADVANCED REAL COACH RATINGS ========</span><br /><br /><span style="color:#cc6600;">THE WIDELY DIFFERENT AMOUNTS OF EXPERIENCE PROBLEM</span><br />There is one big hurdle (or notable shortcoming if you want to be negative) in the Real Coach Ratings, and we have largely but probably not completely solved the problem as of 2009. This problem originates in the huge discrepancies in the amount of experience between long-term veteran coaches and much younger coaches.
To some extent this makes comparing NBA coaches like trying to compare apples and oranges rather than like trying to compare various apples.<br /><br />In the 2008 User Guide, this was what I had to say about this issue when I tackled it for the first time:<br /><br /><span style="color:#cc6600;">2008 WORK ON THE EXPERIENCE DISCREPANCY PROBLEM</span><br />As I was working on this I often had a sinking feeling that trying to fairly compare coaches with more than 10 years of experience with those with less than 2 years of experience would in the end be impossible. But I persevered and scrapped and fought my way to the goal line and got it done. I achieved all of the balancing that I needed to achieve. Specifically, for example, I kept the points given for experience within reason, while making sure that regular season and playoff losses were penalized to the full extent they should be.<br /><br />You must keep in mind that any coach who has been fired for not winning enough in the regular season, for not winning enough in the playoffs, or for both, and has not been rehired by another team, is not on this list. We don't care about them. The whole idea in multi-billion dollar professional sports is to win more than you lose, and that most obviously and most definitely includes the coaches. So a 50/50 record in either the regular season or in the playoffs is not good enough long term, and coaches who are not better than .500 get fired and not rehired sooner or later, and those who have met that fate already are not on this list.<br /><br />To reflect the reality that coaches who can not win more than they lose are sooner or later going to be fired, and will most likely never advance in the playoffs before they are fired, it is necessary to make sure that the negative number for a loss is bigger than the positive number for a win. But we have to avoid getting carried away.
So when I add in the amount given for experience, the apparent gap between the award for winning and the penalty for losing is shrunk down to a small amount.<br /><br /><span style="color:#cc6600;">2009 WORK ON THE EXPERIENCE DIFFERENTIAL PROBLEM<br /></span>Notice that in 2008 I said “we have to avoid getting carried away” in the 2008 attempt to solve this problem. Well, it turns out I probably did get a little carried away. The heavily experienced coaches with a lot of losses were being hammered a little bit too much!<br /><br />So the number of points subtracted for losses was slightly reduced for 2009. Regular season losses are now minus 5.75 (instead of minus 7).<br /><br />However, due to another consideration, playoff losses are slightly greater minuses in 2009 than they were in 2008.<br /><br />Where we are right now is that we are in very good shape overall, but out of respect for conservatism we may still have a small part of the experience discrepancy problem left. In a nutshell, we decided to take the risk that the problem is not completely solved so as to avoid being overly harsh toward certain long-term coaches. "First, do no harm..."<br /><br />When all is said and done, everyone, including a bad coach, can possibly improve even after many years of not improving. This fact, which we didn’t allow for last year, is the biggest reason why we tweaked the way we did. Unfortunately, the price for this is the real possibility that the experience discrepancy problem is not completely solved.<br /><br /><span style="color:#cc6600;">SLIGHTLY DIFFERENT REWARDS AND PENALTIES BASED ON EXPERIENCE</span><br />Even after the tweaking, this feature of the system goes a long way toward solving the experience differential problem.
Here is how it works:<br /><br />In the case of all coaches who have coached fewer than 600 games (currently 17 of the 30), a full point is given for every regular season game for the experience factor alone. Since the award for a regular season win is 4 points and the penalty for a regular season loss is minus 5.75 points, these younger, less experienced coaches do slightly better than break even just by achieving a 50/50 regular season record. When you combine the win or loss points with the experience point, a win earns a new coach a total of 5 points, while a loss earns him minus 4.75 points.<br /><br />The new coaches are learning, so the system must be slightly easier on them. They can not be expected to know everything right now that they will know in a year or two or three or four. And if they learn the right things, then they might become the next Phil Jackson or Rick Adelman!<br /><br />Coaches who have coached more than 600 games but fewer than 1,000 games must do a little better than .500 in the regular season to achieve a net positive toward their overall Real Coach Ratings. These coaches get 4.25 points for each regular season win and lose 5.5 points for each loss.<br /><br />The long-term veteran coaches who have coached more than 1,000 games get no experience points at all. So they get 4 points for each regular season win and lose 5.75 points for each regular season loss.<br /><br />For the playoffs, all coaches have the same total (including the four experience points) gain or loss: 24 points for a playoff win, and minus 24.75 points for a playoff loss.<br /><br /><span style="color:#cc6600;">REMAINING EXPERIENCE DISCREPANCY PROBLEM</span><br />The worst of the long-term veteran coaches probably have ratings that are slightly higher than what they really should be.
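The tiered regular season awards and penalties described above can be sketched as a single per-game function. The point values are the ones given in the text; how to treat a coach at exactly 600 or exactly 1,000 games is not specified, so the boundary handling here is my choice.

```python
# Per-game regular season points by experience tier, as described above.
# The win/loss numbers for coaches under 1,000 games fold in the
# experience point(s) they receive per game.

def regular_season_points(games_coached, win):
    """Net points for one regular season game, given career games coached."""
    if games_coached < 600:      # newer coaches: 4 + 1 exp, or -5.75 + 1 exp
        return 5.0 if win else -4.75
    elif games_coached < 1000:   # mid-career coaches: must beat .500
        return 4.25 if win else -5.5
    else:                        # 1,000+ games: no experience points at all
        return 4.0 if win else -5.75

# A new coach going 41-41 comes out slightly ahead, as the text says:
season = 41 * regular_season_points(100, True) + 41 * regular_season_points(100, False)
print(season)  # → 10.25
```

A mid-career coach with the same 41-41 record would net 41 × (4.25 − 5.5) = −51.25, which is why that tier must do a little better than .500 to gain ground.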
If a Coach has received some "lucky breaks" by not being fired after bad losing seasons, and/or after bad losses in the playoffs, and he has over the years now accumulated 1,000 or more regular season games and 100 or more playoff games, his rating will in effect likely still be slightly distorted on the high side relative to the other coaches.<br /><br />This is because the long-time veteran Coach, who could have been fired a long time ago but was not fired, will max out on the experience points, and he will also have a few winning seasons to go with the losing seasons. The sum of the maximum experience points (which is 700 for regular season experience plus four times the number of playoff games) plus any positive net from winning seasons will tend to more than offset all the losses from the year(s) he might have been fired, despite the heavy negatives that losses carry.<br /><br />Another way of thinking about this issue is that, assuming a long-term veteran Coach has a too-high rating due to the above, keep in mind that Coach would not even be in the ratings had he actually been fired. Coaching a professional sports team is about the worst job in existence for job security, since the vast majority of coaches are involuntarily fired.<br /><br />Yet another way of focusing on this problem is realizing that pro basketball coaches are fired or not fired based on different criteria from team to team.<br /><br />We can not simply remove experience from the set of factors, since in virtually every career, the more experience you have, the better you tend to be.
Moreover, even if we did reduce or remove the experience factor, the same problem would still be there in the case of coaches who probably should have been fired but are not, and then fortunately end up coaching very skilled teams in subsequent years, thus piling up wins with those teams.<br /><br />In other words, we have no choice but to proceed as if all coaches face the same criteria as to whether they are fired or not, even though we know that some coaches, especially veteran coaches, are treated much more leniently than others.<br /><br />One other thing to keep in mind about long-term veteran coaches (the ones with more than 1,000 regular season games coached) is that once such a Coach gets older than 60, 65, and then maybe even 70 years old, that Coach's abilities will probably be less than they were when he was younger, whereas almost all coaches with little experience are under the age of 55.<br /><br />For example, Utah Jazz Coach Jerry Sloan is 68 years old on March 28, 2010, so it is possible that he is a little too old now for maximum effectiveness.<br /><br />The bottom line is that there will be a small number of older, veteran coaches whose ratings are misleading on the high side. Unfortunately, we are unable to completely correct for this or to properly estimate the amount of the unavoidable distortion at this time.<br /><br />So we advise you when looking at the ratings to make sure you give the benefit of the doubt to younger coaches who seem to have good potential. The coaches whose ratings are most likely distorted upwards would be, at the moment, in order of the most likely amount of distortion, George Karl, Don Nelson, and Larry Brown. It is plausible, for example, that young Miami Heat Coach Erik Spoelstra is as good a Coach right now as Don Nelson, or better.<br /><br /><span style="color:#cc6600;">PROBABLE DOWNSIDE DISTORTIONS</span><br />The flip side of the above distortion is also going to be a problem sometimes.
If you have a younger coach who has just started out, and he has a bad team to start with (and a lot more new coaches start with bad teams than with good ones), then his rating will be much lower than it will be in future years if he avoids getting fired and in the future gets much better teams to work with.<br /><br />However, it is also very possible that in most cases the worst teams get only the medium and poor coaches, that in other words the really good coaches never have to start out coaching a bad team, so that any downside distortions are small and mostly moot points.<br /><br />Generally speaking, we are still working on a way to make the comparisons between long-time veterans and much younger, newer coaches more valid than they are in the current system. We hope of course to make a breakthrough or two for next October's Report.<br /><br /><span style="color:#cc6600;">BE CAREFUL REGARDING THE VERY LARGE TIME SCALE OF THESE RATINGS</span><br />Keep in mind that each coach is rated using information from every season that he has been a head coach in the NBA. It is very plausible that some of the coaches will currently be substantially better or substantially worse than their overall career ratings indicate.<br /><br />But while I am on this subject, I want to warn you not to make the assumption that all or even most coaches get better as they accumulate more and more experience. There is no empirical evidence I know of to back that sweeping generalization up, nor is it in my view obvious or even likely to be true most or much of the time. It is plausible that coaches do not really improve that much after roughly 5 or 6 years of experience. It is also plausible that some of the most experienced coaches have not completely updated their beliefs and coaching schemes to reflect the current ways of basketball.
They may be hurting their teams a little or even a lot by persisting with strategies and tactics that used to work well years ago but are not working very well in the NBA in 2008.<br /><br /><span style="color:#cc6600;">IF YOU COMPLETELY DISTRUST THE RATINGS</span><br />Even if you distrust the ratings themselves, you can evaluate the raw data yourself because Quest for the Ring beginning in 2009 provides the entire spreadsheet on which the Ratings are calculated.<br /><br /><span style="color:#cc6600;">FUTURE CHANGES TO THE BASIC AND THE ADVANCED REAL COACH RATINGS</span><br />Are the factors set in stone forever and ever? No, but adjustments will be few, far between, and minor in the coming years. Although this is not a perfect system, it is at the very least a very good system. And it is light years ahead of having no system at all with which to fairly compare coaches of radically differing amounts of professional basketball head coach experience.<br /><br /><span style="color: #cc6600;">======= USER GUIDE FOR TEAM GRIDS, APRIL 2010 =======</span><br />The team grid system allows for quick and easy comparisons between teams. It is also the best foundational tool for managing a basketball team. For example, team grids allow managers, coaches, or anyone else to consider changes in players and in playing times that would improve the chances of winning playoff series and regular season games. At the same time, and just as importantly, team grids allow for quick flagging of coaching errors, some of which can be big enough to cost a team a playoff series or maybe a dozen regular season wins.<br />
<br />
A depth chart shows you team policy regarding who starts and who are the backups and in what order for the five positions. The team grid is based on the depth chart style. However, players (other than players acquired during the season; see below) are placed into first squad, second squad, and third squad according to minutes played, not according to the latest ESPN or any other estimation of what the team policy is. Whoever has played the most minutes at a position is shown in the “1st Squad” whether or not that player starts at the position.<br />
<br />
There is a notable exception to the rule for who goes in which squad. If a player has been acquired during the season and he is listed as the starter on the ESPN depth chart, he will be shown as first squad. Similarly, if a player acquired during the season is shown as the first backup to the starter in the depth chart he will be shown as second squad regardless of minutes. In other words, the depth chart prevails over minutes in the case of players acquired by trade during the season.<br />
<br />
Just to the right of the “3rd Squad” you see two grey areas. From left to right, the first one is for players who are probably or definitely out for much or for all of the series for some reason, usually due to injury.<br />
<br />
The second grey shaded area is for players who could play but almost certainly will not play because they played fewer than 300 minutes during the regular season. The 300-minute threshold is the minimum needed for a hidden defending adjustment and therefore is the minimum needed for a player to get a Real Player Rating. It is also being used here as the threshold for determining whether a player was essentially benched for the season. 300 minutes is less than four minutes a game over an 82-game season, which is a very good dividing line for saying whether a player was benched for the season or not. You can get close to 300 minutes with just garbage time, so if you don't play at least 300 minutes, you are basically benched.<br />
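The 300-minute cutoff described above is simple enough to state in code (the function name is mine):

```python
# The 300-minute threshold: below it a player gets no Real Player
# Rating and is treated as essentially benched for the season.

BENCHED_THRESHOLD = 300  # minutes over the regular season
GAMES = 82               # regular season games

def essentially_benched(minutes_played):
    return minutes_played < BENCHED_THRESHOLD

print(round(BENCHED_THRESHOLD / GAMES, 2))  # → 3.66 minutes per game
print(essentially_benched(250))             # → True
```

The per-game figure confirms the claim above: 300 minutes works out to under four minutes a game.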
<br />
<span style="color: #cc6600;">PLAYERS ACQUIRED BY TRADE</span><br />
Players acquired by trade during the season who have played at least 300 minutes for their new team at the time when ratings for that team are done are treated on the grid as if they were on the team the entire season. The rating you see for them is for their new, current team minutes. The previous team rating is considered to be irrelevant for the grid.<br />
<br />
Players acquired by trade during the season who have NOT played at least 300 minutes for their new team are either:<br />
<br />
--Completely ignored and not shown on the grid if they did not play at least 300 minutes for the team they played for earlier in the season (regardless of whether they ever played at least 300 minutes in any year).<br />
<br />
--They are shown as "more or less benched" if they did play at least 300 minutes for the previous team this season but not at least 300 minutes for the new, current team. The rating you see for them in the "more or less benched" column is necessarily their rating on their previous team this season.<br />
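A minimal sketch of these traded-player rules in Python (the function and argument names are our own illustration, and QFTR actually does this inside Excel; this covers only the rating and visibility rules, not the separate depth chart exception for squad assignment):

```python
def grid_placement(min_new_team, min_prev_team, rating_new=None, rating_prev=None):
    """Where a mid-season trade acquisition lands on the Team Grid.

    300 minutes is the cutoff for a Real Player Rating, per the guide.
    """
    if min_new_team >= 300:
        # Treated as if on the team the entire season; new-team rating used.
        return ("on grid", rating_new)
    if min_prev_team >= 300:
        # Shown as "more or less benched" with the previous-team rating.
        return ("more or less benched", rating_prev)
    # Under 300 minutes for both teams this season: not shown at all.
    return ("not shown", None)
```

So a deadline pickup with 450 new-team minutes appears as a normal grid player, while one with only 120 new-team minutes but 800 previous-team minutes appears in the "more or less benched" column with his previous-team rating.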
<br />
<span style="color: #cc6600;">PLAYERS WHO HAVE NEVER PLAYED AT LEAST 300 MINUTES IN ANY SEASON</span><br />These players will not be listed even in the "benched for the season" column, since no rating can be computed for them for any year and since, quite frankly, they are completely irrelevant for the playoff series at hand.<br />
<br />
So players who are listed in the “more or less benched for the season” column are players who played at least 300 minutes during at least one NBA season. The Real Player Rating is shown for those players for the most recent year they played at least 300 minutes. What year that was is shown right next to their rating.<br />
<br />
<span style="color: #cc6600;">TEAM COMPARISONS USING THE GRID</span><br />
First, you can compare specific players for any position. For example, you can see which team had the better starting point guard.<br />
<br />
<span style="color: #cc6600;">COMPARING TEAMS BY POSITION</span><br />
By looking at the “Position Averages” column you can compare the two teams position by position. For each position, only the ratings of the first squad and second squad players are considered, and the first squad player's rating counts twice as much as the second squad player's. In other words, for each position the position average is two times the rating of the first squad player, plus the rating of the second squad player, all divided by three.<br />
<br />
If only one player played 300 minutes or more at a position, a special rule seeks to come up with a reasonable number: 80% of that single player's rating is taken as the position average. The 20% reduction is justified because one or more players from other positions will have to fill out the position that has only one player (unless the single player plays almost the entire game, which is pretty rare). Those other players will generally not be as valuable at the position as players dedicated to it.<br />
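In Python, the position average rule, including the single-player case, looks like this (a sketch; the names are ours):

```python
def position_average(first_squad_rating, second_squad_rating=None):
    """Position average per the guide: the first squad player's rating
    counts double, i.e. (2 * first + second) / 3.  If only one player
    reached 300 minutes at the position, use 80% of his rating."""
    if second_squad_rating is None:
        return 0.8 * first_squad_rating
    return (2 * first_squad_rating + second_squad_rating) / 3
```

A .900 starter backed by a .600 second squad player averages (1.8 + .600) / 3 = .800, while a lone .900 player yields .720.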
<br />
Real Player Ratings vary by position because ultimately some positions are on average more important for winning the Quest than others. We don’t have exact numbers yet but here is a rough estimate of how League average ratings will vary by position:<br />
<br />
Point Guard .780<br />
Center .750<br />
Power Forward .720<br />
Small Forward .650<br />
Shooting Guard .600<br />
<br />
And again we don't have exact numbers yet, but we already know that, approximately, the ratings of the playoff teams that win the first round (eight teams) average out to about .800. The very best teams will average even higher than that. So, with the reminder that teams can and will vary radically from the position pattern shown here, here is a prototypical, "average" round two level NBA playoff team by position and by RPR:<br />
<br />
Point Guard .900<br />
Center .875<br />
Power Forward .825<br />
Small Forward .700<br />
Shooting Guard .700<br />
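One way to use these benchmark numbers: given a team's position averages from the grid, flag the positions that fall below the prototypical round two level. This is our own illustrative helper, with the caveat from the text that real teams vary radically from the pattern:

```python
# Prototypical "average" round two playoff team, by position, from above.
PLAYOFF_BENCHMARKS = {
    "Point Guard": 0.900,
    "Center": 0.875,
    "Power Forward": 0.825,
    "Small Forward": 0.700,
    "Shooting Guard": 0.700,
}

def weak_positions(team_position_averages):
    """Positions where the team's grid average trails the benchmark."""
    return [pos for pos, avg in team_position_averages.items()
            if avg < PLAYOFF_BENCHMARKS[pos]]
```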
<br />
Again for emphasis: in reality many playoff teams will have at least one position where the average RPR of the two players who play it the most is greater than .900. And many playoff teams will have at least one position where the average of the top two players at the position is substantially less than .700.<br />
<br />
But Championship teams will seldom have any position where the best two players average below .700 and they sometimes will feature two positions where the average of the top two players is greater than .900.<br />
<br />
<span style="color: #cc6600;">THE SUPERSTAR COMBO GUARD STRATEGY</span><br />
Sometimes the shooting guard is so good that he is effectively also the point guard to some extent, with a much higher rating than other shooting guards and perhaps higher than other point guards. Kobe Bryant and the Los Angeles Lakers are a very good example. The overall 2-guard League average Real Player Rating is about .575 in the regular season and .700 for the final eight teams. Kobe Bryant, of course, is well over 1.000.<br />
<br />
Having a superstar 2-guard who can take responsibility for keeping the ball moving and for being a playmaker is a very good strategy for winning the Quest partly because it eliminates the common problem of leaving the 2-guard position as a weak spot in your overall lineup. In other words, it is a very good way of optimizing your overall lineup, provided that the "real" point guard understands and can work with the strategy correctly.<br />
<br />
If the "real" point guard does not understand the strategy and/or disagrees with it, the drawback is that to the extent you play that real point guard at the same time as your combo guard, you may have a player even less useful than a straight up mediocre 2-guard, in which case the strategy has backfired. There are several wrong ways and only a very few right ways to deploy the superstar combo guard strategy. There have been, and will be, more Quest Reports on this very important subject.<br />
<br />
By looking at the squad averages row you can see what the average rating of the players in that squad is for each team. By comparing the first squad with the second squad, you can see how much of a drop off there is between them. Since most of the players in the first squad are starters, this is approximately equivalent to comparing the starters and the bench. The bigger the drop off, the more minutes the starters should be playing.<br />
<br />
<span style="color: #cc6600;">SQUAD AND OVERALL TEAM AVERAGES</span><br />
You can also of course compare the squad averages of the two teams. If you do, you will be essentially comparing the starters as a whole and the non-starters as a whole of the two teams, although keep in mind a team may have graduated one or two second squad players to starter for the playoffs.<br />
<br />
Finally, notice that there is a “Team Average” at the lower left for each team. This is two times the first squad average plus the second squad average divided by three. In other words, this is a weighted average of the top two squads, with the first squad counted twice and the second squad counted once, which roughly corresponds to typical playing time patterns. Players in the third squad, the injured players, and the benched players are not counted in the team average.<br />
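The team average computation, as a sketch (names ours):

```python
def team_average(first_squad_avg, second_squad_avg):
    """Weighted overall team average: the first squad is counted twice
    and the second squad once, roughly matching playing time.  Third
    squad, injured, and benched players are excluded before this step."""
    return (2 * first_squad_avg + second_squad_avg) / 3
```

For example, a .850 first squad with a .700 second squad gives (2 × .850 + .700) / 3 = .800.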
<br />
You can put substantial, but not complete, stock in the team average number, because the second squad will still often include a player with a very low rating. How much such players play in the playoffs depends on how strapped the team is at the position and on how dumb the coaching is.<br />
<br />
<span style="color: #cc6600;">LOW RATING PLAYERS IN THE PLAYOFFS</span><br />
Often, especially on the best coached teams and on the primary contenders, a second squad player with a relatively low rating will be strategically benched during the playoffs. In general, players with ratings below .600 should play sparingly in the playoffs or not at all. Players with ratings below .500 should generally not play in the playoffs at all for any reason.<br />
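These thresholds can be stated as a simple rule of thumb (our own phrasing of the guide's cutoffs):

```python
def playoff_usage(rpr):
    """Playoff usage guideline by Real Player Rating, per the cutoffs above."""
    if rpr < 0.500:
        return "should not play in the playoffs at all"
    if rpr < 0.600:
        return "should play sparingly, if at all"
    return "normal playoff rotation candidate"
```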
<br />
So there is a fairly large statistical error going on with the overall team average. But if you see a big difference of about .050 or more in the team averages, that tells you the higher team is clearly more talented than the lower.<br /><br />[Historical and Non-Current] User Guide for Real Team Ratings Reports Updated as of April 2010<br />IMPORTANT NOTICE: THIS IS A NON CURRENT, LEGACY USER GUIDE THAT WILL EVENTUALLY BE DELETED. <a href="http://nuggets1reference.blogspot.com/2011/01/user-guide-for-real-team-ratings-as-of.html">AN UPDATED AND CURRENT GUIDE IS LOCATED HERE</a>.<br /><br /><span style="color:#b45f06;">USER GUIDE FOR REAL TEAM RATINGS REPORTS </span><br /><span style="color:#b45f06;">UPDATED AS OF APRIL 16 2010</span><br /><br /><span style="color:#b45f06;">INTRODUCTION</span><br />Of all the popular American sports Leagues, the NBA is the one where the better team is most likely to avoid being upset in the playoffs. Therefore, the RTR system can be used to gain knowledge of which team is most likely to win playoff series. It can also be used to determine whether player performances caused an upset, and to get a general idea of how much better or worse than expected teams played in playoff series.<br /><br />The Real Team Ratings (RTR) is NOT simply a system that shows how well the teams are doing in the regular season. Instead, it is a rating system designed to reveal each team's capability of winning playoff games and series.<br /><br />The original and continuing foundation of RTR is defensive and offensive efficiency. 
The second most important factor, and a big enough factor to be considered part of the foundation, is the ability of teams to win games against the better and the best teams.<br /><br />On top of these foundational factors, there are three other factors. First, the Real Team Ratings system was substantially improved in April 2010 with the advent of a new factor that reflects recent performance (in about the last two months). This gets at several previously ignored items that will help determine who will win and lose in the playoffs; see below for details.<br /><br />The two other, smaller factors are the defense overweight adjustment and the pace overweight adjustment. History has shown that teams with great defenses and teams with slower paces have a little easier time winning Championships than the reverse. Every little bit of easier counts.<br /><br />In general, factors that sometimes impact winning are NOT included; only factors that always or at least almost always impact winning are included.<br /><br />RTR can be used to approximately predict who will win playoff series. However, there are of course factors not included in RTR simply because their impact cannot be known until the playoffs are played.<br /><br />One factor not included can sometimes be huge and can easily flip a series: injuries late in the regular season and during the playoffs. The new recent games factor partially accounts for injuries that occurred in February and March, and injuries occurring before then were already, and remain, largely covered by RTR. But injuries that occur around the 25th of March or later remain mostly unaccounted for by RTR. Among factors not included in RTR that always impact winning playoff games, recent injuries is by far the biggest one.<br /><br />The ratings are calculated for all teams, even though 14 of the 30 NBA teams do not qualify for the playoffs. 
Even though they will not be playing any playoff games, the ratings for the lower teams nevertheless give an accurate measure of how well those teams would most likely do if they were in the playoffs. So for those lottery teams, RTR is an interesting hypothetical.<br /><br /><span style="color:#cc6600;">THE NEXT IMPROVEMENT</span><br />RTR will be tweaked further in the future as necessary, although we think the new April 2010 version is getting close to “almost perfect”. There is a proposal to include a small adjustment for coaching, based on the annual Real Coach Ratings, which themselves were substantially improved in 2009. Any new coaching adjustment will be small since coaching is already reflected in all of the other factors, but a small adjustment to reflect playoff experience and playoff performance of coaches appears to be warranted and is on the drawing board.<br /><br /><span style="color:#cc6600;">HISTORY OF REAL TEAM RATINGS</span><br />Quite honestly this system has had a more rocky development path than most other systems here at Quest for the Ring. There were several major changes to the system historically. For example, in 2009, the RTR rating system was much improved from prior versions. It was improved to make absolutely certain that you can predict the outcome of the playoffs in advance as accurately as possible. All crucial factors except for home court advantage, the injury situation, and coaching in the playoffs versus the regular season were now included and weighted very carefully. 
See below for how to adjust RTR scores for the first and second of these three items.<br /><br />The biggest and most important improvement for 2009 and beyond was the introduction of points added for wins over, and points subtracted for losses to, the top sixteen teams (which would be the playoff teams themselves).<br /><br /><span style="color:#cc6600;">SUMMARY</span><br />The RTR system is a combination of net efficiency (net points per 100 possessions), the wins and losses versus playoff bound teams and especially versus the top ten teams, recent performance, the relatively small defensive overweight adjustment, and the very small pace adjustment. Each of these is now described in detail.<br /><br /><span style="color:#b45f06;">THE FIVE FACTORS USED FOR REAL TEAM RATINGS</span><br /><span style="color:#b45f06;">1. NET EFFICIENCY</span><br />Offensive efficiency minus defensive efficiency equals net efficiency. Offensive efficiency is points scored per 100 possessions. Defensive efficiency is points allowed per 100 possessions. A weight of 3.0 is applied to net efficiency in the RTR formula, which reflects how crucial it is.<br /><br /><span style="color:#b45f06;">2. WINS OVER AND LOSSES TO PLAYOFF TEAMS AND TO THE BEST TEN TEAMS</span><br />Each team's win-loss record is accessed for games it played against the top sixteen teams and, separately, for games it played against the top ten teams. These two records are added together, which has the effect of double weighting wins and losses versus top ten teams while leaving wins and losses versus the 11th through the 16th best teams single weighted.<br /><br />Next the winning percentage of the wins and losses combined as just explained is calculated to three decimal points (for example, .558). 
Next the difference between each team’s winning percentage and a base of .360 is calculated and then this difference is multiplied by 90 to achieve the desired correlation with the net efficiency factor or in other words to achieve the desired weight within RTR. As an example, for the team with a winning percentage of .558, the factor added to RTR is (.558-.360) X 90 = 17.82.<br /><br />The base of .360 is approximately the actual threshold between playoff and non-playoff teams.<br /><br />The wins and losses versus top teams factor is correlated so that the sixteen playoff teams get from it on average 90% of the RTR points they get from the net efficiency factor.<br /><br />Note that the use of the winning percentage as opposed to raw wins and losses almost completely corrects for different number of games played by teams against top teams.<br /><br />This factor, wins over and losses to playoff teams, is the key 2009 improvement over the very early versions of RTR and helped to clearly establish Real Team Ratings as the most accurate playoff predictor possible. By counting in the overall formula actual wins and losses in games between the likely playoff teams, you have gone in a straight line directly to evidence for the question we are out to answer: how good are the teams really going to be in the playoffs, according to everything known now?<br /><br /><span style="color:#b45f06;">3. MOST RECENT DEVELOPMENTS AS SHOWN BY RECENT GAMES</span><br />The Real Team Ratings system was substantially improved in April 2010 with the arrival of a new factor that reflects recent performance (in about the last two months). 
This gets at several previously largely ignored items that help determine who will win and lose in the playoffs.<br /><br />The key features and attributes of the most recent developments factor are:<br /><br />-Functionally it overweights the most recent performance, from the most recent 25 games.<br /><br />-It factors in momentum and morale going into the playoffs.<br /><br />-It factors in coaching strategies and tactics that have finally produced good (or bad) results just in time for the playoffs. In other words, it substantially though roughly reflects the likelihood that coaching strategies and tactics will work or not in the playoffs.<br /><br />-It factors in the performance of new players acquired for the stretch run of the regular season and for the playoffs.<br /><br />-It substantially but indirectly and inexactly reflects the current injury situation of teams. It especially factors in injuries that have occurred within the last couple of months or so and that may be carrying over into the playoffs. In other words, this factor is extremely useful for correcting RTR for injuries that occurred in February and March.<br /><br />-The last five games of the Regular Season are ignored due to playoff coaches resting key players and due to other distortions. So the final Real Team Ratings for a season will cover a team's 53rd game through and including its 77th game, while games 78 through 82 are ignored.<br /><br /><span style="color:#cc6600;">LIMITATIONS OF THE MOST RECENT DEVELOPMENTS FACTOR</span><br />Injuries that occurred in the last few days of March or in April are not much corrected for by this new factor. Moreover, injuries occurring during the playoffs themselves remain completely outside of the RTR system. 
Finally, when one or more players were injured and unavailable in February / March but are completely ready to go for the playoffs, the new factor may inadvertently distort the rating of the team downward.<br /><br />For these and other reasons, the "Manual Injury Adjustment" is being maintained, although it has been completely overhauled. See below for details.<br /><br /><span style="color:#b45f06;">4. DEFENSIVE OVERWEIGHT ADJUSTMENT </span><br />The teams are sorted by defensive efficiency. Then, using a range from 5.8 to -5.8, points are assigned, in equal increments of 0.4, to each team in order of how it ranks in defensive efficiency. Specifically, the team with the best defensive efficiency (fewest points allowed per 100 possessions) is given 5.8 points, the second most defensively efficient team gets 5.4 points, the third most defensively efficient team gets 5.0 points, and so on, until the least defensively efficient team gets minus 5.8 points.<br /><br />It is well known that, for the playoffs, how well a team can defend is generally somewhat more important than during the regular season. This factor answers the need to overweight defending in order to get accurate playoff projections. The adjustment gives an increase or a decrease in every team's rating in accordance with how each team ranks in defensive efficiency in the NBA.<br /><br />The amount of the adjustment is carefully calibrated to be sufficient without being excessive. Since for one thing almost all teams ramp up their defense in the playoffs to one extent or another, you have to be careful here to avoid getting carried away and putting in adjustments that are too large.<br /><br /><span style="color:#b45f06;">5. PACE OVERWEIGHT ADJUSTMENT</span><br />The teams are sorted by pace. Pace for each team is the average number of possessions per game for that team's games. 
Then, using a range from 2.9 to -2.9, points are assigned, in equal increments of .2, to each team in order of how it ranks according to pace. Specifically, the team with the slowest pace (fewest possessions per game) is given 2.9 points, the 2nd slowest pace team gets 2.7 points, the third slowest pace team gets 2.5 points, and so on, until the fastest pace team gets minus 2.9 points.<br /><br />The reason for the pace adjustment is that there is a relatively small but definite correlation between slower pace and winning playoff series. It is a little more difficult, on average, for fast pace teams to win playoff series than it is for slow pace teams to win them. Therefore, a small adjustment called the pace overweight adjustment is factored in to RTR.<br /><br />Why exactly do slower paced teams have a slightly easier job winning playoff series? Consider, for example, the Denver Nuggets, one of the fastest paced teams in the NBA during the regular season. If you just look at the efficiency measures, the Nuggets might appear to be almost identical to another, much slower team. But these two teams would be very different when you look at efficiency and pace together. In theory, slower paced teams can more reliably reproduce their nice regular season net efficiency in the playoffs than can faster paced teams, mostly because the playoffs feature a higher defensive intensity and aggressiveness, which automatically slows down the pace.<br /><br />Suppose that in the playoffs, the fast paced Nuggets and a slow paced team play. Each team had almost exactly the same offensive, defensive, and net efficiency numbers during the regular season. By playing extra hard on defense, the slow pace team can automatically slow down the game to some degree, which will disrupt the offensive (and possibly the defensive) efficiency of the Nuggets, the team that was fast pace in the regular season. 
In other words, there will be fewer possessions for the fast pace team in the playoff games than it typically had in the regular season. This in turn means that the fast pace team will be disrupted from what they did during the regular season to one extent or another.<br /><br />This means that for the fast pace team, both the offensive and the defensive efficiency could change in the playoffs from what it was in the regular season, due to all of the changes forced on the fast pace team by the change of pace. Both the offensive and the defensive efficiency might change, and each change could be either for the better or for the worse, but by far the most likely changes would be that the offense would be substantially less efficient, while the defense would not be changed much. A much less efficient offense, but about the same defense, is exactly what we have seen from the Nuggets in their numerous playoff series losses in recent years.<br /><br />In extreme cases, such as the fastest pace team being slowed down dramatically in the playoffs by an extremely slow team, the pace adjustment may be inadequate, so that there may still be some forecast error even after everything we have done.<br /><br />The bottom line is that in all known cases, faster paced teams do not do as well in the playoffs as they do in the regular season, all other things equal. 
If a fast paced team wants to win in the playoffs, it would be wise to do some things better in the playoffs than it did them in the regular season, in order to compensate for being forced to operate at a slower pace.<br /><br /><span style="color:#b45f06;">CALCULATION OF RTR: THE FORMULA</span><br />The easiest way to describe the final calculation of RTR is to give you the formula.<br /><br />RTR =<br />Net Efficiency X 3.0<br /><br />Plus<br /><br />(Winning percentage versus the top 16 and versus the top ten teams combined, minus .360) X 90<br /><br />Plus<br /><br />The difference between wins and losses in the last 25 games (with the last five games of the regular season ignored)<br /><br />Plus<br /><br />The defense overweight adjustment (from +5.8 to -5.8 according to defensive efficiency rank)<br /><br />Plus<br /><br />The pace overweight adjustment (from +2.9 to -2.9 according to pace; slower pace is better than faster pace)<br /><br /><span style="color:#b45f06;">BASE STATISTICAL ERROR</span><br />Due to a small amount of unavoidable statistical error in RTR, there has to be about a five point difference between teams before you can start to have any confidence at all that the higher team will defeat the lower in a playoff series. The base statistical error for the final, end of season RTRs is about 3 points.<br /><br />Statistical error is of course greater with less data, which means that the earlier that Real Team Ratings come out during a season, the higher the base statistical error. The first RTR Report is scheduled to come out in the last week of December. The base statistical error at that point is about eight points. 
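Pulling the five factors into one place, here is a minimal Python sketch of the RTR formula (the function and parameter names are our own; the ranks and the winning percentage are assumed to be computed exactly as described in the factor sections, with rank 1 meaning the best defense or the slowest pace out of 30 teams):

```python
def rtr(off_eff, def_eff, win_pct_vs_top, last25_wins, last25_losses,
        def_rank, pace_rank):
    """Real Team Rating assembled from the five factors.

    off_eff / def_eff: points scored / allowed per 100 possessions.
    win_pct_vs_top: combined winning percentage versus the top 16 and
        top 10 teams (top-10 games counted twice).
    last25 record: games 53 through 77 (the last five games ignored).
    def_rank, pace_rank: 1 = best defense / slowest pace, out of 30.
    """
    net_efficiency = off_eff - def_eff
    top_teams_factor = (win_pct_vs_top - 0.360) * 90
    recent_factor = last25_wins - last25_losses
    defense_adj = 5.8 - 0.4 * (def_rank - 1)   # +5.8 down to -5.8
    pace_adj = 2.9 - 0.2 * (pace_rank - 1)     # +2.9 down to -2.9
    return (net_efficiency * 3.0 + top_teams_factor
            + recent_factor + defense_adj + pace_adj)
```

A team that outscores opponents by 6.0 per 100 possessions, posts the example .558 percentage against top teams, goes 16-9 over games 53 through 77, and ranks first in both defense and slowness would rate 18 + 17.82 + 7 + 5.8 + 2.9 = 51.52.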
Aside from statistical error, of course there is the much larger fact that a lot can change between the end of December and late April that has nothing to do with statistical error.<br /><br /><span style="color:#cc6600;">========== PLAYOFF FACTORS OUTSIDE OF REAL TEAM RATINGS ==========</span><br />The RTR system is the best playoff prediction scheme that can be done during the regular season. But there are still some factors that cannot be included in RTR itself that will help determine playoff series. To get even better accuracy than base RTR, you have to know exactly what the injury situation is at the time playoff games are played. You need to know who has home court advantage. And you also would want to know how specific coaching tactics in particular playoff series will work out.<br /><br />We now show you how, when playoff time comes, to adjust RTR for two of those three factors not included in RTR: home court advantage and the injury situation.<br /><br /><span style="color:#b45f06;">TWO MAJOR FACTORS DETERMINING WHO WINS PLAYOFF SERIES NOT BUILT IN TO THE BASIC RTR</span><br /><br /><span style="color:#b45f06;">1. HOME COURT ADVANTAGE</span><br />It is usually impossible to know who will have home court advantage in all of the round one playoff series until after the entire regular season is over.<br /><br />Home court advantage is estimated to be worth between 5 and 7 points. You should generally add six points to the team that has home court advantage, although you can add as few as five or as many as seven if you know for sure that the home court advantage is much less or much more important than usual.<br /><br /><span style="color:#b45f06;">2. PLAYERS UNAVAILABLE (OR PLAYING POORLY) DUE TO INJURIES</span><br />This is the more important of the two manual adjustments to RTR that are needed to arrive at an almost perfect prediction of who will win playoff series. 
In other words, this can easily be a much bigger adjustment than the six points for home court advantage.<br /><br />With the advent of the “most recent developments” factor (aka the “last 25 games factor”), manual injury adjustments are now easier, smaller, and more statistically valid than before. As a result, manual injury adjustments are now highly recommended.<br /><br /><span style="color:#b45f06;">========== MANUAL INJURY ADJUSTMENTS ==========</span><br />Use the following instructions to adjust the RTR of teams for injuries existing in the playoffs. The best manual injury adjustments cannot be done until at least a day or two before a playoff series starts. In fact, due to the big, inherent uncertainty regarding injuries, manual injury adjustments often will need to be updated during or after game one of a series, because one or more of the players you thought would not play have played and/or one or more players you thought would play have not.<br /><br />There are many complications involving the impact of injuries on who is going to win playoff games. I'll mention a few of them. One big complication is that the injury situation changes more rapidly than any of the other factors. Another complication is that early season injuries, even if the player never comes back, are not as bad for the playoffs as late season injuries. Yet another complication is that there is very often conflicting information out there about just how bad different injuries are. 
For example, one source may say a player is probable (75-80% chance of playing) while another says the player is questionable (50% chance) while still another says doubtful (20-25% chance).<br /><br />The overall magnitude of the injury adjustment to the new as of April 2010 RTR will range from zero to 30 points for most NBA playoff teams, but it is theoretically possible for there to be as much as a 60 point downward adjustment for a totally devastated team.<br /><br />Among the most important variables regarding players who can’t play in the playoffs are:<br /><br />-How good are the injured players? The Real Player Rating system is a perfect way to find out.<br /><br />-To what extent are other players able to step up and replace the injured player or players? This depends mostly on how good the replacement(s) is or are and on how good the coaches are.<br /><br />-For how long was the player injured? For players who never played at all, no adjustment in base RTR at all is necessary. The more the player played during the regular season, the GREATER the adjustment necessary.<br /><br />Players who were injured the entire season are irrelevant, except of course they are relevant in the hypothetical sense of how the season could have been different. Players who were injured relatively early in the regular season, in November or December, are only slightly relevant, and the loss of them would be a much smaller number of reduced RTR points than when the loss is later. 
Players who were injured late in the season, from mid-February to mid-April, have the most relevancy to whether playoff series can be won or lost, and the manual injury downward adjustment to RTR for them is much higher.<br /><br /><span style="color:#b45f06;">MECHANICS OF THE INJURY ADJUSTMENT</span><br /><span style="color:#b45f06;"></span><br /><span style="color:#cc6600;">FIRST FIND OUT WHO IS INJURED AND WHO MIGHT BE INJURED</span><br />The first thing to do of course is to find out which players are injured. For best results, use the <a href="http://thequestfortheringinjuries.blogspot.com/">Quest for the Ring injury page</a> to get the latest information.<br /><br /><span style="color:#cc6600;">MANUAL INJURY ADJUSTMENT BASE</span><br />The base or starting point is the quality of the player, as shown by his Real Player Rating (including the hidden defending adjustment). The base adjustment is the player's Real Player Rating, minus .500, times 20. For example, if the injured player has an RPR of .700, the base manual injury adjustment is (.700 - .500) X 20 = .200 X 20 = 4. As another example, if the player is a superstar and has a Real Player Rating of .950, the base manual injury adjustment is (.950 - .500) X 20 = .450 X 20 = 9.<br /><br />.500 is subtracted from the ratings because ratings below .500 are virtually worthless in the playoffs.<br /><br />For each injured player we now take the base and adjust it for variables regarding the injury. There are five variables, numbered 1, 2, 3A, 3B, and 3C.<br /><br /><span style="color:#cc6600;">1. STATUS (PROBABILITY PLAYER WILL PLAY) ADJUSTMENT</span><br />This adjustment applies when it is uncertain whether the player will be able to play in the game or not. Unfortunately, uncertainty is the norm, not the exception.<br /><br />Also unfortunately, sources of injury information sometimes conflict. 
When they do, you have to use your judgment as to which source is most correct, or else you can average out the designations.<br /><br />The following tells you what to multiply the manual injury adjustment base by, according to the injury designation being reported.<br /><br />Probable (There is about an 80% chance the player will play): multiply the base by .3<br /><br />Game Time Decision (There is about a 60% chance the player will play): multiply the base by .55<br /><br />Questionable (There is about a 40% chance the player will play): multiply the base by .75<br /><br />Doubtful (There is about a 20% chance the player will play): multiply the base by .9<br /><br />Out (There is about a 0% chance the player will play): multiply the base by 1.0<br /><br />The status designations can be used not only as probabilities that players will play but as rough but valid approximations of the severity of injuries, which in turn reflects the impact on the playoff series even if the player plays. Players who play slightly injured are seldom if ever going to be as good as they were with no injury at all. Therefore, the above adjustment factors reflect not only the probabilities but also the reality that even if the player plays, the team will be harmed by the injury situation.<br /><br /><span style="color:#b45f06;">2. WHEN IN THE SEASON THE PLAYER WAS LOST</span><br />Find out when the player became injured by checking game logs, which are part of most statistical data sets for NBA players at most major sites, including ESPN.<br /><br />If the player has been unavailable on an on and off basis, assume the player was not available for the entire range of time, unless he was available for at least 75% of the games within the range, in which case use the most recent date he became unavailable.<br /><br />Use the following factors:<br />November .10<br />December .30<br />January .50<br />February .70<br />March .90<br />April 1.0<br /><br /><span style="color:#cc6600;">3. 
IMPORTANCE OF PLAYER TO THE TEAM</span><br />We actually have three separate adjustments which together show the importance of the player to the team.<br /><span style="color:#cc6600;"></span><br /><span style="color:#cc6600;">3A MINUTES PER GAME OF THE INJURED PLAYER</span><br />At ESPN or another good site, find the minutes per game for the player for the current season. Be careful not to use minutes per game from any other season. Use the following adjustment factors:<br /><br />30 mpg and more: 1.0<br />27 to 29.9: .9<br />24 to 26.9: .8<br />21 to 23.9: .7<br />18 to 20.9: .6<br />15 to 17.9: .5<br />12 to 14.9: .4<br />9 to 11.9: .3<br />6 to 8.9: .2<br />3 to 5.9: .1<br />Less than 3: 0<br /><br />This factor indirectly captures the extent to which other players can make up for the player who is not available due to injury.<br /><br /><span style="color:#cc6600;">3B OVERALL DEPTH OF THE TEAM</span><br />Go to the latest Real Player Ratings Report for the team. Such Reports are posted at The Quest for the Ring for most or all playoff teams in late March or early April. Near the very beginning of such Reports you will see all the key players listed by category. Count the number of players according to category as follows, but DO NOT COUNT any players who are not available due to injuries or for any other reason.<br /><br />Specifically, for purposes of this factor:<br /><br />-Players listed as out should not be counted<br />-Players listed as doubtful should not be counted<br />-Players listed as questionable should be counted at 1/2<br />-Players listed as game time decision should be counted<br />-Players listed as probable should be counted<br /><br />Note that in some cases you will be counting players as available even though you are calculating an injury hit on the team for them. 
This is paradoxical in the narrow sense but is part of a valid overall calculation.<br /><br />Here are the team depth count factors:<br /><br />Major Historical Superstars: Multiply the number of them by 10.<br />Historical Superstars: Multiply the number of them by 8.5.<br />Superstars: Multiply the number of them by 7.<br />Stars: Multiply the number of them by 6.<br />Very Good / Solid Starters: Multiply the number of them by 5.<br />Major Role Players / Good Enough to Start: Multiply the number of them by 4.<br /><br />Add it all up and then apply the following factors to the manual injury adjustment base:<br /><br />50 and more: 0<br />49: .1<br />48: .2<br />47: .3<br />46: .4<br />45: .5<br />44: .6<br />43: .7<br />42: .8<br />41: .9<br />40 and less: 1.0<br /><br />What this means is that if a team is so loaded with talent that its remaining available players add up to 50 or more points, then it can completely make up for the injured player. If the sum of the remaining players is 40 or less, the team most likely cannot make up for the injury at all.<br /><br />In practice you will find that this test will often spit out the 1.0 factor since, unfortunately, few teams have enough good and great players to make an injury even partially irrelevant.<br /><br /><span style="color:#cc6600;">3C POSITION SHORTAGES</span><br />This factor is unique in that it can result in an increase rather than a decrease in the base manual injury adjustment. If you don’t already know, find out which position the injured player plays. Then check the depth chart for the team at ESPN or perhaps CBS Sports or Yahoo Sports. Find out how many available players there are at the injured player’s position.<br /><br />The minimum reasonable number of players at each position for a completely healthy team is two and the maximum is four. A team impacted by one or more injuries at a position will have between zero and three players at the position following the injury. 
Use the following factors:<br /><br />3 Players Still Available at the Position: .8<br />2 Players Still Available at the Position: 1.0<br />1 Player Still Available at the Position: 1.2<br />0 Players Still Available at the Position: 1.5<br /><br /><span style="color:#b45f06;">AN EXAMPLE: THE 2010 UTAH JAZZ</span><br />OK, now let's consider an example to see how all of this manual injury adjustment stuff works.<br /><br /><span style="color:#cc6600;">EXAMPLE STEP ONE</span><br />Find out who is and who may be injured.<br /><br />This year Carlos Boozer and Andrei Kirilenko, the second and third best players on the Utah Jazz, who are playing the Denver Nuggets in the first round, are affected by injuries. Mehmet Okur may possibly be affected. According to two well-regarded sources, here was the situation the day before the playoff series began (April 16, 2010):<br /><br />-Carlos Boozer, power forward, is questionable for Saturday's game against Denver due to a strained right oblique/rib cage.<br /><br />-Andrei Kirilenko, small forward, will miss at least the first round of the playoffs due to a strained left calf.<br /><br />-Mehmet Okur, center, is probable with a strained left Achilles tendon.<br /><br />In all of the calculations that follow, we round to the nearest tenth of a point; there is very little need to be more exact than that.<br /><br /><span style="color:#cc6600;">EXAMPLE STEP TWO</span><br />Compute the base manual injury adjustments:<br /><br />Boozer: (1.005 - .500) X 20 = .505 X 20 = 10.1<br />Kirilenko: (.970 - .500) X 20 = .470 X 20 = 9.4<br />Okur: (.806 - .500) X 20 = .306 X 20 = 6.1<br /><br /><span style="color:#cc6600;">EXAMPLE STEP THREE</span><br />Adjust for the status (probability the player will play) factor:<br /><br />Boozer is “questionable” so the factor to use is .75:<br />10.1 X .75 = 7.6<br /><br />Kirilenko is “out” so the factor to use is 1.0:<br />9.4 X 1.0 = 9.4<br /><br />Okur is “probable” so the factor to use is 
.3:<br />6.1 X .3 = 1.8<br /><br />Note that although Okur is actually very likely to play, the Jazz will be at least slightly harmed by his minor injury whether or not he plays, so the small hit they will take on their Real Team Rating due to the minor injury for Okur is justified.<br /><br /><span style="color:#cc6600;">EXAMPLE STEP FOUR</span><br />Using the method described above, find out when in the season the player was lost (or mostly lost).<br /><br />The Boozer situation just developed in April; the factor for April is 1.0, so the Boozer number remains 7.6.<br /><br />The Kirilenko situation developed in March and the factor for March is .90. So for Kirilenko:<br /><br />9.4 X .9 = 8.5<br /><br />The Okur situation just developed in April and the factor for April is 1.0. So the Okur number remains 1.8.<br /><br /><span style="color:#cc6600;">EXAMPLE STEP FIVE</span><br />Adjust for the minutes per game of the player.<br /><br />Boozer’s minutes per game are 34.5 and the factor to use is 1.0, so Boozer’s number remains 7.6.<br /><br />Kirilenko’s minutes per game are 29 and the factor to use is .9:<br />8.5 X .9 = 7.7<br /><br />Okur’s minutes per game are 29.4 and the factor to use is .9:<br />1.8 X .9 = 1.6<br /><br /><span style="color:#cc6600;">EXAMPLE STEP SIX</span><br />Find the overall depth of the team, not counting injured players.<br /><br />Following the rules described above, Kirilenko is removed from the roster and we are left with:<br /><br />-Deron Williams: major historical superstar, worth 10 points<br />-Carlos Boozer: historical superstar, worth 8.5 points<br />-Kyle Korver: star, worth 6 points<br />-Paul Millsap: star, worth 6 points<br />-Mehmet Okur: very good / solid starter, worth 5 points<br />-Ronnie Price: major role player / good enough to start, worth 4 points<br /><br />Williams, Korver, Millsap, Okur, and Price are all available and they total 31 points. Boozer is questionable and he is a historical superstar, so he counts as 1/2 X 8.5 = 4.3. So the Jazz depth count is 35.3. So according to the table above, the factor to use (for all three of the Jazz players with injury situations) is 1.0, so the numbers of all three carry forward as what they were in the preceding step: Boozer: 7.6, Kirilenko: 7.7, and Okur: 1.6.<br /><br /><span style="color:#cc6600;">EXAMPLE STEP SEVEN</span><br />Check for position shortages:<br /><br />Boozer is a power forward and without him the Jazz have just one power forward, so the factor to use is 1.2:<br /><br />7.6 X 1.2 = 9.1<br /><br />Kirilenko is a small forward and without him the Jazz have just one small forward, so the factor to use is 1.2:<br /><br />7.7 X 1.2 = 9.2<br /><br />Okur is a center and without him the Jazz have two centers, so the factor to use is 1.0:<br /><br />1.6 X 1.0 = 1.6<br /><br /><span style="color:#cc6600;">EXAMPLE STEP EIGHT</span><br />Add up the manual injury adjustments for the three Jazz players:<br /><br />9.1 + 9.2 + 1.6 = 19.9<br /><br /><span style="color:#cc6600;">EXAMPLE STEP NINE</span><br />Subtract the manual injury adjustment from the Jazz’s Real Team Rating to get the RTR adjusted for injuries:<br /><br />39.6 – 19.9 = 19.7.<br /><br />So the Jazz Real Team Rating once injuries are accounted for is 19.7. Then, if you do the same thing for the Nuggets, you can compare the two and find out who is probably going to win this series and what the probability is. Then in turn you can evaluate how well the teams do in the series given the situation. You can, for example, find out how much of an upset it would be if the Jazz beat the Nuggets (assuming their injuries make them underdogs, as is apparently the case).<br /><br /><span style="color:#b45f06;">COACHING IN THE PLAYOFFS VERSUS COACHING IN THE REGULAR SEASON</span><br />As explained previously, this factor is currently not included in base RTR, nor is there a manual adjustment procedure for it. 
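The worked Jazz example above can be condensed into a short sketch. This is only an illustration of the manual procedure: the helper function and its name are mine, the factor values are the ones selected in the example steps, and rounding to the nearest tenth is applied after every step, as the guide specifies.

```python
from decimal import Decimal, ROUND_HALF_UP

TENTH = Decimal("0.1")

def round_tenth(x):
    # Round to the nearest tenth of a point, half up, as the guide does.
    return x.quantize(TENTH, rounding=ROUND_HALF_UP)

def injury_adjustment(rpr, factors):
    """Hypothetical helper: base = (RPR - .500) * 20, then apply each factor
    (status, month lost, minutes, team depth, position shortage) in order,
    rounding after every step."""
    adj = round_tenth((Decimal(rpr) - Decimal("0.500")) * 20)
    for f in factors:
        adj = round_tenth(adj * Decimal(f))
    return adj

# 2010 Utah Jazz example; factors are (status, month, minutes, depth, position)
boozer    = injury_adjustment("1.005", ["0.75", "1.0", "1.0", "1.0", "1.2"])  # questionable PF
kirilenko = injury_adjustment("0.970", ["1.0", "0.9", "0.9", "1.0", "1.2"])   # out SF
okur      = injury_adjustment("0.806", ["0.3", "1.0", "0.9", "1.0", "1.0"])   # probable C

total = boozer + kirilenko + okur        # 9.1 + 9.2 + 1.6 = 19.9
adjusted_rtr = Decimal("39.6") - total   # 39.6 - 19.9 = 19.7
```

Because each step is rounded before the next factor is applied, applying the factors in a different order can shift the final tenth; the order above is the one used in the example steps.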
But regular season coaching is, obviously, completely included in base RTR.<br /><br />Certain coaches deploy offensive and/or defensive strategies in the regular season that do not work as well in the playoffs as they do in the regular season. A team using this kind of strategy makes the playoffs but sooner or later gets bounced in the playoffs by a team using one or more of the strategies that basketball rewards the most.<br /><br />In other words, and more broadly, it is known to us here at Quest that how a team is coached, including what schemes it is using on offense and defense, can have a different impact in the playoffs than it had in the regular season. This difference would not be picked up by the RTR.<br /><br />The negative impact on RTR of coaching that works better in the regular season than in the playoffs is at this time believed to range from small to not so small, up to an absolute maximum of about 20 RTR points. But a 15-20 point hit would be plenty big enough to swing any close series. Coaches who coach well in the regular season but not in the playoffs will cost their teams playoff series they probably could have won, although this will not happen in every series. 
It will happen mostly in series where the RTR differential is between 5 and 20 points.<br /><br />This type of coaching will certainly, in the long run, be ruinous to the objective of going as far as possible in the playoffs, simply because in every playoff run any playoff team will sooner or later face teams with similar base RTR ratings.<br /><br />One of the primary objectives of the Quest for the Ring is to identify and explain offensive and defensive strategies that work better in the regular season than they do in the playoffs, and vice versa.<br /><br />Other than lacking the regular versus playoffs coaching differential, coaching is completely reflected in the RTR base system.<br /><br />Unfortunately, we don’t yet have any scheme, manual or otherwise, for coaching that is better or worse in the playoffs versus the regular season. However, we are working on it, and there is a proposal to add a factor for this in RTR itself; if that is approved, no manual adjustment will be necessary.<br /><br /><span style="color:#b45f06;">========== INTERPRETING RTR DIFFERENCES BETWEEN TEAMS / PREDICTING PLAYOFF SERIES ==========</span><br />Scoping out playoff series is straightforward. You start with Real Team Ratings (RTR) as reported here at Quest; the first thing you do is add six points to the ratings of the teams with home court advantage. Then, if you have the time and want to be more accurate, you do the manual injury adjustments as needed. After you have adjusted the RTRs for home court and for injuries, you compare them for the two teams playing and find out what the difference is. 
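As a hypothetical sketch, the comparison procedure just described (add six points for home court, subtract any manual injury adjustments, then take the difference) can be written out as follows; the team numbers are invented, and the probability bands are the rough historical odds from the Quest quick prediction scale:

```python
# Upper bound of each difference band -> rough chance the higher team wins.
# The 0-5.9 band is a toss-up, encoded here as 50.
SCALE = [(6, 50), (12, 60), (18, 70), (24, 80), (30, 89),
         (36, 95), (42, 98), (48, 99)]

def adjusted_rtr(base_rtr, home_court, injury_adjustment=0.0):
    """Add six points for home court, subtract manual injury adjustments."""
    return base_rtr + (6.0 if home_court else 0.0) - injury_adjustment

def higher_team_win_chance(rtr_a, rtr_b):
    """Look up the rough win probability for the higher-rated team."""
    diff = abs(rtr_a - rtr_b)
    for upper, pct in SCALE:
        if diff < upper:
            return pct
    return 100  # 48 or more: roughly certain

# Invented matchup: home team has base RTR 39.6 with a 19.9-point injury
# hit; road team has base RTR 30.0 and is healthy.
home = adjusted_rtr(39.6, home_court=True, injury_adjustment=19.9)  # 25.7
road = adjusted_rtr(30.0, home_court=False)                         # 30.0
print(higher_team_win_chance(home, road))  # difference 4.3: toss-up, 50
```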
Finally, you can use the "quick prediction scale" and/or the descriptions in the "detailed guide" that you will see below the quick prediction scale.<br /><br /><span style="color:#cc6600;">QUICK PREDICTION SCALE FOR PLAYOFF SERIES</span><br />0 to 5.9 Complete toss-up: flip a coin<br />6 to 11.9 Roughly 60% chance the higher team will win<br />12 to 17.9 Roughly 70% chance the higher team will win<br />18 to 23.9 Roughly 80% chance the higher team will win<br />24 to 29.9 Roughly 89% chance the higher team will win<br />30 to 35.9 Roughly 95% chance the higher team will win<br />36 to 41.9 Roughly 98% chance the higher team will win<br />42 to 47.9 Roughly 99% chance the higher team will win<br />48 or more Roughly 100% chance the higher team will win<br /><br /><span style="color:#b45f06;">DETAILED GUIDE TO INTERPRETATION OF DIFFERENCES IN REAL TEAM RATINGS</span><br />In the detailed interpretation guide that follows, the word "roughly" is repeatedly used in front of the probability numbers, as a reminder of the small amount of unavoidable statistical error and to emphasize that unknown factors, including injuries, especially injuries for which no manual adjustment has been made, will in some cases result in substantially different actual probabilities.<br /><br />Whether or not you are doing manual injury adjustments, do not forget to add six points to the RTRs of the teams that have home court advantage. Injury adjustments are highly recommended unless neither team has significant injuries.<br /><br />The probability percentages in both the quick chart above and the descriptions below are based on historical results in the NBA.<br /><br /><span style="color:#b45f06;">DIFFERENCE IN RATINGS IS BETWEEN 0 AND 5.9</span><br />The series is a complete toss-up when statistical error is considered. There is a strong possibility of a 7-game series. The higher team has a 50% to 55% chance of winning, depending on what exactly the difference is. 
These probabilities are too low for anyone to have any confidence in using RTR to say who will win. All series of this type are decided quite simply by who plays better, by who coaches better, or both.<br /><br /><span style="color:#b45f06;">DIFFERENCE IN RATINGS IS BETWEEN 6.0 AND 11.9</span><br />The series can easily go either way, although the higher team has a small edge, with between a 55% and a 65% chance of winning, depending on where in the range the difference is. There is a very substantial chance of a 7-game series. If the lower team wins, it is a small upset. Either slight differences in the quality of coaching, certain players playing a little better or a little worse than they did in the regular season, or both, could be responsible for an upset at this level.<br /><br /><span style="color:#b45f06;">DIFFERENCE IN RATINGS IS BETWEEN 12.0 AND 17.9</span><br />The series can go either way, and this type of difference gives a significant chance of a 7-game series. But the higher team has a clear edge, with between a 65% and a 75% probability of winning, depending on where in the range the difference is. If the lower team wins, it is a moderate upset. Either slight differences in the quality of coaching, certain players playing a little better or a little worse than they did in the regular season, or both, could be responsible for an upset at this level.<br /><br /><span style="color:#b45f06;">DIFFERENCE IN RATINGS IS BETWEEN 18.0 AND 23.9</span><br />The higher team has roughly between a 75% and an 85% probability of winning, depending on where in the range the difference is. There is a chance, but only a small one, of a 7-game series. If the lower team wins, it is a fairly big upset. 
Either coaches, certain players, or both could be responsible for an upset at this level.<br /><br /><span style="color:#b45f06;">DIFFERENCE IN RATINGS IS BETWEEN 24.0 AND 29.9</span><br />The higher team has roughly between an 85% and a 93% probability of winning, depending on where in the range the difference is. In this kind of series, often the only way the lower team can win the series is by extending the series out to 7 games and then somehow winning the 7th game, thus taking the series 4 games to 3. However, it is not uncommon, assuming there is an upset in this type of series, for the lower team to so severely disrupt the favored team that the lower team upsets the higher, favored team 4 games to 2. Whichever way it does it, if the lower team wins despite trailing by this amount, it should be considered a major upset. In many such cases, the coaching would have to be very wrong and/or negligent.<br /><br /><span style="color:#b45f06;">DIFFERENCE IN RATINGS IS BETWEEN 30.0 AND 35.9</span><br />The higher team has roughly between a 93% and a 97% probability of winning, depending on where in the range the difference is. In this kind of series, often the only way the lower team can win the series is by taking the series to 7 games and winning the 7th game, thus taking the series 4 games to 3. However, there have been a tiny number of series where a team with this much of an RTR deficit has won the series by so severely disrupting the favored team that it is able to win the series 4 games to 2. In the vast majority of such cases, the coaching for the higher team was severely wrong and/or negligent. 
Whether accomplished in 6 games or 7, the lower team winning despite being this far behind in RTR is extremely rare, and would be considered a very major and very surprising upset.<br /><br /><span style="color:#b45f06;">DIFFERENCE IN RATINGS IS BETWEEN 36.0 AND 41.9</span><br />The higher team has roughly between a 97% and a 99% probability of winning, depending on where in the range the difference is. Obviously, an upset would be extremely rare, shocking, and historical. It would in most cases be caused substantially by incompetent and/or severely negligent coaching or by one or more major injuries. With this amount of difference, any upset would almost certainly have to be with the series going all seven games.<br /><br /><span style="color:#b45f06;">DIFFERENCE IN RATINGS IS BETWEEN 42.0 AND 47.9</span><br />The higher team has a roughly 99% probability of winning the series. Obviously, an upset would be extremely rare, shocking, and historical. It would in most cases be caused substantially by incompetent and/or severely negligent coaching or by one or more major injuries. With this amount of difference, any upset would almost certainly have to be with the series going all seven games.<br /><br /><span style="color:#b45f06;">DIFFERENCE IN RATINGS IS 48.0 OR MORE</span><br />It is close to a 100% certainty that the higher team will win the series. Obviously, an upset would be extremely rare, shocking, and historical. It would in the vast majority of cases be caused substantially by incompetent and/or severely negligent coaching. 
With this amount of difference, any upset would almost certainly have to be with the series going all seven games.<br /><br />[Historical and Non-Current] User Guide for the Real Player Ratings Interactive Tool, May 2009<br /><span style="FONT-STYLE: italic">Notes: This Guide to the <a href="http://www.thequestfortheringtoolbox.blogspot.com/">Quest for the Ring Toolbox</a> has been replaced. <a href="http://nuggets1reference.blogspot.com/2010/09/user-guide-for-real-player-rating.html">The User Guide for the Toolbox is now located here</a>.</span><br /><br />User Guide for Real Game Ratings of Ultimate Game Breakdowns, May 2009<br />Real Game Ratings will be a set of team performance measures in games that, quite simply, allow the user to operate at a higher level of knowledge and appreciation of basketball than those who are limited to traditional box scores and statistics.<br /><br />Some of these ratings have been developed by statistical gurus over the last 20 years or so. Some have been developed by Quest and have never been seen before. 
Although these measures are not rocket science, Quest is indebted to "those who have gone before" in developing sophisticated ways of looking at basketball games and players.<br /><br />For Quest, and hopefully for most of the statistical experts who have blazed the trail, the objective is to reveal how basketball games are won.<br /><br /><span style="color: rgb(153, 51, 0);">ADVANCED MEASURES FOR TEAMS IN GAMES</span><br /><span style="color: rgb(204, 102, 0);">POSSESSIONS</span><br />The number of possessions is the foundation needed for several extremely important performance measures. Several statistical gurus have developed formulas for calculating the number of possessions a team had using box score numbers. The results of these formulas are extremely similar. Quest uses the following formula. Though relatively simple, it yields almost exactly the same number of possessions as do more complicated formulas.<br /><br />Possessions = Field Goals Attempted + Turnovers + (.44 * Free Throws Attempted) - Offensive Rebounds<br /><br /><span style="color: rgb(204, 102, 0);">EFFICIENCY</span><br />Efficiency is the single most important "advanced" performance measure. Anytime you are in a hurry, you can simply look at efficiency to evaluate how well a team played on either offense or defense. The formula is:<br /><br />Efficiency = Points / Possessions<br /><br />Quest has already been reporting team offensive and defensive efficiency separately and as part of the Real Team Ratings. We will now be including this crucial measure in Ultimate Game Breakdowns, which, as explained in the 2009 Site News Update in the User Guide, will be mostly for playoff games in the future. 
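The two formulas above translate directly into code; this is a minimal sketch with function names of my own choosing and an invented box score line:

```python
def possessions(fga, turnovers, fta, offensive_rebounds):
    # Possessions = FGA + TO + (.44 * FTA) - ORB
    return fga + turnovers + 0.44 * fta - offensive_rebounds

def efficiency(points, possessions):
    # Efficiency = Points / Possessions
    return points / possessions

# Invented box score line: 80 FGA, 14 TO, 25 FTA, 12 ORB, 102 points
poss = possessions(80, 14, 25, 12)      # 80 + 14 + 11.0 - 12 = 93.0
print(round(efficiency(102, poss), 3))  # 102 / 93.0, about 1.097
```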
In other words, Quest will become virtually the only source on the Internet for team offensive and team defensive efficiency in NBA playoff games.<br /><br /><span style="color: rgb(204, 102, 0);">OFFENSIVE REBOUND PERCENTAGE</span><br />Most everyone knows that offensive rebounding is very important toward winning games, especially close games. On the other hand, offensive rebounding is less important for the task of looking at a basketball offense in isolation and evaluating how good it is, and how good the guards are in that offense.<br /><br />Quest will report this in the Ultimate Game Breakdowns for NBA playoff games. We will be virtually the only known source for this information. The formula is:<br /><br />Offensive Rebound Percentage = Offensive Rebounds / (Offensive Rebounds + Opponent's Defensive Rebounds)<br /><br />As you can see, this tells you what share of all the available rebounds was snagged by the offensive squad.<br /><br /><span style="color: rgb(204, 102, 0);">TURNOVER PERCENTAGE</span><br />Turnovers are very, very important in determining which team wins the game, especially in close games. Turnovers are interwoven into the Quest-only offensive quality and power measures.<br /><br />Quest will report this in the Ultimate Game Breakdowns for NBA playoff games. We will be virtually the only known source for this information. The formula is:<br /><br />Turnover Percentage = Turnovers / Possessions<br /><br /><span style="color: rgb(204, 102, 0);">GETTING TO THE LINE</span><br />When a team is playing a good defending team, a rough defending team such as the 2009 Denver Nuggets, and/or a team with very tall centers and power forwards, there is a tendency to settle for more outside jump shots than is wise. 
Basketball players are human, and given the choice between scoring without taking abuse in the paint and scoring with abuse, they will choose the former.<br /><br />While it is not true that you can win games simply by excessively overweighting drives into the paint in hopes of dunks, layups, and fouls, it is true that you have to maintain some kind of balance between doing so and shooting from outside the paint. The main reason the balance is important is that it is much more difficult to defend a team that mixes drives into the paint well with outside shooting.<br /><br />One complication involved in determining how much a team should take it to the rim is how closely the referees are calling a game. If the referees are calling the game loosely, if in other words the refs are "letting them play," the defenders have an unusual advantage in the paint, and the offense will be penalized if it drives into the paint too much. If the referees are calling a game tightly, then the offense in many cases will have the advantage in the paint, so obviously the coach should have the offense drive into the paint much more in that case. Keep in mind, though, that the referees may change how tightly they are calling the game as the game goes along.<br /><br />Aside from the factor discussed in the previous paragraph, the other factors that determine exactly what the balance should be between drives into the paint and outside shots are relatively complicated, and are beyond the scope of this User Guide. But this very, very important subject will be addressed in future Quest reports.<br /><br />Quest will report the extent to which each team "got to the line" in Ultimate Game Breakdowns for NBA playoff games. We will be virtually the only known source for this information. 
The measure will be called Getting to the Line:<br /><br />Getting to the Line = Free Throws Attempted / Field Goals Attempted<br /><br />As you can see, this is the ratio of free throws attempted to field goals attempted.<br /><br /><span style="color: rgb(204, 102, 0);">EFFECTIVE FIELD GOAL PERCENTAGE</span><br />This is simply a juiced up version of shooting accuracy. Basic shooting accuracy, as reported in box scores as field goals made / field goals attempted, is not a very good measure, because two-point and three-point scores are combined together as if they are the same thing. Effective field goal percentage adjusts basic shooting percentage so that it reflects the extra value of 3-point scores. So this is where the crucial 3-point shooters are given credit for their contributions toward winning the game.<br /><br />Obviously, this is one of the most important measures for deciding who wins basketball games, and at the player level, for determining who the most valuable offensive players really are. Defensive Effective Field Goal Percentage is just as important for evaluating team defense as the offensive version is for evaluating offense.<br /><br />Effective field goal percentage is a crucial part of efficiency which, as explained above, is the most crucial measure of all for determining who is going to win the basketball game.<br /><br />Quest will be reporting the Effective Field Goal Percentage for teams in Ultimate Game Breakdowns for NBA playoff games. We will be virtually the only known source for this information. The formula is:<br /><br />Effective Field Goal Percentage = (Field Goals Made + (0.5 * 3-Point Field Goals Made)) / Field Goals Attempted<br /><br /><span style="color: rgb(204, 102, 0);">ASSIST / TURNOVER RATIO</span><br />This is the number of assists divided by the number of turnovers. Point guards have surprisingly different turnover rates. 
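The four box score measures defined so far in this section, offensive rebound percentage, turnover percentage, getting to the line, and effective field goal percentage, can be sketched together as one-liners; the function names and the sample box score line are mine, not Quest's:

```python
def offensive_rebound_pct(orb, opp_drb):
    # ORB / (ORB + opponent DRB): share of available boards the offense got
    return orb / (orb + opp_drb)

def turnover_pct(turnovers, possessions):
    # Turnovers / Possessions
    return turnovers / possessions

def getting_to_the_line(fta, fga):
    # Free Throws Attempted / Field Goals Attempted
    return fta / fga

def effective_fg_pct(fgm, threes_made, fga):
    # (FGM + 0.5 * 3PM) / FGA: credits the extra point on made threes
    return (fgm + 0.5 * threes_made) / fga

# Invented team line: 12 ORB against 30 opponent DRB, 14 TO on 93
# possessions, 25 FTA on 80 FGA, and 38 FGM including 8 threes
print(round(offensive_rebound_pct(12, 30), 3))  # 0.286
print(round(turnover_pct(14, 93), 3))           # 0.151
print(round(getting_to_the_line(25, 80), 3))    # 0.312
print(round(effective_fg_pct(38, 8, 80), 3))    # 0.525
```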
The ones with the lowest turnover rates are obviously the best for efficiency per se, but for overall effectiveness, you need to look at this ratio. A high turnover point guard can nevertheless be a very good point guard if he makes a truly large number of assists. In general, the more turnovers a point guard suffers, the more assists he needs to make up for them.<br /><br />EVALUATION SCALE FOR ASSIST / TURNOVER RATIO<br />4.00 and More: Ultra Careful Point Guard, arguably too careful<br />3.50 to 3.99: Extremely Careful Point Guard, possibly too careful<br />3.00 to 3.49: Very Careful Point Guard<br />2.60 to 2.99: Careful Point Guard<br />2.20 to 2.59: Medium Point Guard<br />1.90 to 2.19: Slightly Careless Point Guard<br />1.60 to 1.89: Careless Point Guard<br />1.40 to 1.59: Very Careless Point Guard<br />1.20 to 1.39: Extremely Careless Point Guard<br />1.19 and Less: Ultra Careless Point Guard<br /><br />Unfortunately, it seems that the assist / turnover ratio by itself is not extremely useful either for evaluating point guards or, at the team level, for determining how good an offense really is. The problem seems to be that some point guards "need" more turnovers to produce a lot of assists than do others. Some not very careful point guards can more than make up for turnovers by making assists that are more impressive and important than the assists made by careful point guards.<br /><br />On the other hand, very careless and worse point guards are not going to be able to fully make up for all their turnovers no matter what they do. Assist / turnover ratios below 1.60 signal point guards who are simply making too many turnovers to have any chance of being truly effective playmakers. 
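The evaluation scale above can be sketched as a simple band lookup; the function name is mine, and the labels are shortened from the scale:

```python
def assist_turnover_label(assists, turnovers):
    """Classify a point guard by assist/turnover ratio per the Quest scale."""
    ratio = assists / turnovers
    bands = [(4.00, "Ultra Careful"), (3.50, "Extremely Careful"),
             (3.00, "Very Careful"), (2.60, "Careful"), (2.20, "Medium"),
             (1.90, "Slightly Careless"), (1.60, "Careless"),
             (1.40, "Very Careless"), (1.20, "Extremely Careless")]
    for floor, label in bands:
        if ratio >= floor:
            return label
    return "Ultra Careless"  # 1.19 and less

print(assist_turnover_label(10, 3))  # ratio 3.33: "Very Careful"
print(assist_turnover_label(6, 4))   # ratio 1.5:  "Very Careless"
```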
Keep in mind, though, that young point guards will often have lower or much lower ratios than they will have later on.<br /><br />So although the ratio is not greatly important by itself, when used in conjunction with other offensive indicators, as Quest does, the assist/turnover ratio becomes much more useful.<br /><br />Quest will be reporting the Assists/Turnovers ratio for NBA playoff games and for a limited number of regular season games. This will be one of the only sources for this, although of course it is easy to make a rough calculation in your head simply by looking at a box score.<br /><br /><span style="color: rgb(153, 51, 0);">QUEST FOR THE RING ORIGINAL SYSTEM FOR RATING THE QUALITY AND POWER OF BASKETBALL OFFENSES</span><br />Quest as of June 2009 is officially introducing high level performance measures found nowhere else on the Internet. Most of these are focused on the offense. But obviously, if you look at how an opponent did in these things, you can evaluate a defense using them. Very intelligent basketball fans, offensive basketball coaches, shooting guards, and especially point guards will be able to make the most use of these new measures.<br /><br /><span style="color: rgb(204, 102, 0);">PLAYMAKING IDENTITY</span><br />Quest discussed, in many reports during the first 18 months of the site, a concept called "playmaking identity". This is basically the extent to which a team's offense is organized for maximum effectiveness. The more a team's offense is directed by the guards in general and especially by the point guards, the more effective it will be. Here are some of the reasons for that:<br /><br />1. Point Guards bring up the ball. For that and for traditional reasons, point guards are supposed to be able to direct, or in other words to organize, the offense to some extent. 
In theory, the more organized the offense, the more effective it will be, mainly because the more organized it is, the more the plays are repetitive, and the more repetitive and practiced the plays, the easier it is to score.<br /><br />2. Guards in general and especially point guards are directly responsible for running specific plays called by coaches, both plays in general for every game and specific plays called in specific situations, especially off time outs and in critical late game situations.<br /><br />3. Point Guards are supposed to be able to read defenses and to be able to evaluate how well defenders are playing in a particular game. They are supposed to be able to use this knowledge to adjust their offense so as to avoid the good defending and attack the bad defending.<br /><br />Quest is now formalizing the concept. The definition of playmaking identity will be:<br /><br />Playmaking Identity = ((2 * Point Guard Assists) + Shooting Guard Assists) / Total Assists<br /><br />As you can see, this is an adjusted version of the percentage of assists by guards. It's adjusted because the point guard assists are double weighted while the shooting guard assists are single weighted. In terms of ultimately rating how good the offense is, point guard assists are the most important, shooting guard assists are of medium importance, and assists by forwards and centers are less important.<br /><br />Assists by forwards and centers are left out of playmaking identity (this is part of the main point of the new measure) because such assists, while better than no assists at all, are not very reflective of a quality, organized, and efficient offense.<br /><br />On the other hand, total assists and the assist/turnover ratio, which do include assists by forwards and centers, are very important, as you will see shortly.<br /><br />Quest for the Ring will be reporting Playmaking Identity for most NBA playoff games and for carefully selected regular season games. 
This measure has been created here and will definitely be available only here.<br /><br /><span style="color: rgb(204, 102, 0);">PLAYMAKING QUALITY</span><br />Playmaking Quality is an extremely important measure developed by Quest. Not only has this particular measure never been seen before, there has never been a measure that gets at how "good" an offense really is as well as this one does.<br /><br />The idea, like many of the most useful ideas, is actually relatively simple. The theory is that the two most important things in a basketball offense are how well organized it is, as reflected by Playmaking Identity, and how well it scores, as measured by Effective Field Goal Percentage. So the formula is:<br /><br />Playmaking Quality = Playmaking Identity * Effective Field Goal Percentage<br /><br />A way to look at this is that it is effective or real shooting adjusted by the extent to which the shooting was organized. In theory, the more organized the shooting, the more inevitable it was in the game (and the less it was by chance). So this is an indicator, available from every game, of how good the team's offense really is.<br /><br />The higher the Playmaking Quality as measured by more and more games, the more wins from offense you can expect for that team over the course of a season. Also, the higher the Playmaking Quality, the lesser the chance that even very good defending opposing teams can win with defense alone.<br /><br />Quest believes that Playmaking Quality may prove to have one of the highest correlations with winning playoff games and Championships of all measures in existence. Why? For one thing, and to reemphasize, Playmaking Quality measures the extent to which an offense is invulnerable to losing to a quality defense.<br /><br />Quest for the Ring will be reporting Playmaking Quality for most NBA playoff games and for carefully selected regular season games. 
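The Playmaking Quality formula can be sketched as follows (a hypothetical Python illustration; the function name and sample numbers are mine):

```python
def playmaking_quality(identity, effective_fg_pct):
    """Organization of the offense (Playmaking Identity) scaled by how
    well it actually scored (Effective Field Goal Percentage)."""
    return identity * effective_fg_pct

# Hypothetical game: a Playmaking Identity of 1.10 and a .520 eFG%.
print(round(playmaking_quality(1.10, 0.520), 3))  # 0.572
```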
This measure has been created here and will definitely be available only here.<br /><br /><span style="color: rgb(204, 102, 0);">PLAYMAKING POWER</span><br />While Playmaking Quality alone may be enough to ultimately explain why NBA playoff games are won and lost, Quest is introducing another measure that may be slightly more important still: Playmaking Power. This is Playmaking Quality multiplied by the team Assists / Turnovers ratio. The formula is:<br /><br />Playmaking Power = Playmaking Quality * (Assists / Turnovers)<br /><br />Think of this as the ultimate summary measure of the quality of the offense of a team, with everything including the kitchen sink thrown in. In general, we are taking the best offensive quality measure possible (Playmaking Quality) and multiplying it by the effective quantity of that offense, as shown by assists / turnovers. Gross quantity of the offense in this framework would be assists. Net or effective quantity would be assists / turnovers, since the more turnovers there are, the less valuable the assists actually are.<br /><br />Quest for the Ring will be reporting Playmaking Power for most NBA playoff games and for carefully selected regular season games. This measure has been created here and will definitely be available only here.<br /><br /><span style="color: rgb(204, 102, 0);">PRODUCTION OUTLOOK</span><br />Ultimate Game Breakdowns for NBA playoff games and for a small number of regular season games will from now on consist of Real Player Ratings and of Real Game Ratings, the latter as explained in this User Guide, and the former explained in a separate User Guide.<br /><br />Unfortunately, we do not have the resources at this time to produce all of this for a substantial number of regular season games, let alone for all regular season games. We will at the least produce this for all NBA Championship games, for all Conference Finals games, and for all Conference semifinals games. 
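The Playmaking Power formula can be sketched as follows (a hypothetical Python illustration; the function name and sample numbers are mine):

```python
def playmaking_power(quality, assists, turnovers):
    """Playmaking Quality scaled by the effective quantity of the
    offense, the team assist/turnover ratio."""
    return quality * (assists / turnovers)

# Hypothetical game: Playmaking Quality of 0.572, 24 assists, 12 turnovers.
print(round(playmaking_power(0.572, 24, 12), 3))  # 1.144
```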
To the extent possible, we will produce this for Conference quarterfinals, also known as the first round of the NBA playoffs.<br /><br />Also, due to current limitations, Ultimate Game Breakdowns for NBA playoff games, as detailed, will not be available for days, weeks, or possibly even months following those games. We will, however, be able to make sure that all the Breakdowns for a given year's playoff games are completed at the latest by the end of the year in which those games were played. And we will do everything possible to get the Ultimate Game Breakdowns for the Championship out quickly.<br /><br />If someday we can find qualified individuals to join the Quest Performance Measure Division (so to speak) then we will be able to do more Breakdowns and we will be able to get the Breakdowns done more quickly.<br /><br /><a href="http://www.nuggets1comments.blogspot.com/" target="_blank">You Can Post Your Response to Anything on Quest Here</a><br /><br />[Historical, and Non-Current] User Guide for Real Player Rating Reports for the NBA, for NBA Teams, and for Games, February 2009<br /><br />NOTICE: THIS USER GUIDE IS FOR HISTORICAL USE ONLY. FOR THE LATEST GUIDE, LOOK FOR THE MOST RECENT VERSION IN THE REFERENCE INDEX. This version applies for ratings produced from late February until the end of May 2009.<br /><br />REAL PLAYER RATINGS BY TEAM USER GUIDE<br />Updated Feb. 25, 2009<br /><br />INTRODUCTION TO THE CONCEPT OF REAL PLAYER RATINGS<br />The Real Player Rating (RPR) is a very carefully constructed all inclusive performance measure. Everything of value that a basketball player can do is recorded by official NBA scorekeepers who sit right along the edge of the court, at mid-court, and who are trained to observe and record everything that happens in a game.<br /><br />Since these days all of these counts are immediately input into continually updated public databases online, such as at ESPN, it is theoretically possible to combine everything together into an overall performance measure for each player. 
This is what the RPR does.<br /><br />Real Player Rating or RPR is everything tracked by scorekeepers that a player does, good and bad, added and subtracted (with negative things such as turnovers and missed shots being subtracted). Very carefully calibrated factors, or weights, are applied to the different elements. The calibration, as you would expect, is done to reflect the different value toward winning games that different actions on the court have. All of the good and bad combined together is divided by minutes, so we can tell the rate, which we need to determine the overall quality or value of the player.<br /><br />REAL PLAYER RATINGS ARE ADJUSTED FOR DEFENDING NOT TRACKED BY SCOREKEEPERS STARTING IN 2009<br />Not counting purely subjective and abstract factors such as leadership, and not counting a few infrequently occurring actions (not yet counted or tracked by anyone) such as chasing down loose balls, the only thing of any value that a basketball player can regularly do on the court that is not counted by NBA scorekeepers is to prevent the opposing team from scoring by defending against the shot or shots during a possession well enough to turn what would have been a score into a miss. In other words, what the player does to make the possessions of the opposing team worthless, other than what is already counted, which would be rebounds, steals, blocks, and personal fouls. These untracked or hidden actions would include effective man to man defending, effective rotation on defense off screens and picks, defensive recognition, and quickness of defensive reaction. These things would be counted by scorekeepers if it were possible. But, for example, there is no way to know exactly how many shots a good (or any kind of) defender has changed from being a score to a miss.<br /><br />Quest for the Ring has developed a statistically valid way to accurately estimate the untracked or hidden aspects of defending. 
This is described in complete detail in the latter sections of this Guide.<br /><br />SIMPLICITY, RELIABILITY, TRANSPARENCY, AND FOCUS ONLY ON "WINNING POWER"<br />Like everything statistical we do at Quest, we have kept this process as simple and reliable as possible, while at the same time spending as much time as necessary on design, quality control and performance evaluation. Unlike some other practitioners, we avoid what you might call layered complexity, which leads to formulas which cannot be understood without studying them and which high traffic sites will not show on any of their web pages for fear that the public will rebel against the statistic. At Quest, we think that our rating systems can be understood and evaluated by most high school graduates, and we keep everything out in the open through User Guides such as this one.<br /><br />Basketball statistical gurus frequently forget that no matter how intricate their formulas are, they are very heavily manipulating process items such as assists and rebounds while spending very little time on how these things fit together to produce wins and losses. We think that they are making the mistake, whether or not they are aware of it, of injecting value adjustments regarding how they think the game should be played and value adjustments about which playing styles are better than others.<br /><br />Whereas, the primary objective of the relative simplicity (small number of formulas, to be precise) of the Quest RPR is to avoid all value judgments about how the game should be played and how players should play. We don't care about the styles, only about the results. 
The RPR is concerned first and foremost with the impact each player has on the potential for winning games.<br /><br />Quest thinks it makes more sense to minimize the manipulation of process items, and to focus much more on coming up with the best possible estimation of how the process items impact points for and points against in games, which in turn of course determines wins. Whereas other "advanced statistics" might give you more depth and flavor regarding how a particular player plays, the Quest RPR is a way for the reader to, in a very quick and easy way, determine what the overall value of the player is with respect to producing wins or losses.<br /><br />In other words, the foundation of RPR is and will always be measurement of a player's power to help win basketball games, whereas the foundation of other, more complicated statistics may include preferences about how the game should be played and about the style of players, with winning power measured less accurately as a result of those focuses.<br /><br />THE MAIN REASON REAL PLAYER RATING IS SO VALUABLE<br />Because it is per time, RPR is the best possible measure of the net quality of a basketball player, or simply "how good" the player is (on average) for each minute of playing time.<br /><br />REAL PLAYER RATING REPORTS CAN BE FOR THE WHOLE NBA, FOR A TEAM, OR FOR A GAME<br />With a Real Player Ratings Report for the entire NBA, you can see very rapidly who the best players in the NBA have been during the course of the season.<br /><br />With a Real Player Ratings Report for a Team for the Regular Season, you can see very rapidly who the best players on the team have been during the course of the season. You can use this information to investigate the possibility that the coach is not perfect. Well, we know that no coach is perfect. So really, with the benefit of 20/20 hindsight, we can investigate and determine what mistakes the coach has apparently made with regard to rotations and playing times. 
Furthermore, by using the Ratings, basketball knowledge, a little creativity, and logical deduction, we can also investigate and perhaps determine whether the coach has made incorrect decisions regarding which strategies and plays are best for his team's offense and defense.<br /><br />Real Player Ratings for games are the most important component of Reports called Ultimate Game Breakdown: Players Reports.<br /><br />CAUTIONS<br />To be completely honest and clear, although it is the best possible overall real life measure, RPR is still not a perfect or absolute, "final word" measure on any player. In general, you must remember that all performance measures for the NBA, including this one, are relative rather than absolute measures. The ratings are relative to the team context. Players do not exist in a vacuum, especially in basketball.<br /><br />Several specific cautions will now be described.<br /><br />Because basketball is a team game, and more so than most other sports, players who are on really good teams might have their own performances "crowded out" to some extent by even better players. So, paradoxically, good players on good teams will generally have slightly lower ratings than they would have if they were on a bad team. Conversely, great players on bad teams will have slightly higher ratings than they would have if they were on a good team.<br /><br />Players need not only playing time but possession of the ball in order to produce many of the things that count in the rating. 
So if, for whatever reason, a player does not get the ball as often as he would on a different team, or with a different coach, or with whatever other circumstances you can dream of, then his RPR will be lower than what it could or would be.<br /><br />If a good player is on a good team where there are players even better than he is, then his RPR will likely be lower than it would be if he were on a not as good team.<br /><br />If a good player plays a certain position for which his team has an even better player, then it's probable that the better player will crowd out the lesser player to one extent or another, so that the lesser player's RPR will be lower than what it would be if he were the best player at the position on the team.<br /><br />The ratings are only for the current season. It has recently been discovered that many players' ratings often change up or down by more than 10% from year to year, and by much greater amounts over many years.<br /><br />Those who think defense in basketball is much more important than offense may consider the magnitude of the defensive adjustment to be inadequate. They will contend that defensive specialists who are poor offensive players should have a higher rating. While we realized that we needed to adjust the ratings for defending not tracked by NBA scorekeepers, we continue to believe that players who are great defensive specialists but poor or undeveloped offensive players should in most cases rank no higher than the major role player level.<br /><br />Do not forget that RPR is a per time measure. RPP, and not RPR, measures the total impact of a player. RPR measures how valuable a player has been toward winning basketball games, per unit of time.<br /><br />The classification scheme, like the ratings, is relative. A role player on a bad team might be a solid starter on a very good team. A star on a bad team might be just a major role player on a really good team. And so on and so forth. 
A player is a star, a role player, or whatever only in the contexts of the particular season and the particular team involved. If he was on a different team, or if it was a different year, his classification might be different.<br /><br />So in conclusion, don't think of RPR as the ultimate gospel or bible on how good players are. But do think of it as an extremely accurate and reliable summary of how good the players actually have been in real life in the specific time (season or playoffs) and place (team) involved.<br /><br />A NOTE ABOUT REAL PLAYER RATINGS FOR INDIVIDUAL GAMES<br />Not as many breakdowns of individual game ratings are going to closely track the overall average for the roster as you might think. This is because one of the interesting things about basketball that makes it different from most other sports is that "how good" a player is from game to game varies radically. The best players sometimes have terrible games where they do almost nothing, while players who normally do not do much can every once in a while have outstanding games, at least if you measure it per minute on the court. If you just looked at actual production, and never at a reserve player's Real Player Rating, you would hardly notice any of his unusually outstanding games, since players who normally do not do much will normally not have much playing time.<br /><br />INTERACTIONS BETWEEN PLAYING TIMES, PLAYER RATINGS, AND THE NEEDS OF TEAMS<br />There are certain things that only certain players can do very well, and if those things are crucial for the team, then those players will have to play more minutes than they might otherwise play. The extra minutes might tend to reduce the player's Real Player Rating, while his total production will rise with the additional minutes. 
So to fairly and completely evaluate any player, you must always look at both the Real Player Rating (RPR) and the Real Player Production (RPP).<br /><br />Furthermore, it is strongly suspected that, in order to compete in the playoffs, a team must have as many players of as high a quality (RPR) as possible, while at the same time having at least one or two players whose actual production is among the highest in the NBA regardless of exactly how high the RPRs happen to be. (All high RPP players will be relatively high RPR players; some will be higher than others.) Specifically for example, LeBron James' actual massive amount of production is most likely just as important to the Cleveland Cavaliers as is his RPR or, in other words, as is his rate of production. Similarly, Kobe Bryant's quantity is probably at least as important to the Lakers as is his quality.<br /><br />Whereas, teams such as the Denver Nuggets, who have instructed a possible huge producer, Carmelo Anthony, to "not worry about scoring," may have made a fatal mistake relative to the playoffs, because teams with no extremely high rate producers may be generally doomed to lose quickly in the playoffs even if they have an unusually large number of high quality players as shown by RPR. This is because extremely high RPP players can by themselves "dominate a game" to some extent, meaning they can by themselves possibly win the game for their team, without the complications that come into play if you need to coordinate several high RPR but ultimately and theoretically limited RPP players.<br /><br />Players who over the course of a season appear to rank higher in RPR (quality) but lower in RPP (quantity) may not be getting enough playing time. Players who over the course of a season appear to rank lower in RPR (quality) but higher in RPP (quantity) may be getting too much playing time. 
But as alluded to earlier, you must not automatically conclude this, because some skills are needed out on the court most of the time, and yet may be available from only a small number of players on the roster. Such players may have to get more playing time due to that critical skill in short supply, even if their overall quality does not seem to justify all of that playing time.<br /><br />A relatively common reason for unusual playing time will be players who are either truly outstanding defenders (who get extra playing time) or truly bad defenders (who get their playing time reduced).<br /><br />Another common reason for extra playing time will be if a team has a point guard who has many more turnovers than the average point guard has. Because the point guard is so important, a good coach has to play his best playmaking guard at the position for a full set of minutes every game, pretty much regardless of how many turnovers that player makes. If you take out your designated point guard due to "too many turnovers," it may end up sort of like cutting your foot off because you have a bad case of athlete's foot!<br /><br />MINIMUM PLAYING TIME RULES<br />Only players who have played at least 10% of the minutes of whoever has played the most minutes on the team are included in these reports. Any player who has played for less than 10% of the minutes of the player who has played the most minutes is not included, since he didn't play for long enough to be fairly or reasonably compared with the other players. Furthermore, as described in the adjustment for defending section, only players who have played at least 300 minutes can have a defensive rating, and therefore an overall RPR, given to them. Both the 10% and the 300 minutes minimums must be met for a player to be rated.<br /><br />REAL PLAYER PRODUCTION<br />Of course, looking at actual production (everything positive added together and everything negative subtracted out) is something that is extremely important too. 
The total production (everything good and everything bad combined together) is simply called Real Player Production or RPP.<br /><br />There is no methodology for including defending (other than rebounding, steals, blocks, and personal fouls) in RPP at this time.<br /><br />SOURCE OF TRACKED BASKETBALL COUNTS<br />The sources for the raw counts of scores, rebounds, steals, turnovers, and so forth are ESPN.com and NBA.com.<br /><br />THE FORMULA<br />For 2008-09, the RPR formula has been very carefully and accurately tweaked again and is set to be as follows:<br /><br />POSITIVE FACTORS<br />Points 1.00 (at par)<br />Number of 3-Pt FGs Made 1.00<br />Number of 2-Pt FGs Made 0.60<br />Number of FTs Made 0.00<br /><br />Assists 1.75<br /><br />Offensive Rebounds 1.15<br />Defensive Rebounds 1.25<br />Blocks 1.60<br />Steals 2.15<br /><br />NEGATIVE FACTORS<br />3-Pt FGs Missed -1.00<br />2-Pt FGs Missed -0.85<br />FTs Missed -0.85<br /><br />Turnovers -2.00<br />Personal Fouls -0.80<br /><br />DEFENDING RATING<br />A quality of defending rating of between 0 and 0.230 is added to the "Base or unadjusted RPR". In most cases, the defending rating is between 0.050 and 0.150. 
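The weighted sum, and the break-even shooting percentages it implies, can be sketched in Python (a hypothetical illustration; the function names and the sample stat line are mine, not from Quest):

```python
# RPR factors exactly as listed above; negative factors are penalties.
WEIGHTS = {
    "points": 1.00, "fg3_made": 1.00, "fg2_made": 0.60, "ft_made": 0.00,
    "assists": 1.75, "off_reb": 1.15, "def_reb": 1.25,
    "blocks": 1.60, "steals": 2.15,
    "fg3_missed": -1.00, "fg2_missed": -0.85, "ft_missed": -0.85,
    "turnovers": -2.00, "fouls": -0.80,
}

def base_rpr(stats, minutes):
    """Base (unadjusted) RPR: every tracked good and bad thing, weighted
    and summed, then divided by minutes to give a per-time rate. The
    defending rating (0 to 0.230) is added to this separately."""
    return sum(w * stats.get(k, 0) for k, w in WEIGHTS.items()) / minutes

# Hypothetical 36-minute stat line: 24 points on 2-of-5 from three,
# 7-of-13 from two, and 4-of-5 at the line.
line = {"points": 24, "fg3_made": 2, "fg2_made": 7, "ft_made": 4,
        "fg3_missed": 3, "fg2_missed": 6, "ft_missed": 1,
        "assists": 6, "off_reb": 1, "def_reb": 5, "blocks": 1,
        "steals": 2, "turnovers": 3, "fouls": 2}
print(round(base_rpr(line, 36), 2))  # 1.04

# Break-even shooting percentage implied by a combined award and penalty:
# solve award * p + penalty * (1 - p) = 0 for p. A made 3-pointer is
# worth its 3 points plus the 1.00 factor, i.e. 4.00 in total, and so on.
def break_even_pct(award, penalty):
    return -penalty / (award - penalty)

print(round(break_even_pct(4.00, -1.00), 3))  # 0.2    3-pointer
print(round(break_even_pct(2.60, -0.85), 3))  # 0.246  2-pointer
print(round(break_even_pct(1.00, -0.85), 3))  # 0.459  free throw
print(round(2.00 / 1.75, 3))                  # 1.143  assist/turnover zero point
```

The break-even values match the zero points reported in the next section.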
See the "User Guide for the Defending Components" below for a very detailed explanation of how we determine how to defensively rate the players.<br /><br />ACTUAL COMBINED AWARD OR PENALTY BY TYPE OF SHOT<br />3-Pointer Made 4.00<br />2-Pointer Made 2.60<br />Free Throw Made 1.00<br />3-Pointer Missed -1.00<br />2-Pointer Missed -0.85<br />Free Throw Missed -0.85<br /><br />ZERO POINTS: PERCENTAGES BELOW WHICH THERE IS A NEGATIVE NET RESULT<br />3-Pointer 0 score % 0.200<br />2-Pointer 0 score % 0.246<br />1-Pointer 0 score % 0.459<br /><br />This means that if a player has a lower percentage than any of the three above, then his RPR would be lower rather than higher as a result of his shooting that type of shot.<br /><br />ASSISTS VERSUS TURNOVERS ZERO POINT<br />Assist/Turnover Ratio That Yields 0 Net Points: 1.143<br /><br />This means that any player who has an assist/turnover ratio of less than 1.143 is losing RPR rating when assists and turnovers are considered. He would have to either increase assisting or reduce turnovers to turn the combined effect from assists and turnovers positive.<br /><br />QUALITY (RPR) AND QUANTITY (RPP) SUMMARIZED ONE LAST TIME<br />RPR reports show for each player the RPR (Real Player Rating), which tells you how well a player did (all the good things minus all the bad things) out on the court per unit of time. The RPP (Real Player Production) report tells you how much in total (the sum of the good things minus the sum of the bad things) a player did out on the court, without regard to playing time.<br /><br />Many and maybe most sports watchers, and an unknown but probably disturbingly large number of sports managers, make the mistake of exaggerating the importance of quantity and overlooking quality to some extent. These reports allow you to expand your horizons. 
These reports put quantity and quality side by side, which is extremely valuable, because both are roughly equally important in explaining accurately why and how the team is playing the way it is.<br /><br />======== DEFENDING AND OFFENSIVE SUB RATINGS ======================<br /><br />THE DEFENDING ADJUSTMENT TO REAL PLAYER RATINGS AND THE DEFENDING SUB RATING<br /><br />THE DEFENDING COMPONENTS OR SUB RATINGS OF REAL PLAYER RATINGS--NEW AS OF JANUARY 2009<br />As of January 8, 2009, The Quest is proud to announce to you that the second major improvement to Real Player Ratings (RPR) in less than half a year is now fully up and running. The first major improvement was a set of needed changes in the factors used for RPR. The second major improvement (a series of improvements, actually) is, so far as I am aware, the first ever effort to rate the defensive efforts of players that are hidden unless you watch all of that player's games, because they are not scored or tracked by scorekeepers.<br /><br />I have been talking about, working for, and expecting the breakthrough in evaluation of defending for almost two years. Now that the breakthrough has come, I am even more certain that RPR is the best overall rating system in existence, and that it is now roughly as good as it will ever or can ever be.<br /><br />I recently developed a statistically valid way to rate the defending of players, that is, what they do to prevent scores other than rebounding, blocks, steals, and fouls, which were always included in RPR. This would include man to man defending, zone defending, pick and roll defending, defensive recognition, and defensive rotation.<br /><br />Although the technique used had to be indirect and subject to a very small amount of statistical error, it validly awards the better defenders with bigger RPR bonuses. It has been validated by comparing the results obtained with the defensive ratings shown on three different "advanced basketball statistics" web sites. 
Our results were shown to be extremely highly correlated with the results shown on the other sites. Where there are small differences, I believe mine are better, if only because mine use simple, bedrock statistical theory rather than involved formulas.<br /><br />HIDDEN DEFENDING<br />Before explaining how we reveal it, let's define "hidden defending." Exactly what is hidden defending? It's defending not tracked by the NBA. It's every action that helps to prevent the other team from scoring other than rebounding, stealing, blocking, and fouling. So it would include man to man defending, zone defending, rotating in general, defensive recognition, and quick defensive response to various offensive tactics, such as pick and rolls. Obviously, if a defender is good at these things, the other team doesn't score as many points as it would if the defender were lousy at these things.<br /><br />HOW TO REVEAL HIDDEN DEFENDING IN FOUR STEPS<br />STEP ONE: CALCULATION OF RAW HIDDEN DEFENDING RATINGS<br />Unlike most "advanced statistics" that are published on the internet or in print, we give you all the details about how we do ours, so that you can evaluate the evaluations, so to speak. The following is specifically what we are doing to be able to accurately and fairly compare players' defending:<br /><br />Where do we start to discover what is hidden? We keep it as simple and yet as accurate as possible. We use the most official and therefore presumably the most reliable data as the building blocks for rating the defense of NBA players. We start with the player minutes, and the points scored by the other team while the player was on the court, that are shown in the plus/minus statistical section at NBA.com.<br /><br />There are no value judgments made regarding a player's defending style, or regarding a team's defending style for that matter. We don't care about style. 
Using points allowed per minute is looking at results, nothing more and nothing less.<br /><br />After simply dividing points allowed by minutes on the court, we adjust (we standardize, to be more precise) that rate for the pace of the team and for the quality of the team's defense. The two adjustments are needed so that the ratings of players who are on different teams can be fairly compared.<br /><br />Players who are on teams with faster paces give up more points per minute through no fault of their own. Similarly, players who are on teams with less efficient defenses give up more points per minute, regardless of how well they defend, everything else held constant. You could not fairly compare players on two or more teams with different paces and different team defense qualities unless you standardized, or in other words controlled for, those differences for all NBA players.<br /><br />USE OF BASIC STATISTICAL SAMPLING THEORY<br />What we are doing is using an indirect and inexact yet accurate and statistically valid way to discover who the better defenders are. No two players are out on the court for exactly the same minutes. So although what the other players on the court do defensively is a very large factor in any one player's points allowed per minute, when you look at many, many hundreds of minutes, what the individual player does, or does not do, defensively will eventually show up in that particular player's points allowed per minute statistic.<br /><br />In other words, what any individual player does defensively has to sooner or later show itself as a differentiation of his points allowed per minute from other players'. 
As the number of minutes rises above 500, and then 1,000, and then, for many players, above 2,000 and even 3,000 for a regular season, what a particular player does or does not do defensively becomes more and more exactly shown by the points allowed per minute number. This is very basic statistical sampling theory in operation. Statistical sampling theory is the easy to understand bedrock theory of statistics.<br /><br />Due to the necessity of a large sample of minutes, we will not do defending estimates for any player who has played for fewer than 300 minutes. Quality of defending estimates will be slightly less accurate for players who have only played between 301 and about 600 minutes than they will be for players who have played more than 600 minutes. We believe that the estimates are going to be extremely accurate for all players who have played 750 minutes or more. The idea is relatively simple: as the number of hundreds of minutes played goes up, the accuracy of this system improves, to the point where it gives you the same information you would have if you knew exactly how many possessions of the other team each player ruined with his defending.<br /><br />For your information, all players allow between 1.85 and 2.18 points per minute; most allow between 1.94 and 2.11. The overall NBA average is about 2.03 points per minute allowed.<br /><br />STEP TWO: CONVERSION OF RAW HIDDEN DEFENDING POINTS ALLOWED PER MINUTE TO FILTERED HIDDEN DEFENDING POINTS ALLOWED PER MINUTE<br />Since different players have different breakdowns between how much of their defending shows up in tracked statistics such as defensive rebounding and how much of it does not, in order to improve accuracy we need a method to filter, or in other words separate, the two categories of defending. If we didn't do this, we would still have a useful statistic, but it would be biased in favor of players whose defending is counted in tracked statistics more than other players'. 
There would be, in effect, some double counting of defending for players who have most of their quality defending tracked by scored statistics.<br /><br />The filter used is to multiply the raw hidden defending ratings by the percentage of the real player production that is offensive. In other words, we take the inverse of the percentage of a player's real player production that is defensive and multiply the raw hidden defending ratings by that. The rationale is that although the exact relation is unknowable, we know that for a given raw hidden defending performance level, there will be an inverse relation between scored defending and hidden defending. The more defensive rebounds, steals, and blocks a player is making for any raw level, the less he is relying on hidden defending to achieve that raw level, and vice versa. So multiplying by the inverse of the percentage of all contributions that are defensive (in other words, multiplying by the offensive percentage) filters out much of the bias that is in the raw hidden defending rating.<br /><br />To be even more specific, we first extract defensive rebounding, steals, blocks, and personal fouls, the sum total of which is called "Scored Defensive Contribution". All of the other components combined constitute "Scored Offensive Contribution". Now we can determine the percentages of the RPP that are offensive and defensive, and then we can use the offensive percentages to convert the raw hidden defending ratings to filtered hidden defending ratings.<br /><br />STEP THREE: CONVERSION OF FILTERED ALLOWED POINTS PER MINUTE TO FILTERED HIDDEN DEFENDING RATING<br />We need to translate the adjusted or filtered points allowed per minute into numerical terms that are the most useful with respect to RPR. So, with a very carefully designed translation scale, we amplify the very small differences in different players' points allowed per minute numbers into much larger differences in the hidden defending ratings for each player.
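Steps Two and Three can be sketched together in code. The filter follows the stated rule (multiply by the offensive share of Real Player Production), but the translation scale itself is not published, so the linear amplification below, including the slope, is purely an assumption for illustration:

```python
def raw_hidden_rating(standardized_ppm, league_avg_ppm=2.03, slope=1.5):
    """An ASSUMED linear stand-in for Step Three's translation scale:
    every point per minute below league average earns a positive rating,
    and ratings never go below zero. QFTR's actual chart is unpublished."""
    return max(0.0, slope * (league_avg_ppm - standardized_ppm))

def filtered_hidden_rating(raw_rating, scored_offense, scored_defense):
    """Step Two's filter: multiply by the offensive share of Real Player
    Production, reducing double counting for players whose defending
    already shows up in tracked statistics (rebounds, steals, blocks)."""
    offensive_share = scored_offense / (scored_offense + scored_defense)
    return raw_rating * offensive_share

# Illustrative numbers only: a player allowing 1.95 standardized points per
# minute, whose production is three-quarters offensive.
raw = raw_hidden_rating(1.95)
filtered = filtered_hidden_rating(raw, scored_offense=18.0, scored_defense=6.0)
```

The filter's effect is that two players with identical raw hidden ratings keep different shares of them: the one whose production is mostly offensive keeps more, since little of his defending was already counted.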
Then we simply add the hidden defending rating to the Base RPR to yield RPR.<br /><br />STEP FOUR: USE OF HIDDEN DEFENDING RATING<br />We now have added in a reasonably good estimate of the value of actions of players that are not even kept track of by scorekeepers! The filtered hidden defending ratings are added to the Base, Unadjusted, or Scored RPR to give RPR. Technically, you could call the final result "Adjusted RPR," but we are trying to avoid that terminology because of how important we think it is to include the hidden defending in the performance measure.<br /><br />SIZE OF THE DEFENDING ADJUSTMENTS<br />Base RPRs for most NBA players range between .400 and 1.000. The range of possible defending adjustments to the base RPRs is from 0 to about .230. In most cases, however, the adjustment will be between .020 and .170.<br /><br />THE DEFENDING SUB RATING: PUTTING THE HIDDEN AND THE UNHIDDEN TOGETHER<br />Aside from the Hidden Defending Rating we can find out how well each player does in terms of unhidden or scored defending, can't we? Of course we can.<br /><br />Aside from the hidden there is of course unhidden defending, which would be rebounding plus steals plus blocks minus personal fouls. If we extract the combination of those four out of the same counts that underlie the RPR as a whole, we get what we are going to call the Scored Defending Contribution. This could also be thought of as the Tracked Defending Contribution if you prefer. Then if we divide this by minutes, we have a Scored (or Tracked) Defending Rating.<br /><br />Finally, if we combine the Hidden Defending Rating (HDR) with the Scored Defending Rating (SDR) we have an Overall Defending Rating (ODR). I am for now going to simply multiply the HDR by two and add that to the SDR to yield the ODR. To combine them this way is more arbitrary than my usual standards allow; I am doing this because there is as yet no non-arbitrary way of doing it.
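The combination just described reduces to a single line of arithmetic. Here it is as code, with illustrative numbers (the doubling of HDR is the guide's own stated, admittedly arbitrary, weighting):

```python
def overall_defending_rating(hidden_rating, scored_rating):
    """ODR = 2 x HDR + SDR, the guide's provisional weighting that brings
    hidden defending almost up to par with scored (tracked) defending,
    since hidden ratings run on a much smaller numerical scale."""
    return 2 * hidden_rating + scored_rating

# Illustrative values: an HDR of .090 and an SDR of .240 combine to an
# ODR in which the hidden component carries nearly equal weight.
odr = overall_defending_rating(hidden_rating=0.090, scored_rating=0.240)
```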
The formula of two times HDR plus SDR brings HDR almost up to par with SDR in terms of the actual numbers and the averages of those numbers involved.<br /><br />In other words, I am saying for now that hidden defending is almost as important as scored defending. There appear to be many coaches and not a few hardcore basketball fans who believe that hidden defending is actually more important than scored defending, but I am very likely never going to agree with that. Hidden defending is important, and plausibly almost as important as tracked defending, but it is like quicksand: a substantial minority of basketball people tend to get carried away estimating its importance, and then become more and more trapped by that error in how they look at basketball, or in how they coach their team if they are coaching.<br /><br />THE OFFENSIVE SUB RATING<br />The Offensive Sub Rating is all tracked actions other than the defensive ones (defensive rebounding, steals, blocks, and personal fouls) combined together using the RPR weights, divided by minutes. In other words, it is Total Offensive Production divided by minutes. For the list of all tracked actions and the weight factors assigned to each, see the section titled "The Formula" above.<br /><br />======== SUMMARY OF PRIMARY FORMULAS =================<br />Real Player Rating or RPR = (All tracked or scored actions weighted according to best available analysis of importance / minutes) + Filtered Hidden Defending Adjustment<br /><br />Real Player Production or RPP = Total Offensive Production + Total Defensive Production.
(All tracked or scored actions weighted according to best available analysis of importance.)<br /><br />Offensive Sub Rating = Total Scored or Tracked Offensive Production / Minutes<br /><br />Defensive Sub Rating = (Total Scored or Tracked Defensive Production / Minutes) + 2 X Filtered Hidden Defending Adjustment<br /><br />Filtered Hidden Defending Adjustment = Raw Hidden Defending Adjustment X Percentage of RPP that is Offensive<br /><br />Raw Hidden Defending Adjustment = Assigned value based on a chart, the objective of which is to amplify seemingly minute differences in points allowed per minute.<br /><br />[Historical and Non-Current] User Guide for the Defending Components or Sub Ratings of Real Player Ratings, January 2009 (posted February 9, 2009)<br /><br />THE DEFENDING COMPONENTS OR SUB RATINGS OF REAL PLAYER RATINGS--NEW AS OF JANUARY 2009<br />As of January 8, 2009, The Quest is proud to announce to you that the second major improvement to Real Player Ratings (RPR) in less than half a year is now fully up and running. The first major improvement was a set of needed changes in the factors used for RPR. The second major improvement (a series of improvements, actually) is, so far as I am aware, the first ever effort to rate the defensive efforts of players that are hidden unless you watch all of that player's games, because they are not scored or tracked by scorekeepers.<br /><br />I have been talking about, working for, and expecting the breakthrough in evaluation of defending for almost two years.
Now that the breakthrough has come, I am even more certain that RPR is the best overall rating system in existence, and that it is now roughly as good as it will ever or can ever be.<br /><br />I recently developed a statistically valid way to rate the defending of players, that is, what they do to prevent scores other than rebounding, blocks, steals, and fouls, which were always included in RPR. This would include man to man defending, zone defending, pick and roll defending, defensive recognition, and defensive rotation.<br /><br />Although the technique used had to be indirect and inexact, it validly awards the better defenders with bigger RPR bonuses. It has been validated by comparing the results obtained with the defensive ratings shown on three different "advanced basketball statistics" web sites. Our results were shown to be extremely highly correlated with the results shown on the other sites. Where there are small differences, I believe mine are better, if only because mine use simple, bedrock statistical theory rather than involved formulas.<br /><br />HIDDEN DEFENDING<br />Before explaining how we reveal it, let's define "hidden defending." Exactly what is hidden defending? It's every action that helps to prevent the other team from scoring other than rebounding, stealing, and blocking. So it would include man to man defending, zone defending, rotating in general, defensive recognition, and quick defensive response to various offensive tactics, such as pick and rolls. Obviously, if a defender is good at these things, the other team doesn't score as many points as it would if the defender were lousy at these things.<br /><br />HOW TO REVEAL HIDDEN DEFENDING IN FIVE STEPS<br />STEP ONE: CALCULATION OF RAW HIDDEN DEFENDING RATINGS<br />Unlike most "advanced statistics" that are published on the internet or in print, we give you all the details about how we do ours, so that you can evaluate the evaluations, so to speak.
The following is specifically what we are doing to be able to accurately and fairly compare players' defending:<br /><br />Where do we start to discover what is hidden? We keep it as simple and yet as accurate as possible. We use the most official, and therefore presumably the most reliable, data as the building blocks for rating the defense of NBA players. We start with each player's minutes and the points scored by the other team while that player was on the court, as shown in the plus/minus statistical section at NBA.com.<br /><br />After simply dividing points allowed by minutes on the court, we adjust that rate for the pace of the team and for the quality of the team's defense. The two adjustments are needed so that the ratings of players who are on different teams can be fairly compared.<br /><br />Players who are on teams with faster paces give up more points per minute through no fault of their own. Similarly, players who are on teams with less efficient defenses give up more points per minute, everything else held constant. You could not fairly compare players on two or more teams with different paces and different team defense qualities unless you standardized, or in other words controlled for, those differences for all NBA players.<br /><br />USE OF BASIC STATISTICAL SAMPLING THEORY<br />What we are doing is using an indirect and inexact yet accurate and statistically valid way to discover who the better defenders are. No two players are out on the court for all the exact same minutes.
So while what the other players on the court do defensively is a very large factor in any one player's points allowed per minute, over many hundreds of minutes what the individual player himself does, or does not do, defensively will eventually show up in his points allowed per minute statistic.<br /><br />In other words, what any individual player does defensively has to sooner or later show itself in the points allowed per minute. As the number of minutes rises above 500, and then 1,000, and then, for many players, above 2,000 and even 3,000 for a regular season, what a particular player does or does not do defensively is shown more and more exactly by the points allowed per minute number. This is very basic statistical sampling theory in operation, and statistical sampling theory is the easy-to-understand bedrock theory of statistics.<br /><br />Due to the necessity of a large sample of minutes, we will not do defending estimates for any player who has played fewer than 300 minutes. Quality of defending estimates will be slightly less accurate for players who have played between 300 and about 600 minutes than for players who have played more than 600 minutes. We believe the estimates are going to be extremely accurate for all players who have played 750 minutes or more. The idea is relatively simple: as the number of hundreds of minutes played goes up, the accuracy of this system improves, to the point where it gives you the same information you would have if you knew exactly how many possessions of the other team each player ruined with his defending.<br /><br />For your information, all players allow between 1.87 and 2.16 points per minute; most allow between 1.94 and 2.11.
The overall NBA average is about 2.03 points per minute allowed.<br /><br />STEP TWO: CONVERSION OF RAW HIDDEN DEFENDING POINTS ALLOWED PER MINUTE TO FILTERED HIDDEN DEFENDING POINTS ALLOWED PER MINUTE<br />Since different players differ in how much of their defending shows up in tracked statistics such as defensive rebounding and how much does not, to improve accuracy we need a method to filter, or in other words separate, the two categories of defending. If we didn't do this, we would still have a useful statistic, but it would be biased in favor of players whose defending is counted in tracked statistics more than other players'. There would be, in effect, some double counting of defending for players who have most of their quality defending tracked by scored statistics.<br /><br />The filter used is to multiply the raw hidden defending ratings by the percentage of the real player production that is offensive. In other words, we take the inverse of the percentage of a player's real player production that is defensive and multiply the raw hidden defending ratings by that. The rationale is that although the exact relation is unknowable, we know that for a given raw hidden defending performance level, there will be an inverse relation between scored defending and hidden defending. The more defensive rebounds, steals, and blocks a player is making for any raw level, the less he is relying on hidden defending to achieve that raw level, and vice versa. So multiplying by the inverse of the percentage of all contributions that are defensive (in other words, multiplying by the offensive percentage) filters out much of the bias that is in the raw hidden defending rating.<br /><br />To be even more specific, we first extract defensive rebounding, steals, blocks, and personal fouls, the sum total of which is called "Scored Defensive Contribution".
All of the other components combined constitute "Scored Offensive Contribution". Now we can determine the percentages of the RPP that are offensive and defensive, and then we can use the offensive percentages to convert the raw hidden defending ratings to filtered hidden defending ratings.<br /><br />STEP THREE: CONVERSION OF FILTERED ALLOWED POINTS PER MINUTE TO FILTERED HIDDEN DEFENDING RATING<br />We need to translate the adjusted or filtered points allowed per minute into numerical terms that are the most useful with respect to RPR. So, with a very carefully designed translation scale, we amplify the very small differences in different players' points allowed per minute numbers into much larger differences in the hidden defending ratings for each player. Then we simply add the hidden defending rating to the Base RPR to yield RPR.<br /><br />STEP FOUR: USE OF HIDDEN DEFENDING RATING<br />We now have added in a reasonably good estimate of the value of actions of players that are not even kept track of by scorekeepers! The filtered hidden defending ratings are added to the "Base or Scored RPR" to give RPR. The range of possible defending adjustments to the base RPR is from 0 to about .230. In most cases, however, the adjustment will be between .030 and .150.<br /><br />STEP FIVE: OVERALL EVALUATION OF DEFENDING<br />Aside from the Hidden Defending Rating we can find out how well each player does in terms of unhidden or scored defending, can't we? Of course we can.<br /><br />Aside from the hidden there is of course unhidden defending, which would be rebounding plus steals plus blocks minus personal fouls. If we extract the combination of those four out of the same counts that underlie the RPR as a whole, we get what we are going to call the Scored Defending Contribution. This could also be thought of as the Tracked Defending Contribution if you prefer.
Then if we divide this by minutes, we have a Scored (or Tracked) Defending Rating.<br /><br />Finally, if we combine the Hidden Defending Rating (HDR) with the Scored Defending Rating (SDR) we have an Overall Defending Rating (ODR). I am for now going to simply multiply the HDR by two and add that to the SDR to yield the ODR. To combine them this way is more arbitrary than my usual standards allow; I am doing this because there is as yet no non-arbitrary way of doing it. The formula of two times HDR plus SDR brings HDR almost up to par with SDR in terms of the actual numbers and the averages of those numbers involved.<br /><br />In other words, I am saying for now that hidden defending is almost as important as scored defending. There appear to be many coaches and not a few hardcore basketball fans who believe that hidden defending is actually more important than scored defending, but I am very likely never going to agree with that. Hidden defending is important, and plausibly almost as important as tracked defending, but it is like quicksand: a substantial minority of basketball people tend to get carried away estimating its importance, and then become more and more trapped by that error in how they look at basketball, or in how they coach their team if they are coaching.<br /><br />[Historical and Non-Current] User Guide for NBA Real Player Ratings, July 2008 (posted July 30, 2008)<br /><br />IMPORTANT NOTICE: THIS USER GUIDE IS EXPIRED. FOR THE CURRENT USER GUIDE TO REAL PLAYER RATINGS LOOK FOR THE LATEST VERSION IN THE REPORT INDEX.
THIS GUIDE DOES APPLY, HOWEVER, TO REAL PLAYER RATING REPORTS PRIOR TO NOVEMBER 2008.<br /><br />USER GUIDE FOR NBA REAL PLAYER RATINGS<br />As of July 2008<br /><br />We start by taking the top 390 players, out of about 450 players who played in the NBA in 2007-08, ranked according to gross or simple player rating, which is as follows:<br /><br />ADD THE FOLLOWING<br />Points<br />Rebounds<br />1.4 X Assists<br />Steals<br />1.4 X Blocks<br />Field Goals Made<br />0.5 X # of 3-Pointers Made<br />0.25 X # of Free Throws Made<br /><br />SUBTRACT THE FOLLOWING<br />0.7 X Turnovers<br />0.8 X # of Missed Field Goals<br />0.8 X # of Missed Free Throws<br /><br />Real Player Rating, the holy grail of player ratings, is then gross or simple player rating divided by minutes per game. This gives you the actual production per minute of each player, so you can now directly compare players with very different playing times. By discovering players who have high ratings, but low minutes, you can spot players who were underrated by their coaches. Among younger players, you can spot the promising ones and the ones who need more time to develop into full NBA players, time that in some cases may not be available, meaning that the player will end up playing in Europe or something.<br /><br />Then we eliminate any player who did not play in at least 16 games.<br /><br />Then we eliminate any player who did not play at least 7 minutes per game in the games he played in.<br /><br />Finally, we lop off the players with the lowest real player ratings from the bottom (36 players in this case) and take the top 330 NBA players to form the official Real Player Ratings list. The average team in the NBA will have 11 players from this list.<br /><br />If a player does not appear on the list, then one or more of the following is true:<br /><br />1. The player was not among the top 390 out of 450 players for gross or simple real player rating.<br />2. 
His real player rating is very low, less than .536.<br />3. He played in fewer than 16 games.<br />4. His minutes per game were less than 7. In many cases, this would be a player who played mostly or only in garbage time.<br />5. The player is one of the best made you miss type of defenders in the NBA, but has very little offensive game. Since the scores that a defender prevents are an unknown quantity, they are not accounted for by the Real Player Rating, and so it is possible for a player not to make the top 330 list despite being a valuable asset, albeit mostly on defense only. Bruce Bowen, the San Antonio small forward, is the most obvious example. His real player rating is only .370, yet he played over 30 minutes a game due to all the scores of the Spurs’ opponents he stops.<br /><br />ADJUSTMENTS FOR MADE THEM MISS DEFENDING<br />Since shots that a defender stops from going in the basket, with no actual block, can not be known and are not kept track of, the Real Player Rating is not exactly perfect. But if you are very knowledgeable about the skills and efforts of players with respect to preventing scores, you can make your own adjustments based on your knowledge.<br /><br />In order to improve my coverage of the Denver Nuggets, I introduced in 2008 “adjustments for defending” to the Nuggets real player ratings. Although neither I nor anyone else knows anywhere near exactly how many scores were prevented by the various Nuggets, since I was very familiar with the players and what they can and do accomplish on the court, I was able to rank the Nuggets with respect to made you miss defending.<br /><br />I decided that I would then assign an adjustment for made them miss defending to the real player ratings of each Nugget, in equal increments.
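The equal-increment guideline this section builds up to (and tabulates in full below) follows three rules: .010 steps per block of 12 ranks, a ceiling and floor of plus or minus .130, and a neutral middle band at ranks 157-174. Those rules can be encoded directly; the function name here is a hypothetical choice for illustration:

```python
def made_them_miss_adjustment(rank):
    """Adjustment for an estimated made-them-miss rank among the 330 rated players.

    Ranks 1-156 earn +.130 down to +.010 in .010 steps per block of 12;
    ranks 157-174 are neutral; ranks 175-330 mirror the top half, -.010 to -.130.
    """
    if not 1 <= rank <= 330:
        raise ValueError("rank must be between 1 and 330")
    if rank <= 156:
        return round(0.130 - 0.010 * ((rank - 1) // 12), 3)
    if rank <= 174:
        return 0.0
    return round(-0.010 - 0.010 * ((rank - 175) // 12), 3)
```

This reproduces the guide's own worked cases: a rank in the 259-270 band carries a -.080 adjustment (the Carmelo Anthony example), and the top 12 carry +.130 (the Kenyon Martin example).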
Furthermore, I decided that the top half of the list of Nuggets according to made them miss defending would get positive adjustments, while the bottom half would get negative adjustments.<br /><br />The next step was to estimate how much the adjustments should be. After giving it my best thought, I decided that a +.130 adjustment would be the best estimate I can come up with for what the best made them miss defender the Nuggets have should get. So Kenyon Martin’s Real Player Rating is .777, but his Real Player Rating adjusted for made them miss defending is .907.<br /><br />The equal increment adjustments were symmetric about zero, so the best made them miss defender, as just discussed, received a +.130 adjustment, while the worst made them miss defender received a -.130 adjustment. Notice that this means that the best made them miss defender is a full .260 better than the worst made them miss defender, relative to what the basic Real Player Ratings indicate. Since the Real Player Ratings of the entire list of 330 players range from .536 to 1.268, a range of .732, the .260 range for made them miss defending is, it seems clear to me, an adequate but not excessive correction for the fact that no one actually knows how many shots various players prevented from going in.<br /><br />So what you do, if you think you know about how good a player is in made them miss defending compared with other players, is to adjust that player’s Real Player Rating up or down, by as much as .130 up if the player is among the very best made them miss defenders among the 330 players rated, and by as much as .130 down if the player is among the very worst.<br /><br />Specifically, you estimate how the player would rank among the 330 players who are in the Real Player Ratings rankings, and then adjust that player’s rating according to the following guideline:<br /><br />Top 12: +.130<br />13-24: +.120<br />25-36: +.110<br />37-48: +.100<br />49-60: +.090<br />61-72: +.080<br />73-84: +.070<br />85-96:
+.060<br />97-108: +.050<br />109-120: +.040<br />121-132: +.030<br />133-144: +.020<br />145-156: +.010<br />157-174: 0<br />175-186: -.010<br />187-198: -.020<br />199-210: -.030<br />211-222: -.040<br />223-234: -.050<br />235-246: -.060<br />247-258: -.070<br />259-270: -.080<br />271-282: -.090<br />283-294: -.100<br />295-306: -.110<br />307-318: -.120<br />319-330: -.130<br /><br />As another Nuggets example, and maybe I got carried away slightly with the hysteria regarding Carmelo Anthony’s defending, but I estimated Anthony would be ranked in the 259-270 range among the 330 players if all these players were ranked according to made them miss defending. So Anthony’s Real Player Rating, which is 1.091, becomes 1.011, when it is adjusted for made them miss defending.<br /><br />Remember that you can adjust only players you know very well as to their defending; it is most likely beyond anyone's capabilities to even approximately rank all 330 players, so there can be no full "Real Player Ratings Adjusted for Made Them Miss Defending". No one that I have found has attempted to do this anywhere on the internet! However, as I did with the Nuggets, I believe that you can calculate the adjusted ratings for a team, if you know the players on that team very well.<br /><br />SCALE FOR REAL PLAYER RATINGS<br />All Time Historic Superstar Player 1.175 to Up<br />Superstar Player 1.050 to 1.174<br />Star Player 0.925 to 1.049<br />Outstanding Player 0.825 to 0.924<br />Major Role Player 0.750 to 0.824<br />Role Player 0.675 to 0.749<br />Minor Role Player 0.600 to 0.674<br />Reserve Only Player 0.525 to 0.599<br />Marginal or Struggling Player 0.450 to 0.524<br />Bust Players (or Defense Only!) 
Lower to 0.449<br /><br />Nuggets Player Ratings From the 2006-07 Season (posted December 17, 2007)<br /><br />NUGGETS PLAYER RATINGS FROM THE 2006-07 SEASON (LAST YEAR)<br />Following the player rating is the team that the player played on in 2006-07<br />Carmelo Anthony.. 41.5 Nuggets<br />Allen Iverson...... 39.4 Nuggets and 76'ers<br />Marcus Camby... 32.4 Nuggets<br />Nene............... 23.3 Nuggets<br />Chucky Atkins.... 21.8 Grizzlies<br />J.R. Smith....... 18.5 Nuggets<br />Steve Blake...... 15.2 Nuggets<br />Eduardo Najera... 13.9 Nuggets<br />Steven Hunter.... 13.3 76'ers<br />Reggie Evans..... 12.6 Nuggets<br />Linas Kleiza....... 12.1 Nuggets<br />Yakhouba Diawara. 7.0 Nuggets<br />DerMarr Johnson.. 5.3 Nuggets<br />Bobby Jones...... 4.0 76'ers<br /><br />NUGGETS WHO HARDLY PLAYED AT ALL IN 2006-07<br />Kenyon Martin.... Injured-Nuggets<br />Anthony Carter... Hardly Played-Nuggets<br />Von Wafer....... Hardly Played-Clippers<br />Jelani McCoy..... Did Not Play At All-No Team<br /><br />Source for the ratings: ESPN<br /><br />[Historical and Non-Current] Real Player Ratings User Guide, December 2007 (posted December 17, 2007)<br /><br />IMPORTANT NOTICE: THIS IS A REPLACED USER GUIDE AND APPLIES ONLY TO REAL PLAYER RATING REPORTS PRIOR TO NOVEMBER 2008<br /><br />ESPN PLAYER RATING<br />You can tell how well they played at a glance. Of the advanced statistics I have seen on the internet, this one seems to have the best balance between offense and defense. Many other advanced statistics are biased in favor of good defenders, and do not reflect the heavy importance of offense in basketball.
Here is the formula for the ESPN rating of a player:<br /><br />PLAYER RATING (GROSS) =<br /><br />Points + Rebounds + 1.4*Assists + Steals + 1.4*Blocks - .7*Turnovers + # of Field Goals Made + 1/2*# of 3-pointers Made - .8*# of Missed Field Goals - .8*# of Missed Free Throws + .25*# of Free Throws Made<br /><br />REAL PLAYER RATING<br />Here is the formula for Real Player Rating:<br /><br />Real Player Rating = ESPN Player Rating / Minutes Played<br /><br />The ESPN Player Rating and the Minutes Played are both usually used with a per game time frame, so the usage of this statistic will usually be:<br /><br />Real Player Rating = ESPN Player Rating per Game / Minutes Played per Game<br /><br />But you can calculate a Real Player Rating for any period of time you have the data for. Nuggets 1 will report Real Player Ratings for each game, and it will periodically give the Real Player Ratings for all the Nuggets for the whole season to date.<br /><br />The straight up player rankings are obviously heavily affected by how many playing minutes the various players get. With many teams, you can rely on the coach to give his various players roughly the playing time that makes the most sense for his team. Unfortunately, you cannot rely on George Karl to award playing time in just about the best way possible. Therefore, it makes good sense to introduce a new and very important statistic that Nuggets 1 will call the Real Per Minute Player Rating, which, as the name implies, is the gross ESPN player rating divided by the number of minutes.<br /><br />This statistic allows everyone to see whether or not players who play only a small number of minutes are doing better than their low gross rating would indicate. At the same time, it will allow everyone to see whether players with a lot of minutes are playing worse than, as well as, or better than their gross ranking shows.
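The two formulas above translate directly into a few lines of code. The function and parameter names here are hypothetical conveniences, and the sample box score line is invented for illustration:

```python
def espn_player_rating(points, rebounds, assists, steals, blocks, turnovers,
                       fgm, fga, tpm, ftm, fta):
    """Gross ESPN player rating from standard box score totals.

    fgm/fga = field goals made/attempted, tpm = 3-pointers made,
    ftm/fta = free throws made/attempted.
    """
    missed_fg = fga - fgm
    missed_ft = fta - ftm
    return (points + rebounds + 1.4 * assists + steals + 1.4 * blocks
            - 0.7 * turnovers + fgm + 0.5 * tpm
            - 0.8 * missed_fg - 0.8 * missed_ft + 0.25 * ftm)

def real_player_rating(gross_rating, minutes):
    """Real Player Rating: gross ESPN rating per minute played."""
    return gross_rating / minutes

# Hypothetical per-game line: 20 pts, 6 reb, 4 ast, 1 stl, 1 blk, 2 TO,
# 8-of-16 from the field including 1 three, 3-of-4 free throws, 30 minutes.
gross = espn_player_rating(points=20, rebounds=6, assists=4, steals=1, blocks=1,
                           turnovers=2, fgm=8, fga=16, tpm=1, ftm=3, fta=4)
rpr = real_player_rating(gross, minutes=30)
```

Dividing by minutes is the whole trick: it is what lets a 15-minute reserve and a 38-minute starter be compared on the same scale.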
This is another big improvement in the Nuggets 1 never ending quest to give readers total information about the Nuggets. This statistic allows the reader, at a glance, to see exactly how well each player is doing without regard to playing time. So it gives you pure knowledge not available anywhere else.<br /><br />Nuggets 1 Alert Scale For the Nuggets (posted November 27, 2007)<br /><br />IMPORTANT NOTE ABOUT ALERT STATUS<br />All teams, of course, have an alert status, and the key thing that can swing games is not so much the actual status of the two teams, but the difference in the two statuses. The difference in alert status is a third outside factor that impacts a game, joining home court advantage and extra rest advantage, if any.<br /><br />NO ALERT (0-15): There are virtually no problems. Teams like the Spurs are in this category from time to time.<br /><br />GREEN ALERT (16-29): There are minor problems whose total impact is very small. There is very little effect on the team’s ability to win games against teams from any level.<br /><br />GREY ALERT (30-44): There are relatively minor problems leading to a small threat against the success of the entire season. It is still possible to beat quality teams, but it will be more unusual to beat a quality team, because about 1/4 of what would have been wins against good teams will now be losses. There should be no impact with respect to medium and poor teams.<br /><br />YELLOW ALERT (45-59): Minor damage is occurring to the season. The entire season is under medium threat. Beating quality teams is more difficult and will be relatively unusual. About 1/2 of all would-be wins against good teams will now be losses. Beating mid-level teams is a little more difficult. About 1/4 of games that would be wins against mid-level teams will now be losses. Beating low level teams is still relatively easy.
A good team has become in between a good team and a mid-level team when it is under this alert.<br /><br />ORANGE ALERT (60-79): Moderate damage is occurring to the season. The entire season is under serious threat, and you can just about forget about beating quality teams. About 3/4 of all would-be wins against good teams will now be losses. Beating mid-level teams is much more difficult. About 1/2 of games against mid-level teams that would have been wins will now be losses under this alert. Even poor teams can often beat an otherwise good team that is under this alert. Close to 1/4 of games against poor teams that would have been wins will now be losses under this alert. A good team has been reduced to being a mid-level team, at best, when it is under this alert.<br /><br />RED ALERT (80-99): Serious damage to the season is occurring now. Beating quality teams is almost impossible. Beating mid-level teams is extremely difficult and will be unusual. About 3/4 of games against mid-level teams that would have been wins will now be losses if there is a RED ALERT. The result against low-level teams is on a case by case basis. Close to 1/2 of games against low level teams that would have been wins will now be losses under this alert. Essentially, this alert means that an otherwise good team has been reduced to being a poor or low level team.<br /><br />BLACK ALERT (100+): The season is lost and, under normal circumstances, the Coach is going to be fired no later than the end of the season. The Coach almost always gets fired when a season is lost during what was supposed to be a good season, regardless of how much of the blame for the problems that led to the loss of the season is actually due to the coach. Under a BLACK ALERT, the team has become one of the worst teams in the League, and will lose most of its games.
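The alert scale above is a straightforward banded lookup, and can be encoded in a few lines. The function name is a hypothetical convenience; the thresholds come straight from the scale:

```python
def alert_level(score):
    """Map an alert score to the Nuggets 1 alert color band."""
    # Each entry is (exclusive upper bound, band name), in ascending order.
    bands = [(16, "NO ALERT"), (30, "GREEN"), (45, "GREY"),
             (60, "YELLOW"), (80, "ORANGE"), (100, "RED")]
    for upper, name in bands:
        if score < upper:
            return name
    return "BLACK"  # 100 and above: the season is lost
```

Since what swings games is the difference between two teams' statuses, a natural companion use is to compute both teams' scores and compare their bands before a matchup.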