Like many high-level Reports at Quest for the Ring (QFTR), playoff previews are a formatted type of Report. Formatted Reports have a pre-set format and include little or no custom commentary. The whole idea of formatted Reports is to deliver a very large amount of important information very efficiently. The carefully planned and long-evolved formatting eliminates the need for time-consuming custom text reporting in contexts where there is really no need for it. But to fully understand a formatted Report you need to be familiar with its User Guide.
In contrast to formatted Reports, QFTR breaks new ground in general, and reveals its latest discoveries about basketball in particular, in free-form (non-formatted) text Reports. While formatted Reports stay on script, non-formatted text Reports are where QFTR goes off script. Both types of Reports are essential; having just one type without the other would reduce the value of QFTR by MORE than half.
In Playoff Preview Reports (PPRs), Excel Team Grids are used for quick and easy comparisons between teams. Since Excel is ultimately a sophisticated way to format information, PPRs are technically among the most intensely formatted Reports in the entire QFTR arsenal of formatted Reports.
Team Grids on Excel are also actually the best foundational tool for managing a basketball team. For example, team grids allow managers, coaches, or anyone else to consider changes in players and/or in playing times that would improve the chances of winning playoff series and regular season games.
Partly because no one is perfect, partly because relatively incompetent coaches are all too common, and partly because basketball (like many things) is more complicated than most people think it is, coaching errors are commonplace. Team Grids on Excel allow for quick flagging of coaching errors, some of which can be big enough to cost a team a playoff series or as many as a dozen regular season wins.
We now proceed to detailed information about the content appearing in Team Playoff Previews in the Excel format.
============ SECTION ONE (AT THE TOP) OF PLAYOFF PREVIEW REPORTS USING EXCEL: HEAD TO HEAD COMPARISONS ============
Using Real Player Ratings (RPRs), Section One allows for quick and easy comparison of players by position. You can compare specific players for any position. For example, you can see which team has the better starting point guard. You can very easily and quickly see which team has the better second squad small forward. And so on for each of the five positions and each of the two squads.
Many young and some not so young basketball fans spend time arguing about who is the better player between two playoff starters at the same position. At QFTR we scientifically and accurately inform you of who was actually better in the current year.
SQUAD AVERAGES AND OVERALL TEAM AVERAGES
One of the most important things to observe in the Head to Head Comparison area (Section One) is the set of squad Real Player Rating (RPR) averages. Carefully comparing the squad averages is very important; if you skip this you really will not be able to properly preview a playoff series.
When you compare squad averages, you are essentially comparing the starters as a whole and the non-starters as a whole of the two teams. Since basketball is partly a team game, with stronger team dynamics at work than in many other sports, when the starters of one team are substantially better than the starters of the other team, the advantaged team will often win the series by virtue of that fact alone.
But keep in mind a smart coach may have graduated one or two second squad players to starter for the playoffs. This will not show up on the team grids in the Report. Also, keep in mind that in the Report, players are placed into squads according to minutes played. So when a team intentionally has the best player at a position come in late in the first quarter "from off the bench", that player may function more as a second squad player out on the court even though he is shown as a first squad player in the Playoff Preview.
The squad averages show the average rating of the players in each squad for each team. By comparing the first squad with the second squad, you can see how much of a drop-off there is between them. Since most of the players in the first squad are starters, this is approximately equivalent to comparing the starters and the bench. The bigger the drop-off, the more minutes the starters should be playing.
TEAM REAL PLAYER RATING AVERAGES
At the very bottom of Section One you will see a row for “Team Average” and on that row you will find the Team Real Player Rating Average (TRPRA) for each of the two teams.
TRPRA is two times the first squad average plus the second squad average, divided by three. In other words, it is a weighted average of the top two squads, with the first squad counted twice and the second squad counted once, which roughly corresponds to typical playing time patterns. Players in the third squad (also known as "the reserves"), injured players, and benched players are not counted in the team average.
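For readers who want to see the arithmetic spelled out, here is a minimal sketch of the TRPRA weighting in Python (the function name is purely illustrative; the actual calculation lives in the Excel grids):

def team_rpr_average(first_squad_avg, second_squad_avg):
    # Weighted average of the top two squads: the first squad counts twice,
    # the second squad once. Reserves, injured players, and benched players
    # are excluded before the squad averages are taken.
    return (2 * first_squad_avg + second_squad_avg) / 3

# Using the Conference final squad averages given later in this Guide:
print(round(team_rpr_average(0.853, 0.708), 3))  # 0.805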
You can put substantial stock but not an unlimited amount of stock in the team average number.
One weakness of TRPRA is that even among later round playoff teams, the second squad will from time to time include a player with a very low rating. How much such players play in the playoffs depends on how strapped the team is at the position and on how dumb the coaching is.
Another weakness in the team real player rating average concept, one that can sometimes be significant, is that, as already indicated, third squad ratings are completely ignored for the Team Real Player Rating Averages. But third squad players sometimes get fairly substantial playing time because sometimes they are fairly good players.
Despite the shortcomings, TRPRA very often correctly signals which team is going to win the series. TRPRA is likely to predict the winner when the difference between the two teams is .050 or more, and it is especially likely to correctly predict the winner when the difference is .100 or more. QFTR uses TRPRA (along with other information, of course) to help project which team will win playoff series.
TYPICAL POSITION, SQUAD AND TEAM REAL PLAYER RATING AVERAGES FOR THE VERY BEST TEAMS
The following discussion is limited to the very best teams, specifically the four final teams only (the teams in the Conference finals). Position, Squad and Team averages for non-playoff teams and for teams eliminated in the first and second rounds are beyond the scope of this User Guide.
POSITION AVERAGES FOR 4 CONFERENCE FINAL TEAMS
Point Guard .914
Shooting Guard .774
Small Forward .786
Power Forward .872
Center .920
SQUAD AVERAGES FOR 4 CONFERENCE FINAL TEAMS
1st Squad .853
2nd Squad .708
TEAM REAL PLAYER RATING AVERAGES FOR 4 CONFERENCE FINAL TEAMS
Final Four Teams .805
Teams in the NBA Championship .868
TEAMS IN THE CHAMPIONSHIP
Many Championship teams will have at least one position where the average RPR of the two players who play it the most is greater than .950. Championship teams will sometimes feature two positions where the average of the top two players is greater than .900 with the most common combos being point guard and either center or power forward. At the low end, Championship teams will very seldom have any position where the best two players average below .700.
But some mere playoff teams will have at least one position where the average of the top two players at the position is a little less than .700. The most common positions for this situation would be small forward and shooting guard. As you might expect, playoff teams that have even one position where the top two players who play it average less than .700 are generally the ones eliminated in the early rounds.
NBA OVERALL (ALL TEAMS) REAL PLAYER RATING EVALUATION SCALE
For comparison purposes this Guide now shows the overall Real Player Rating evaluation scale for ALL NBA players and ALL teams. This reminds you that many of the players on the four conference final teams are way above average players:
SCALE FOR REGULAR SEASON REAL PLAYER RATINGS
Perfect Player for all Practical Purposes / Major Historic Super Star 1.100 and more
Historic Super Star 1.000 to 1.099
Super Star 0.900 to 0.999
A Star Player / A well above normal starter 0.820 to 0.899
Very Good Player / A solid starter 0.760 to 0.819
Major Role Player / Good enough to start 0.700 to 0.759
Good Role Player / Often a good 6th man, can possibly start 0.640 to 0.699
Satisfactory Role Player / Generally should not start 0.580 to 0.639
Marginal Role Player / Should not start except in an emergency 0.520 to 0.579
Poor Player / Should never start 0.460 to 0.519
Very Poor Player 0.400 to 0.459
Extremely Poor Player 0.399 and less
AVERAGE RATINGS BY POSITION
Not all positions are created equal. In pro basketball, point guard and center are the most important positions, power forward is in the middle, and small forward and shooting guard are the least important. (Some teams will have a different pattern.) The following are good estimates for average ratings by position among all NBA players who play 300 minutes or more. Very few superstars are pure small forwards or shooting guards who don't fit at other positions; most superstars are players who can play point guard, power forward, or center.
Point Guard .750
Shooting Guard .635
Small Forward .645
Power Forward .715
Center .755
All Positions / All Players (NBA Overall Average) .700
To quickly and fairly compare two players who play different positions, convert their Ratings as follows:
Point Guards: Subtract .050; for example, .700 becomes .650
Shooting Guards: Add .065; for example, .700 becomes .765
Small Forwards: Add .055; for example, .700 becomes .755
Power Forwards: Subtract .015; for example, .700 becomes .685
Centers: Subtract .055; for example, .700 becomes .645
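As a sketch of how these conversions can be applied in practice, the offsets below are simply the ones listed above; each one brings its position's average rating back to the overall .700 average:

POSITION_OFFSETS = {"PG": -0.050, "SG": +0.065, "SF": +0.055, "PF": -0.015, "C": -0.055}

def cross_position_rating(rating, position):
    # Puts a rating on a common scale so players at different positions
    # can be compared fairly.
    return rating + POSITION_OFFSETS[position]

print(round(cross_position_rating(0.700, "SG"), 3))  # 0.765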
TEAMS SHOULD AVOID PLAYING LOW RATING PLAYERS IN THE PLAYOFFS
Often, especially on the best coached teams and on the primary contenders, a second squad player with a relatively low rating will be strategically benched during the playoffs. Players at adjacent positions can fill in at the position.
In general, centers and point guards with ratings below .650 should play sparingly in the playoffs or not at all. Power forwards with ratings below .615 should play sparingly or not at all in the playoffs. Small forwards and shooting guards with ratings below .545 and .535 respectively should play sparingly or not at all in the playoffs.
============ SECTION TWO (LOWER SECTION) OF PLAYOFF PREVIEWS USING EXCEL: TEAM GRIDS ============
FIRST SQUAD, SECOND SQUAD, AND RESERVES
A depth chart shows you team policy regarding who starts and who the backups are, and in what order, for the five positions. The team grid is based on the depth chart style. However, players (other than players acquired during the season by trade; see below regarding them) are placed into first squad, second squad, and third squad according to minutes played, not according to the latest ESPN or any other depth chart; in other words, not according to anyone's estimation of what the team policy is.
Instead of using depth charts, whoever has played the most minutes at a position is shown in the “1st Squad” whether or not that player starts at the position. Whoever has played the second most minutes at a position is shown in the "2nd Squad" regardless of that player's position on any depth chart. Whoever has played the third most minutes at a position is shown in the "Reserves" (which could have been labelled "3rd Squad" instead).
There is a notable exception to the rule for who goes in which squad. If a player has been acquired during the season and he is listed as the starter on the ESPN depth chart, he will be shown as first squad. Similarly, if a player acquired during the season is shown as the first backup to the starter in the depth chart he will be shown as second squad regardless of minutes. In other words, the depth chart prevails over minutes in the case of players acquired by trade during the season. This makes sense because minutes played for the prior team could not reasonably be counted for the current team.
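The placement rules just described can be summarized in a short sketch; the field names below are purely illustrative, since the real grids are assembled by hand in Excel:

def assign_squads(players):
    # players: list of dicts with 'name', 'minutes', 'traded_in' (acquired by
    # trade this season) and 'depth_slot' (1 = listed starter, 2 = listed first
    # backup on the ESPN depth chart, otherwise None).
    labels = ["1st Squad", "2nd Squad", "Reserves"]
    squads = dict.fromkeys(labels)
    # The depth chart prevails over minutes for in-season trade acquisitions.
    for p in players:
        if p["traded_in"] and p["depth_slot"] in (1, 2):
            squads[labels[p["depth_slot"] - 1]] = p["name"]
    # Everyone else is slotted strictly by minutes played at the position.
    for p in sorted(players, key=lambda x: x["minutes"], reverse=True):
        if p["name"] in squads.values():
            continue
        for label in labels:
            if squads[label] is None:
                squads[label] = p["name"]
                break
    return squads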
PLAYERS WHO MOST LIKELY WILL NOT BE PLAYING
On a Team Grid, just to the right of the “3rd Squad" column you see two grey areas. From left to right the first one is for players who are most likely or definitely out for much or for all of the series for some reason, usually due to injury.
The rating for players who will not be playing is shown as long as the player has played at least 300 minutes in either the current year or the previous year. If the injured player didn't play at least 300 minutes in either of those years, then "none" is shown for the rating for both years. Such players most likely would not play even if they were available to play.
The second grey shaded area to the right is for players who could play but almost certainly will not play because they played fewer than 300 minutes during the regular season. The 300 minutes threshold is the minimum needed for a hidden defending adjustment and therefore is the minimum needed for a player to get a Real Player Rating. It also is being used here as the threshold for determining whether a player was essentially benched for the season. 300 minutes is less than four minutes a game, which is a very good dividing line for saying whether a player was benched for the season or not. You can get close to 300 minutes with just garbage time, so if you don't play at least 300 minutes, you are basically benched.
PLAYERS ACQUIRED BY TRADE
We have already described how players acquired by trade are placed with respect to what squad they are in. Here we discuss how we determine what rating to show for them.
Players acquired by trade during the season who have played at least 300 minutes for their new team (during the regular season) are treated on the grid as if they were on the team the entire season. The rating you see for them is for their new, current team minutes. The previous team rating is considered to be irrelevant for the grid.
Players acquired by trade during the season who have NOT played at least 300 minutes for their new team, but who did play at least 300 minutes for their previous team this season, are shown as "more or less benched". The rating you see for them in the "more or less benched" column is their rating on their previous team this season.
If the player acquired by trade has never played at least 300 minutes for any team, he is treated like any other player who has never played 300 minutes or more. How those players are shown on the Team Grids immediately follows.
PLAYERS WHO HAVE NEVER PLAYED AT LEAST 300 MINUTES IN ANY SEASON
These players will be listed in the "More or Less Benched for the Season" column. No rating can be computed for them for any year, so "none" is shown for the prior year rating. Rookies who didn't get to play much in their first year are commonly shown this way. Other than garbage time, it is extraordinarily unlikely that any such players will play in any playoff game in the current year.
In the "More or less Benched" area, the Real Player Rating that is shown is the one from the most recent year the player played at least 300 minutes. What year that was is shown right next to their rating. Sometimes you can spot a player who should have played more than 300 minutes in this area. Generally, players in the More or Less Benched area of the Team Grid will not be playing in any playoff game except perhaps in garbage time.
COMPARING TEAMS BY POSITION
The position averages are shown ONLY on the Team Grids (in Section Two) of the Playoff Preview Report. They are not really relevant for the head to head comparison area (Section One). The header abbreviation used on the grids for the position average column is "POS AVGS".
By looking at position averages in Section Two you can compare the two teams position by position. For each position, only the ratings of the first squad and of the second squad player are considered for the position average. And the rating of the first squad player at each position counts twice as much as the rating of the second squad player at each position. In other words, for each position the position average is two times the rating of the first squad player plus the rating of the second squad player divided by three.
Reserves (third squad) players generally do not play and so their ratings are ignored for the position calculations.
WHAT IF THERE WAS ONLY ONE PLAYER WHO PLAYED AT LEAST 300 MINUTES AT A POSITION?
The position average calculation assumes that there were at least two players who played at least 300 minutes at each position, one in the first squad and one in the second squad. If there is only one player who played 300 minutes or more at a position (who is in the first squad), there is a special rule. For the second player at the position, 75% of that single player's rating is considered to be the rating for the second squad player at that position. The 25% reduction is justified because one or more players at other positions will have to fill out the position that has only one player, and those fill-in players will generally not be as valuable at the position as players dedicated to it.
What if there isn't much fill-in? If the single player consumes most of the playing time because he is a superstar, the 25% reduction is still justified because when any player plays most of a game, he is often not as good late in the game due to not being rested enough.
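Putting the position average rule and the one-player special rule together, a minimal sketch:

def position_average(first_squad_rating, second_squad_rating=None):
    # The first squad player's rating counts twice, the second squad player's
    # once; reserves are ignored. If only one player reached 300 minutes at
    # the position, 75% of that player's rating stands in for the second
    # squad rating.
    if second_squad_rating is None:
        second_squad_rating = 0.75 * first_squad_rating
    return (2 * first_squad_rating + second_squad_rating) / 3

print(round(position_average(0.850, 0.700), 3))  # 0.800
print(round(position_average(0.850), 3))         # 0.779 for a one-player position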
User Guide for Real Coach Ratings as of January 2011
======= SECTION ONE: INTRODUCTION =======
Quest for the Ring is proud and pleased to present what is apparently the world's first serious effort to scientifically and accurately rate and rank all of the current NBA head coaches. Even the academically oriented basketball statistics sites do not have any formulas or specialized ratings for coaches, although some of them, thank goodness, keep track of basic coach data including wins and losses.
The QFTR coach rating product is called Real Coach Ratings (RCR). The first edition of these annual ratings, which compared to the latest version was relatively crude (and yet still much more than mere opinion), was published in October 2008. The second edition, which featured substantial but relatively modest improvements over the 2008 edition, was published in early December 2009. In late November 2010 the third edition, which featured relatively large scale and important improvements over the 2009 edition, was published. At this time it is not known to what extent it will be desirable and possible to improve RCR further, but there is a fairly high probability that most and possibly all future changes to and expansions of RCR will be small compared with the changes and expansion in 2010.
Why should the coaches hide behind a black curtain as they do in the USA? Concerning coaches, there is virtually a total lack of the kind of statistical comparing and contrasting that goes on with players 24/7. To say there is a double standard where players get the short end of the stick would be an understatement. Coaches can get away with relative incompetence and negligence for many years, in some cases indefinitely, whereas players will within days, weeks, or a few months at the most have their minutes cut at the least, and they can easily be bounced around the NBA or demoted to some other League. When QFTR started to rank and rate Coaches in 2008, it was way, way overdue that someone did it.
The big Corporation sites such as ESPN have editorial limitations which prevent them from being severely critical of NBA head coaches, managers, or owners. ESPN writers can be mildly critical at the most (which in practice means they have to hint at criticism rather than directly criticize). For heavy criticism of NBA coaches, managers, or owners, you have to go somewhere other than ESPN, CBS Sports, Fox Sports, and NBC Sports. As one of many examples, you might see some heavy criticism at SlamOnline.com. And then even when you do venture elsewhere and see some heavy criticism of coaches, managers, or owners, you are most often going to see only opinions as opposed to conclusions based on hard research. I mean, if you are lucky, the opinions are dead on accurate, but since there is little if any evidence from research backing up those opinions they could easily be wrong. Here at QFTR it is the reverse: you seldom will see a mere opinion and most of what you see are conclusions backed up by valid and adequate research.
I can pretty much guarantee you that no one has ever, even with the capabilities created by the Internet age, put in as much effort, thought, and technology as QFTR has into fairly comparing NBA coaches with widely different lengths of time spent in professional head coaching. Despite the fact that QFTR has little or no competition for coach ratings, it applies full scale quality control to RCR and provides a very detailed User Guide that exceeds 20,000 words. And the Real Coach Ratings (RCR) system CAN be used in other Leagues, other countries, and on other planets, if there are any other basketball planets, that is!
The Real Coach Rating system has been extensively improved in the second half of 2010. The biggest improvement is the new factor called "Playoffs Games Coaching Score". A lot of time went into developing this factor, much of which went into developing an underlying database called the "NBA Playoffs Series, Teams and Coaches Database". This database consists of every playoff series played since 1980 except for the sixteen best of three first round playoff series played from 1980 through 1983.
To summarize simply: for each series, a statistically valid estimate of exactly how many games each team should have won is calculated (to two decimal places, for example 3.25 wins), the actual number of wins is compared to this estimate, and either a positive or a negative score is derived from the difference.
THE NBA PLAYOFFS SERIES, TEAMS, AND COACHES DATABASE
In 2010 Quest for the Ring developed a database which has details about virtually all playoff series of the world’s premier pro basketball League, the NBA, from 1980 to the present. The number one reason why the database was developed was so that RCR could be substantially improved. Specifically, one of the main objectives for creating this database was to identify which coaches of pro teams win more games in the playoffs that they “were supposed to lose” than they lose games that they were "supposed to win" (net playoff winners). And of course, we also want to find out which coaches lose more playoff games they were supposed to win than they win playoff games they were supposed to lose (net playoffs losers). (And of course there are some coaches who win some that should have been losses and lose some that should have been wins whose overall record on that is about even up.)
In late November 2010 and in very early 2011 much of the information that can be obtained from the database was published in various Reports. See especially:
“NBA Playoffs Upsets: How Many are There and Why do They Happen?”
“Real Coach Ratings for the NBA, 2010-11, Look Ahead”
“Official NBA Coach Recommendations: Can the Coach of Your Team Win the Quest or Not?”
Note however that the actual database has not been published and is not scheduled to be at this time. Not all of the information that can be obtained from this database has been published in Reports yet. And although QFTR has more and more in recent years published Excel worksheets that are products of databases or in effect are micro databases, the templates for the largest databases can not be published due to risks associated with copyright violation. The QFTR public email address can be used for inquiries about how someone could possibly obtain a copy of the database and about the terms of use for it. For the email address, at the QFTR home page, click the “Contact” link that is on one of the horizontal menus just under the banner.
Using what is formally known as the “NBA Playoffs Series, Teams, and Coaches Database", and also using knowledge about statistics and basketball, it has been proven beyond a shadow of a doubt that some coaches are better in the regular season than they are in the playoffs. Actually, to be more precise, the playoff losing coaches are ones who have their teams playing in ways that lead to relatively more wins in the regular season than in the playoffs. And vice versa: coaches who win extra games in the playoffs have selected strategies and tactics that work better in the playoffs than they do in the regular season.
This is not really all that surprising as long as you know that the game of basketball itself changes a little in the playoffs from what it is in the regular season. The rules stay the same and to the untrained eye it may seem like the same game, but in reality the way it is played changes a little and the way the referees call games changes a little. Although most people do not know all of the details of the changes (the magnitudes and the components and so forth) most people are aware in general terms that defending is more important in the playoffs than it is in the regular season. To state it a little differently, most people are aware that many if not most teams ramp up their defending for the playoffs; they play defense more aggressively, more energetically, more athletically, and sometimes smarter.
Defending can be improved almost overnight through will and effort. But this is not really true with offense. Here it’s appropriate to insert a few paragraphs from the User Guide to Real Team Ratings:
DO NOT MAKE THE MISTAKE OF OVERSTATING THE IMPORTANCE OF DEFENSE
But don’t fall into a trap here; don’t get carried away. In basketball, defense is relatively less important than it is in many and very possibly most other sports. Basketball is designed to be a game that favors the offense more than many, many other sports do.
The tightrope here is that on the one hand you have to realize that defense is more important in the playoffs than it is in the regular season. On the other hand you have to understand that in basketball exactly how important the defense can be is limited fairly strictly. Defense alone can not possibly win you a Championship in basketball.
By contrast, in American pro football the limitations on how important the defense can be are far weaker, meaning that unlike in basketball, you can win the Super Bowl Championship in football pretty easily with the best defense in the League but a below average offense. For example, the Pittsburgh Steelers have done this several times over the years. But in basketball it is extremely difficult (and you are going to need some luck) to win the Championship with even the best defense in the League but only the 20th best offense (out of 30). What you really need in basketball to go along with the best defense in the League is at the very least the 15th best offense (out of 30); and to have a good chance you need at least the 10th best offense to go along with the best defense.
So even though in basketball defense is more important in the playoffs than it is in the regular season, the magnitude of the change is not really all that large; in basketball defense is only a little more or, arguably in some cases, moderately more important in the playoffs than in the regular season.
Note also that, ironically, the teams that are the very best defensively in the regular season are unable to increase the quality of their defending in the playoffs as much as teams that come into the playoffs with lower ranked defenses. Coming into the playoffs, teams with one of the best two or three offenses in the League but whose defenses are down around 10th best are generally more likely to win the Championship than teams which come in with one of the top two or three defenses but only about the 10th best offense.
It’s obvious that teams have the opportunity to be better defensively in the playoffs than they were in the regular season; after all, this happens all the time. Defensively in the playoffs, it’s mostly a matter of doing the same things that were done in the regular season harder, faster, and/or smarter. But the opportunity for a team to be better offensively in the playoffs than it was in the regular season is very limited. In other words, offensively, what you saw in the regular season is pretty much all you are going to see in the playoffs. Teams should assume they can improve a little defensively but they should never ever assume they can get substantially better offensively when the playoffs come, because that is unlikely to happen.
This is indirectly another reason why teams that run slightly organized offenses are much smarter and more likely to win The Quest for the Ring than are the teams that run more street ball type offenses. Coaches who run the street ball type offenses often think that that strategy will work better in the playoffs than in the regular season. They may think that unlike a slightly organized offense, a street ball type offense can be ramped up in the playoffs. And they may think that a street ball type offense is exactly what you want to offset the ramped up defenses you see in the playoffs.
All of these suppositions are false to one extent or another. First, street ball type offenses work less well in the playoffs against ramped up defenses than they do in the regular season against lesser defenses. Second, you cannot substantially ramp up any type of offense in the playoffs, including the street ball type. For offense, more so than for defense, it is crucial that in the regular season you are playing in a way that will allow you to win in the playoffs. For defense, playing that way in the regular season is strongly recommended but not strictly required. Third, ramped up defenses are relatively more effective against street ball type offenses than they are against slightly organized offenses.
For convenience, this Guide is divided into main sections and subsections. The main sections are:
Section 1 Introduction (Which ends here)
Section 2 Components of and Format of Real Coach Ratings Reports
Section 3 Discussion of and Calculation of Factors used for the Playoffs Sub Rating
Section 4 Discussion of and Calculation of Factors used for the Regular Season Sub Rating
Section 5 Interpretation of Ratings and Evaluation of Coaches
Section 6 Cautions Including the Well Known Experience Gap Problem
Within each section subsections are in all caps as shown.
======= SECTION TWO: COMPONENTS OF AND FORMAT OF REAL COACH RATINGS REPORTS =======
Starting in 2010 QFTR produces two Real Coach Ratings Reports. One of them, scheduled for August is called the "Look Back Version" which, as the name implies, gives the ratings for all the head coaches from the season just gone by. The other one, scheduled for October, is called the “Look Ahead Version” which, as the name implies, gives the ratings for all the head coaches as the new season gets underway.
Note that QFTR has data that would allow a rating to be calculated for any coach who ever coached any playoff series in 1980 or later (including retired and deceased coaches). This information will be published as time permits in future years. A total of 89 coaches have coached at least one playoff series since 1980, all of whom are in the database.
Anyone who has seen a prior Coach Ratings Report will see that the format of the Report has changed and that the Report is even bigger than before. Yes, this Report is longer than most, but the length is justified because if a team has the wrong coach it is going to be wasting money and wasting player talents. For any of the worst playoffs coaches, winning the NBA Championship is literally impossible unless perhaps they end up with one of the very best teams of all time, and even then the poor playoffs coach might still lose the Championship.
The RCR Reports are now divided into three primary sections:
--Rankings
--Key Details About Coaches
--Coach by Coach Details
Each primary section is divided into sub sections (which are themselves sometimes divided into sub sections of the sub sections).
The Rankings Section of a RCR report is the core of the Report, and there are three sub sections for it, all of which are rankings:
--Real Coach Ratings (overall)
--Real Coach Playoffs Sub Ratings
--Real Coach Regular Season Sub Ratings
The second of the three primary sections of a RCR report, the Key Details About Coaches Section, contains four sub sections:
--Listing by team of coaches who appear in the report
--Coaching changes by team (appears in the Look Ahead Version)
--Coaches who QFTR guarantees will never win The Quest for the Ring (and those coaches close to this status)
--Coaches who have never coached any NBA playoff games (who because of this have a Playoffs Sub Rating of zero)
The first two and the fourth of these four are self-explanatory. For the criteria used to declare that a coach will never win the Quest, see “Section 5: Interpretation of Ratings and Evaluation of Coaches” below.
The third of the three primary sections of a RCR Report, the Coach by Coach Details Section, consists of numerous facts about all the coaches. The coaches are presented alphabetically by team. Let’s look at an example to see what information can be found here. Most of the information is self-explanatory. We’ll use Larry Brown, coach of the Charlotte Bobcats for 2010-11:
CHARLOTTE BOBCATS
COACH: LARRY BROWN
Real Coach Rating: 2420.14
Rank Among 2010-11 Coaches: 2 out of 30
PLAYOFFS / REGULAR SEASON BREAKDOWN
Playoffs Rating: 2199.00
Playoffs Rank: 2 out of 30
Regular Season Rating: 221.14
Regular Season Rank: 14 out of 30
PLAYOFFS DETAILS
Playoffs experience: Number of playoff games coached: 193
Net Playoff games WON that should have been losses: 16.1
How many EXTRA playoff games this coach will WIN out of 100: 9.4
NBA Championships won: 1
Number of times this Coach won a Conference final but not the Championship: 2
REGULAR SEASON DETAILS
Games coached with current team: 164
Regular season games coached: 1974
Regular season wins: 1089
Regular season losses: 885
As you can see, most of this is self-explanatory.
As for the more mysterious items, first note that the overall Real Coach Rating equals the sum of the Playoffs Sub Rating and the Regular Season Sub Rating. One of the many interesting things about RCR is that you can easily see that some coaches have much higher playoffs ratings than they do regular season ratings (like Larry Brown does) whereas other coaches have much higher regular season ratings than they do playoffs ratings (like George Karl does). This is more proof of what QFTR talks about all the time regarding how the playoffs are more different from the regular season than most people think and regarding how some coaches are good for the regular season but bad for the playoffs.
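Using the Larry Brown figures shown above, the arithmetic is simply:

playoffs_sub_rating = 2199.00
regular_season_sub_rating = 221.14
real_coach_rating = playoffs_sub_rating + regular_season_sub_rating
print(round(real_coach_rating, 2))  # 2420.14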
In the Playoffs Details area, there are two things that are going to be mysterious because most likely no one has ever calculated such a thing until now. The first item is this one: “Net Playoff games WON that should have been losses: 16.1”. This is not a rate but instead an absolute and actual number. It is neither a directly observable number nor a certain number, but rather a number derived from the model used in the playoffs database.
Why is this number valid? QFTR strongly endorses the database, all its components including its formulae, and all results derived from the database. To see if you agree with QFTR and for all of the details about the database and about how information is derived from the database, see "Section 3: Discussion of and Calculation of Factors used for the Playoffs Sub Rating" below.
This first of the two mysterious items, the number of wins that were supposed to be losses (or the number of losses that were supposed to be wins), is information which is free of the kind of statistical error involved with rates discussed immediately below, and so QFTR publishes it for all coaches who have coached at least one playoff game. But there is another kind of statistical error involved, so extreme caution is warranted when evaluating coaches who have coached fewer than 25 playoff games. See Section Six for complete details.
The actual real life absolute minimum a coach in the database could have coached is three playoff games. Although the database begins with 1979-80, it excludes all four of the first round playoff series played each year from 1980 through 1983 because those were best of three series, which are so short that the database model used to determine unexpected wins and losses is not statistically valid. In a best of three, whichever team wins two games first wins the series.
Note that from 1980 through the present sixteen teams have made the playoffs every year, but the format of the playoffs has changed several times. From 1980 to 1983 there were only four first round series, and these were best of threes. From 1980 through 1983, four teams were given first round byes; these four played the winners of the round one series in round two. In 1984 the playoffs format was changed extensively. Now there were eight first round series instead of just four, and now they were best of five rather than best of three games. There were no more byes starting in 1984. Both prior to and after 1984, rounds after round one were all best of seven series.
Starting from 1984, all series (including the round one best of fives) are included, since the model can be used without excessive statistical error for best of five series, where whoever wins three games first wins the series. The last year that the round one best of five was employed was 2001-2002. Starting in the next year and through the present, the round ones have all been best of sevens (and of course all the other rounds have remained best of sevens).
If a coach has not coached any playoff games, this is clearly evident because it is reported this way (in the Coach by Coach Details Section):
Playoff games won that should have been losses: 0
Playoff games lost that should have been wins: 0
Going back to the Larry Brown example for the Coach by Coach Details Section of the RCR Report, the second mysterious item, which is right below the first, is this: “How many EXTRA playoff games this coach will WIN out of 100: 9.4”. (Or it could tell you how many EXTRA playoff games the coach will LOSE out of 100.) This is a rate with the actual number of extra wins or losses as the numerator and the actual number of games coached as the denominator. The words “extra” and “lose” or “win” are in all caps to make the coach detail section easy to read or skim through.
All rates calculated with relatively small amounts of data based on real events have relatively high statistical errors. The statistical error increases exponentially for very small and tiny amounts of data. To avoid reporting rates that are likely to be in error, QFTR does not publish rates for any coach who has coached fewer than 25 playoff games. For these inexperienced coaches, instead of a rate, you will see:
“The extra playoff games this coach will win or lose out of 100 is not reported for this coach due to insufficient number of playoff games.”
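Here is a minimal sketch of how the rate is formed from the two raw numbers. The exact denominator QFTR uses is not spelled out in this Guide, so the function below, which simply divides by total playoff games coached, should be treated as an approximation rather than the published formula:

def extra_games_per_100(net_extra_wins, playoff_games_coached):
    # The rate is not reported for coaches with fewer than 25 playoff games
    # because the statistical error becomes too large.
    if playoff_games_coached < 25:
        return None
    return round(100 * net_extra_wins / playoff_games_coached, 1)

print(extra_games_per_100(5.0, 100))  # 5.0
print(extra_games_per_100(-2.0, 20))  # None (too few games to report a rate)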
Remember that both of these very important numbers come directly from the NBA Playoffs Series, Teams, and Coaches Database. For further details, see Section 3: Discussion of and Calculation of Factors used for the Playoffs Sub Rating below.
Note that even those who disagree with the innovative QFTR evaluation measures and those who are not sure and don’t have time to evaluate the model can make extensive use of the raw data that is in the Coach by Coach Details sub section. But you can only do this with the regular season details because the playoffs details are almost entirely made up of custom designed information and the simple playoffs wins and losses are NOT published by QFTR.
Quite frankly, the raw playoffs wins and losses is information that is not only inferior to what QFTR does publish, but also it has very little information value in general. Unless you know what the playoffs record was “supposed to be”, you can’t do much of anything with the raw wins and losses (or with the raw percentage of wins in the playoffs). For one thing, there are radical differences in how many playoff games different coaches have coached. Another problem is that many coaches have coached too few games for making any judgments just based on raw wins and losses. Yet another problem (and it is a big problem) is that different coaches average different quality of players over their playoffs coaching careers. All of these problems are tackled and largely or completely solved by the QFTR methodology.
WHY THE SUB RATINGS ARE NEEDED AND ARE AT LEAST AS IMPORTANT AS THE OVERALL RATINGS
As you know already the RCR system involves two sub ratings that you combine to get the overall coach ratings. With all other QFTR systems the overall rating is more important than any of the sub ratings. With Real Coach Ratings, though, the playoffs and regular season sub ratings are by themselves extremely important and at this time are considered more valid than the overall ratings. The reasons are rather involved and are discussed in Section 5. QFTR thinks that the playoffs sub ratings are more important than either the regular season sub ratings or the overall ratings, but of course QFTR is biased because it is focused like a laser on the NBA playoffs and championship. For much more about this subject see “Section 5: Interpretation of Ratings and Evaluation of Coaches”.
NUMERICAL PARAMETERS OF RATINGS AND SUB RATINGS
Only a handful of coaches (who are likely the worst coaches) have overall Real Coach Ratings below zero. Unlike Real Team Ratings, where all the ratings average out to about zero (and where the teams not likely to make the playoffs have negative scores) with Real Coach Ratings, the vast majority of the coaches have positive ratings. And many if not most of the coaches who end up with negative ratings are going to be only slightly below zero.
One of the ways the QFTR system is validated is that it is much more likely for coaches with low and negative ratings to be fired than ones with higher ratings.
But the firing of coaches with negative ratings is far from automatic. Unfortunately, some teams persist with coaches who have negative ratings who in many cases could not possibly win The Quest for the Ring, and in some cases can never be and will never even be truly successful regular season coaches either. Apparently, managers and owners have a whole lot of difficulty evaluating coaches, something which is not surprising here at QFTR given all we have discovered and proven.
Let’s look at the average, the median, and the range of the overall ratings and of the two sub ratings.
In the November 2010 (like many QFTR Reports it was a little late) Look Ahead Version, the average Real Coach Rating is 706 and the median is 275. The highest rating is 8,801 (Phil Jackson, with Larry Brown the second highest at 2,420). The lowest overall rating is -326 (Mike D’Antoni). Twenty five coaches have overall Real Coach Ratings above zero and five coaches have ratings below zero.
In the November 2010 Look Ahead Version the average playoffs sub rating is 227 and the median is 0. The highest playoffs sub rating is 6,035 (Phil Jackson, with Larry Brown the second highest at 2,199). The lowest playoffs sub rating is -793 (Rick Carlisle). Eleven coaches have playoffs sub ratings above zero and twelve coaches have playoffs sub ratings below zero. Seven coaches, all of whom have never coached an NBA playoff game, have playoffs sub ratings of exactly zero.
In the November 2010 Look Ahead Version the average regular season sub rating is 479 and the median is 201. The highest regular season sub rating is 2,766 (Phil Jackson, with Gregg Popovich second at 1,884). The lowest regular season sub rating is -107 (Lionel Hollins). Twenty eight coaches have regular season sub ratings above zero. Two coaches have regular season sub ratings below zero.
We just presented those numbers not only to make using the 2010 reports easier, but also because, unlike in prior years, those parameters are not likely to change much in the future.
======= SECTION THREE: DISCUSSION OF AND CALCULATION OF FACTORS USED FOR THE PLAYOFFS SUB RATING =======
Mechanically, the playoffs sub rating is simply the rating you get when you factor in only the playoffs-related factors. The playoffs sub rating consists of the following factors which will be discussed in detail in order:
(1) Playoff games coached
(2) Championships won
(3) Conference Titles won (but where the Championship was not won)
(4) Playoff Games Coaching Score
This list is deceptively short because the fourth item actually requires numerous components, and it has a very sophisticated database backing it up and validating it. If those components were listed separately, the total number of components comprising the playoffs sub rating would differ depending on exactly how the system was broken down, but would be at least ten.
1 PLAYOFF GAMES COACHED
This is also known as the playoffs experience factor. This is very simple: two points are awarded for every playoff game coached regardless of result.
The limit is 200 playoff games. There will most likely never be a coach who benefits in any significant way from getting more playoff coaching experience beyond 200 games. Coaches who have coached more than 200 playoff games are going to be older, very veteran coaches who are extremely unlikely to change how they coach.
Also, the number of coaches coaching currently who have coached more than 200 playoff games is always going to be a tiny number. As of January 2011, there are only three current coaches who are close to or over 200 playoff games coached:
Phil Jackson 323
Jerry Sloan 202
Larry Brown 193
Coaches such as these already know as much as they ever will know about winning NBA playoff games. If some of their beliefs are wrong, everyone is going to have to live with that, because coaches this experienced are not going to change their ways after all this experience spanning many, many years. And unfortunately, it is very possible for even coaches this experienced to have false beliefs about how playoff games and championships are won. QFTR has hard, smoking gun evidence to prove that; see Section 5 of this Guide and see also various Reports at QFTR.
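A sketch of the experience factor, including the 200 game cap:

def playoff_experience_points(playoff_games_coached):
    # Two points per playoff game coached, capped at 200 games (400 points).
    return 2 * min(playoff_games_coached, 200)

print(playoff_experience_points(323))  # Phil Jackson: capped at 400
print(playoff_experience_points(193))  # Larry Brown: 386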
2 CHAMPIONSHIPS WON
100 points are added for each Championship win. It is always 100 points regardless of how many games the Championship consisted of. These points are first and foremost awarded for merit but also they can be looked at as extra points given for extremely valuable experience. Counting the two points every coach gets for experience for every playoff game (assuming less than 200 playoff games have been coached) and assuming an average Championship of about six games, the total experience points for each Championship game (where the Championship is won) is approximately nineteen.
3 CONFERENCE FINALS WON BUT THE CHAMPIONSHIP IS NOT WON
50 points are given to each coach who wins a Conference Final but loses the Championship. It is always 50 points regardless of how many games the Conference Final consisted of and regardless of how many games the Championship consisted of. These points are first and foremost awarded for merit but also they can be looked at as extra points given for extremely valuable experience. Counting the two points every coach gets for experience for every playoff game, and assuming an average Conference Final of about six games, the total experience points for each Conference Final game (in a run that ends with the Championship being lost) is approximately ten.
There is no bonus for mere losing appearances in the conference finals. Only two playoff series need to be won to merely reach these finals, and either an extra outstanding bunch of players and/or mere luck could in many cases allow a team with even a bad playoffs coach to fairly easily reach a Conference Final.
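A sketch of these two bonus factors together, using the Larry Brown line items shown earlier (one Championship, two Conference titles without the ring):

def title_bonus_points(championships, conference_finals_won_without_title):
    # 100 points per Championship won; 50 points per Conference Final won
    # where the Championship was then lost; nothing for merely appearing in
    # (and losing) a Conference Final.
    return 100 * championships + 50 * conference_finals_won_without_title

print(title_bonus_points(1, 2))  # 200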
4 PLAYOFF GAMES COACHING SCORE
This last of the four factors making up the Playoffs Sub Rating is by far the most important one. This is where all of the good, successful playoffs coaches are going to get most of their points from. On the flip side, this factor is where the bad playoff coaches get heavily penalized up to and including cases where they end up with a very negative playoffs sub rating despite having a lot of experience.
The following will take you on a little journey whose destination is the Playoff Games Coaching Score. This score is calculated for each playoff series and for each coach. The key to the score is statistically determining (for each coach and for each series) the exact number of playoff games won that were supposed to be losses, and also the exact number of playoff games lost that were supposed to be wins. All of this is calculated using the QFTR Playoffs Series, Games, Teams, and Coaches Database, or QFTR Playoffs Database for short.
THE QUEST FOR THE RING PLAYOFFS DATABASE
The QFTR NBA Playoffs Series, Teams, and Coaches Database has every playoff series played beginning with the 1979-80 year through the present (2010) except for sixteen best of three series played from 1980 through 1983 (four of them each year). Why these were excluded was explained in Section 2 above. As of 2010 there are 433 NBA playoff series in the database.
For each playoff series, there are 22 primary information items:
DATABASE ITEM ONE: The Year (the series was played)
DATABASE ITEM TWO: The Round; in all years there were four rounds, but round one series played from 1980 through 1983 are not included as explained earlier.
DATABASE ITEM THREE: Away Team; this is the team that does not have the home court advantage
DATABASE ITEM FOUR: Offensive Efficiency of the Away Team: This is the average points scored per 100 possessions (in the regular season leading up to the playoffs).
DATABASE ITEM FIVE: Defensive Efficiency of the Away Team: This is the average points given up per 100 possessions (in the regular season leading up to the playoffs).
DATABASE ITEM SIX: Net Efficiency of the Away Team: This is Offensive Efficiency minus Defensive Efficiency for the Away Team. This can either be a positive or negative number, but most playoff teams have positive net efficiencies and most teams that do not make the playoffs have negative net efficiencies.
DATABASE ITEM SEVEN: Offensive Efficiency of the Home Team: This is the average points scored per 100 possessions (in the regular season leading up to the playoffs).
DATABASE ITEM EIGHT: Defensive Efficiency of the Home Team: This is the average points given up per 100 possessions (in the regular season leading up to the playoffs).
DATABASE ITEM NINE: Net Efficiency of the Home Team: This is Offensive Efficiency minus Defensive Efficiency for the Home Team.
DATABASE ITEM TEN: Home Team Net Efficiency minus Away Team Net Efficiency: This is the Net Efficiency of the Home Team minus the Net Efficiency of the Away Team.
In almost exactly 90% of the series, this number is positive. When it is, the better team according to efficiency has the home court advantage. Note that since home court advantage is determined by wins and losses, this means that wins and losses are extremely highly correlated with net efficiency. But for looking at results of series and for predicting series, net efficiency is even more reliable than simple wins and losses.
In about 2% of the playoff series, both teams had the same net efficiency; in these cases Item Ten is zero.
In almost exactly 8% of the playoff series, this number is negative. When it is, the team that is not as good according to efficiency was able to somehow get the home court advantage, from a tie breaker for example.
The most lopsided playoff series in history according to efficiency was the round one 1992 series between Miami and Chicago (which had Michael Jordan that year). Miami’s record that year was just 38-44 while Chicago was 67-15. Chicago’s net efficiency that year was 11.0 and Miami’s was -4.2. Item Ten was 11.0 minus negative 4.2, or 15.2; this is the highest difference from 1980 to date. The Chicago Bulls were overwhelmingly favored and, sure enough, they defeated the Miami Heat three games to zero in that one.
The series where the away team was better than the home team by the greatest margin was in round two in 1997 where the Seattle Supersonics were the Away Team and the Houston Rockets were the Home Team. Seattle had a net efficiency of 8.5. Houston had a net efficiency of 4.8. In this case Item Ten was -3.7. Despite being much less efficient than Seattle, Houston had the home court advantage. Both teams finished with 57 wins and 25 losses. Houston won game seven of this series at home and thus won this series 4 games to 3. Houston went on to the West Conference Final but lost to the Utah Jazz 4-2.
DATABASE ITEM ELEVEN: Home Team Net Efficiency minus Away Team Net Efficiency plus the Home Court Advantage Adjustment: The adjustment is always 1.4 points which represents the advantage that the home team has expressed in terms of net efficiency. Having home court advantage is approximately equivalent to having a net efficiency that is 1.4 points better than the one calculated from the regular season.
This Item Eleven essentially tells you how close the series should be, with the home court advantage factored in.
For example, for the Seattle vs. Houston series just discussed, Houston’s net efficiency was boosted from 4.8 to 6.2. Seattle still had the better net efficiency (8.5) but it lost game seven in Houston. In this case Item Eleven was 6.2 minus 8.5, which equals negative 2.3. Remember, this being negative is very unusual. Only 8 percent of series have negative numbers for Item Ten, and even fewer still have negative numbers after 1.4 is added to the home team’s net efficiency.
As another example, for the Miami-Chicago series discussed just prior to the Seattle-Houston one, Chicago had home advantage and so its net efficiency was boosted from 11.0 to 12.4; Miami’s net efficiency remained minus 4.2. In this case Item Eleven was 12.4 minus negative 4.2, which equals 16.6.
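A sketch of Items Ten and Eleven using the two series just discussed (results rounded for display):

HOME_COURT_ADJUSTMENT = 1.4  # expressed in net efficiency points

def item_ten(home_net_eff, away_net_eff):
    return home_net_eff - away_net_eff

def item_eleven(home_net_eff, away_net_eff):
    # The same difference, but with the home team credited 1.4 points of net
    # efficiency for having the home court advantage.
    return (home_net_eff + HOME_COURT_ADJUSTMENT) - away_net_eff

print(round(item_ten(11.0, -4.2), 1))     # 15.2 (Chicago vs. Miami, 1992)
print(round(item_eleven(11.0, -4.2), 1))  # 16.6
print(round(item_eleven(4.8, 8.5), 1))    # -2.3 (Houston vs. Seattle, 1997)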
DATABASE ITEM TWELVE: Favored Team: This field is a text field and is either “Home” or “Away” depending on which team is favored. If Item Eleven is positive, as it is most of the time, the team with home court advantage was favored, and vice versa. Out of the total of 433 series, only 14 have been ones where the team without the home court advantage was favored to win the series. These series have been split seven apiece: seven times the Away Team won as expected and seven times the Home Team won unexpectedly. None of these were all that surprising upsets because the Away Team was favored by only a small amount in all of them.
The favored team needs to be clearly identified so that the expected wins and losses process can be worked relatively easily; read on for details.
DATABASE ITEM THIRTEEN: Away Team Actual Wins: The number of games actually won in the series by the Away Team.
DATABASE ITEM FOURTEEN: Home Team Actual Wins: The number of games actually won in the series by the Home Team.
DATABASE ITEM FIFTEEN: Expected Away Team Wins
DATABASE ITEM SIXTEEN: Expected Home Team Wins
For items fifteen and sixteen, the first step is that whichever team is favored (according to Item 11 and as shown in Item 12) is expected to win the number of games that wins the series. For best of seven series, the expected wins for the favored team is four. For best of five series, the expected wins for the favored team is three.
The expected wins for the team not favored (the underdog) is determined based on a very carefully constructed and calibrated scale. For very close series, the expected wins of the underdog is one game fewer than the number of wins needed to win the series. In a best of seven series between two very closely matched teams, the expected number of wins for the underdog is three (and the favored team is expected to win four games).
At the opposite extreme, for series where the difference between the teams is large, which is most common in the first round, the expected number of wins of the underdog is often zero.
In between the extremes of razor close series and very lopsided series, the expected number of wins for the underdog ranges between one fewer than the number of wins needed to win the series (which is three for best of sevens) and zero. The scale is calibrated down to net efficiency differences of just 0.1. Here is the actual scale with just the round number efficiency differences shown (a simple code sketch of the lookup follows the scale):
DIFFERENCE IN NET EFFICIENCY VERSUS EXPECTED WINS BY UNDERDOG SCALE
The first number just below here is Item Eleven (the difference in the net efficiencies with the home court adjustment factored in) and the second number is the expected wins for the underdog in a best of seven series.
0.0: 3.00 games
1.0: 2.90 games
2.0: 2.78 games
3.0: 2.58 games
4.0: 2.28 games
5.0: 1.93 games
6.0: 1.53 games
7.0: 1.20 games
8.0: 0.90 games
9.0: 0.60 games
10.0: 0.40 games
11.0: 0.20 games
12.0: 0.00 games
(If the gap is greater than 12, zero games are expected to be won by the underdog.)
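As promised just above, here is a simple sketch of how the scale can be looked up in code. The full 0.1-step calibration is not reproduced here; straight-line interpolation between the round numbers is an assumption on our part, but it does reproduce the 2.6 to 2.66 conversion used in the worked example later in this Section. As always, the names are ours and are purely illustrative.

    # Round-number points of the best-of-seven scale shown above.
    SCALE_BEST_OF_SEVEN = [
        (0.0, 3.00), (1.0, 2.90), (2.0, 2.78), (3.0, 2.58), (4.0, 2.28),
        (5.0, 1.93), (6.0, 1.53), (7.0, 1.20), (8.0, 0.90), (9.0, 0.60),
        (10.0, 0.40), (11.0, 0.20), (12.0, 0.00),
    ]

    def expected_underdog_wins(item_eleven):
        diff = abs(item_eleven)   # when Item Eleven is negative the Away Team is favored; the magnitude is what matters
        if diff >= 12.0:
            return 0.0            # gaps of 12 or more: the underdog is expected to win zero games
        for (x0, y0), (x1, y1) in zip(SCALE_BEST_OF_SEVEN, SCALE_BEST_OF_SEVEN[1:]):
            if x0 <= diff <= x1:
                # straight-line interpolation between the two nearest round numbers (an assumption)
                return y0 + (diff - x0) * (y1 - y0) / (x1 - x0)

    print(round(expected_underdog_wins(2.6), 2))   # 2.66, as in the Boston-Los Angeles example below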
The scale, which you could look at as the all-important core of the entire Playoffs Sub Rating (and even of the entire RCR system), was very carefully constructed in accordance with and validated against all of the actual historical results of NBA playoff series from 1959-60 through 2009-10.
There is a different scale for best of five series which is constructed, calibrated, and validated in the same way.
So now we have Items Fifteen and Sixteen, the expected wins for each team, and we are ready to move on.
DATABASE ITEM SEVENTEEN: Actual Away Team Wins minus Expected Away Team Wins: Positive numbers are good and negative numbers are not good for the Away Team and its Coach.
DATABASE ITEM EIGHTEEN: Actual Home Team Wins minus Expected Home Team Wins: Positive numbers are good and negative numbers are not good for the Home Team and its Coach.
DATABASE ITEM NINETEEN: Away Coach: The Coach of the team that did not have the home court advantage is identified here. (This is a text field.)
DATABASE ITEM TWENTY: Home Coach: The Coach of the team that did have the home court advantage is identified here. (This is a text field.)
DATABASE ITEM TWENTY ONE: Away Coach Score
DATABASE ITEM TWENTY TWO: Home Coach Score
Items 21 and 22 are the most important and innovative end products coming out of the database.
Items 21 and 22 are calculated in a coordinated way rather than separately. For every playoff series, one of the coaches will have a positive Coach Score and the other one will have a negative Coach Score of the same magnitude (the additive inverse). For each series, if you add the two coach scores the result is always zero. For the entire database, if you add every single coach score the result is always zero.
These two coach scores are calculated for each playoff series in three steps:
STEP ONE
First, Item Seventeen times 100 is the preliminary Away Coach Score (Item 21). Similarly, Item Eighteen times 100 is the preliminary Home Coach Score (Item 22).
STEP TWO
Step two is that preliminary Item 21 and preliminary Item 22 are compared. Whichever is farther from zero is declared to be the “controlling score”. (Another way to think of this is that the absolute values of the two preliminary scores are compared, and whichever is greater is the “controlling score”.) Of course, using the absolute value is a very temporary thing; the final coach score will be negative whenever the controlling preliminary score was negative.
STEP THREE
The controlling score is the actual score for the corresponding coach. (The preliminary, the controlling, and the actual scores are all the exact same number.) The other score (the “non-controlling score” if you will) is discarded. In its place goes the inverse of the controlling score; in other words, the other coach’s final score has the same magnitude as the controlling score but the opposite sign. This inverse of the controlling score is the final score for the other coach.
What are we actually doing with this procedure? We are identifying the biggest expectation gap; specifically, we are identifying whether the Home Team and Coach had the biggest gap between expectation and result (either positive or negative) or whether it was the Away Team and Coach that had the biggest gap (either positive or negative). Once the biggest gap is identified and scored, the other coach receives the inverse or opposite score.
Note that the gaps for all the playoff series in the database should, if the model is statistically valid, add up to very close to zero. In other words, the absolute value of the sum of the negative gaps should be very similar to the sum of the positive gaps. If they are substantially different, the scale can be slightly adjusted in a process known as recalibration. This type of recalibration is very important and very effective for ensuring quality control and for ensuring the reliable validity of results. For complete details, see Section Six.
EXAMPLE OF THE CALCULATION OF COACH SCORES FOR A PLAYOFF SERIES
Here is an example; we’ll use the 2010 NBA Championship between the Boston Celtics and the Los Angeles Lakers. Boston was the Away Team and Los Angeles was the Home Team. The Coach of Boston was Doc Rivers and the Coach of Los Angeles was Phil Jackson. Boston had a net efficiency of 3.9 and Los Angeles had a net efficiency of 5.1. The difference (Item 10) was 1.2. Item 11 is where the home court adjustment of 1.4 is factored in, so Item 11 is 2.6. Los Angeles was the favored team.
According to the chart that QFTR uses that gives expected wins according to adjusted difference in net efficiency, the expected wins by the underdog in a best of seven series where the adjusted efficiency difference is 2.6 is 2.66. Boston, the underdog and the Away Team, actually won three games in that series. So for them, Item 17 (actual minus expected Away Team wins) was 3.0 minus 2.66 = .34.
Next, you can see that Item 21 (Away Coach Score) preliminary is .34 times 100 equals 34.
For Los Angeles, the expected number of wins (Item 16) was four and the actual number of wins was four. So Item 18 (actual minus expected Home Team wins) is 4 minus 4 equals zero. Then Item 22 (Home Coach Score) is 0 times 100 equals zero.
Now we compare the two preliminary coach scores:
Preliminary Away Coach Score: 34
Preliminary Home Coach Score: 0
The one farthest from zero (regardless of whether negative or positive) is 34, which is the one for the Away Team and the Away Coach. This is declared to be the controlling score, and the score of 34 is the coach score for this series for the Coach of the Away Team, which in this case was Doc Rivers. So in this particular series, the Away Team did a little better than expected, which earned the Coach, Doc Rivers, a “coach score” for this series of 34 points.
In accordance with step three (above) the inverse or opposite of the controlling score is minus 34 (-34). This is the score given to the Coach of the home team, which in this case was Phil Jackson. That is, Jackson’s preliminary score of zero is changed to minus 34 because Doc Rivers did a little better than he was supposed to according to the statistical model which, remember, is based on and validated by more than 600 playoff series played during a 50 year period ending in 2010.
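To make the three steps completely concrete, here is a minimal sketch of the whole coach score calculation, checked against the Boston versus Los Angeles numbers just discussed. This is our own illustrative code, not the actual spreadsheet, and the names are ours.

    def coach_scores(actual_away_wins, expected_away_wins,
                     actual_home_wins, expected_home_wins):
        # Step one: preliminary scores are the expectation gaps times 100 (Items 17 and 18 times 100).
        prelim_away = (actual_away_wins - expected_away_wins) * 100
        prelim_home = (actual_home_wins - expected_home_wins) * 100
        # Step two: whichever preliminary score is farther from zero is the controlling score.
        if abs(prelim_away) >= abs(prelim_home):
            away_score = prelim_away      # controlling score kept exactly as-is
            home_score = -prelim_away     # step three: the other coach gets its inverse
        else:
            home_score = prelim_home
            away_score = -prelim_home
        return away_score, home_score     # the two scores always sum to zero

    # 2010 Championship: Boston (away) 3 actual wins vs. 2.66 expected,
    # Los Angeles (home) 4 actual wins vs. 4 expected.
    print(coach_scores(3, 2.66, 4, 4))    # approximately (34.0, -34.0): 34 for Rivers, -34 for Jackson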
COACH PLAYOFF SCORES CLOSE TO OR EXACTLY ZERO
Note that with this method the only way for a coach score to be exactly zero is for the series to be decided exactly according to expectations. Realistically, the only series that can possibly be decided exactly according to expectations are ones which are supposed to be 4-0 routs (or 3-0 routs in best of fives). If the actual result is 4-0 (or 3-0), if in other words the actual result is identical to the expected result, both coaches will have coach scores for that series of zero. In this case there is no effect whatsoever on either coach’s playoff sub rating (or their overall RCR).
But coach scores can be very close to zero regardless of how close the series was expected to be. For example, the scale might project a series to be decided (statistically, of course) 4 games to 1.99 games. If the actual result is 4-2, then the underdog coach will have a coach score of 1: (.01 times 100). The favored coach has a coach score of -1 in this example.
The main point is that the model embedded in the database accurately measures the difference between expected and actual playoff wins for each playoff series (and for both coaches in each series). Again, the larger of the two differences (between actual and expected) is the operative one.
PLAYOFF COACH SCORES FOR ALL SERIES COACHED
For each coach, the combined total of all his coach scores for all series he coached is called his “Playoff Games Coaching Score”. This in turn is one of the four components of the Playoffs Sub Rating of the Real Coach Ratings system. As discussed earlier, this Playoff Games Coaching Score is more important than the other three components of the Playoffs Sub Rating combined.
NUMBER OF GAMES WON THAT SHOULD HAVE BEEN LOST OR NUMBER OF GAMES LOST THAT SHOULD HAVE BEEN WON
For each coach, the Playoff Games Coaching Score divided by 100 equals the number of games won that should have been lost (if positive) or the number of games lost that should have been won (if negative). This derived result is reported in the Coach by Coach Details Sub Section of the Rankings Section of Real Coach Ratings Reports. Although technically this is a statistical construct as opposed to exact reality, we know for a fact that the real life numbers are very, very similar to the calculated numbers.
By dividing the unexpected wins or losses by the total number of playoff games coached, we can then calculate a rate of unexpected wins for the good playoff coaches and the rate of unexpected losses for the bad playoff coaches. For more details, see Section Two above.
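In sketch form, with made-up numbers purely for illustration:

    def unexpected_playoff_wins(playoff_games_coaching_score):
        # Positive: games won that should have been lost; negative: games lost that should have been won.
        return playoff_games_coaching_score / 100

    def unexpected_win_rate(playoff_games_coaching_score, playoff_games_coached):
        # Rate of unexpected wins (or losses, if negative) per playoff game coached.
        return unexpected_playoff_wins(playoff_games_coaching_score) / playoff_games_coached

    # Hypothetical coach: a Playoff Games Coaching Score of +300 over 60 playoff games coached.
    print(unexpected_playoff_wins(300))    # 3.0 games won that should have been lost
    print(unexpected_win_rate(300, 60))    # 0.05 unexpected wins per playoff game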
SCORES FOR GAMES WON AND LOST ACCORDING TO EXPECTATIONS
Coaches’ playoff sub ratings do not change at all when they win games they were supposed to win or when they lose games they were supposed to lose. If a series is decided in exactly the way it is supposed to be, both coaches get the experience points (two points for each game) and they get nothing else.
You can see from this how it is not an exaggeration to say that the Playoffs Sub Rating completely ignores raw wins and losses. Instead, it awards only differences between actual and expected wins and losses.
This is not only valid but is much superior to awarding or penalizing anything at all based on raw wins and losses. Raw wins and losses are determined more by the quality of the players than by the quality of the coaches. What we want to know, and what the playoffs sub rating shows for each coach, is whether that coach won any games the players would not have won were it not for the above average coaching. And of course we also want to know for each coach whether that coach lost any games the players alone would not have lost were it not for the below average coaching.
This ends the primary and in detail discussion of the Playoffs Sub Rating. For those who are a little confused, and/or for those not convinced that the system just discussed in detail works well, please read the following, which is a revised version of a discussion that first appeared in the May 2010 User Guide (when the framework of the new system was established but all the details and the database were awaiting development). The following is a relatively simple but accurate and effective summary of the QFTR Playoffs Sub Rating system.
SUPPLEMENTARY, SUMMARY DISCUSSION OF THE COACHING SCORES FOR THE PLAYOFFS SUB RATING
For each playoffs series we start with four measures, the offensive efficiency of the two teams and the defensive efficiency of the two teams (all from the regular season, of course). Efficiency is how many points scored or how many points given up per 100 possessions. Over the course of the regular season, the thousands of possessions result in precise efficiency numbers where seemingly very small differences are actually big differences between teams that can easily be big enough to cause wins or losses in the playoffs.
Then for each team we subtract the defensive efficiency from the offensive efficiency to find the net efficiency. Most but not all playoff teams have positive net efficiency numbers and most teams that do not make the playoffs have negative net efficiency numbers.
Then we add a small “bonus” amount to the net efficiency of the team that has the home court advantage in the series.
Then we compare the two net efficiencies and whichever team is higher is the favorite. Of course this is true in real life: the team with the better net efficiency beats the other team the vast majority of the time, although when the differences are smaller this is not so certain.
The exact difference between the two net efficiencies is crucial, because it determines the likelihood or probability of the favored team winning. The greater the difference in net efficiency, the greater the probability that the better team will win the series. Assuming no injuries, in many first round series and even occasionally in a second round series, the probability that the better team will win the series is almost 100%. QFTR has carefully constructed a scale to translate deceptively small differences in net efficiency into how many games the underdog should win on average in a best of seven (and in a best of five) series. For example, if the adjusted difference in net efficiency is 5.0, the underdog will on average win about 1.9 games in a best of seven series (with the favored team winning 4 games). This average number of wins by the underdog is usually called the “expected number of wins”.
Next, for each playoff series, we compare the number of games actually won and lost by the coach versus the expected number of wins and losses. The difference between the actual and the expected is the all-important thing; this difference is then amplified (with a multiplier) to accurately reflect the great (and underestimated by the general public) importance of coaching in the playoffs.
Unexpected wins and losses are rewarded and penalized heavily but not excessively. Unexpected playoff losses are one of the worst things that can happen to a team and a franchise. Among other things, unexpected losses waste the owners’ money, because they partly waste the efforts of a lot of players and managers, and because they make the franchise less likely to attract top free agents. Obviously, unexpected losses also waste the talents and efforts of the players. Unexpected playoff losses are a nightmare and the fewer of them you have the better.
Note that for a coach who is exactly good enough to win exactly the number of playoff games he is supposed to win and no more than that, statistically speaking, unexpected playoff losses are going to be exactly offset by unexpected playoff wins once the sample size (number of playoff games in this case) is large enough. In real life, this means that all coaches are going to have a series once in a while where the team performs below standard (and loses one or more games that should have been wins), but these will statistically eventually be offset by that coach's unexpected playoff wins.
This is the most crucial thing you have to keep in mind: on the downside, the main purpose of the playoffs sub rating system is to flush out and penalize coaches who have more unexpected playoff losses than unexpected playoff wins. On the upside, the primary purpose of the system is to flush out and reward coaches who have more unexpected playoff wins than unexpected playoff losses.
Quest for the Ring already knows many of the basketball strategies and tactics that work better in the playoffs than in the regular season, and you do too if you read the site, because we review and illustrate most of them from time to time.
======= SECTION FOUR: DISCUSSION OF AND CALCULATION OF FACTORS USED FOR THE REGULAR SEASON SUB RATING =======
There are four components of the Regular Season Sub Rating:
(1) Number of Regular Season Games Coached
(2) Number of Consecutive Regular Season Games Coached with Current Team
(3) Number of Regular Season Wins
(4) Number of Regular Season Losses
1 NUMBER OF REGULAR SEASON GAMES COACHED
One Point is given for each regular season game coached up to 500 games, which is about six seasons worth of games. If a Coach has not learned just about everything he needs to by this point, it is unlikely he ever will, so the award for experience is sharply reduced for all games coached beyond 500. 0.25 points (1/4 of a point) is given for games 501 through 1,000. 0.06 points (about 1/16 of a point) is given for all games over 1,000. Note that in early versions nothing was given for games coached in excess of 1,000; the latest version corrects that very minor error by recognizing that even long, veteran coaches might make extremely small improvements in their later years.
What about rookie and near rookie coaches? Just because they have never coached in the NBA, should their experience rating be zero? No, I don't believe so. They either have substantial coaching experience in other Leagues, or they were extremely talented and/or intelligent players, or both, or else they would not have been hired to be a head Coach in the NBA. So any coach who has coached for fewer than 200 NBA games is given exactly 200 points for experience. So rookie coaches start out with Real Coach Ratings of 200 and they go up or down from there. For new coaches, the Regular Season Games Coached is fixed at 200 until the coach has coached 200 games; then it goes up from there (by 1 for each game through 500, by 0.25 for games 501 through 1,000, and by 0.06 for any games above 1,000).
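Here is a minimal sketch of the experience points schedule just described; the code and names are ours, and the 1,668-game figure for Jerry Sloan comes from the discussion of consecutive games just below.

    def regular_season_experience_points(games_coached):
        # New and rookie coaches: a flat 200 points until 200 games are reached.
        if games_coached < 200:
            return 200.0
        points = min(games_coached, 500) * 1.0                        # games 1-500: 1 point each
        if games_coached > 500:
            points += (min(games_coached, 1000) - 500) * 0.25         # games 501-1,000: 0.25 points each
        if games_coached > 1000:
            points += (games_coached - 1000) * 0.06                   # games beyond 1,000: 0.06 points each
        return points

    print(regular_season_experience_points(120))    # 200.0 (rookie floor)
    print(regular_season_experience_points(500))    # 500.0
    print(regular_season_experience_points(1000))   # 625.0
    print(regular_season_experience_points(1668))   # 665.08 (Sloan's game count coming into 2009-10)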
2 NUMBER OF CONSECUTIVE REGULAR SEASON GAMES COACHED WITH CURRENT TEAM
This is a supplementary experience score which most benefits coaches who have gone the longest without being fired by their current teams. The points given are 0.30 (3/10 of a point) for all games coached, up to 1,000 games, by the coach for the team the Coach is currently working for.
One side of the coin is that the coach must be doing what the organization wants in order to avoid being fired, and he can't be a total failure basketball wise, so starting with those things he deserves credit in proportion to how long he has kept his post. The other side of the coin is that the more experience a Coach has with a particular team, the more valuable he is to that franchise, because he knows everybody and everything concerned with the franchise better and better with each passing year. Generally speaking, the more successive games a Coach has coached with the same team, the more effectively and efficiently he can help the team squeeze out wins that would otherwise be losses.
Jerry Sloan, who coming into 2009-10 had coached a mind boggling 1,668 games for the Utah Jazz, is the ultimate example of a Coach who, due to his many years with the same team, is going to be more effective and efficient than he would be if he had just switched to a different team. Due partly to this factor, do not be surprised if the Jazz become a losing team shortly after Sloan finally retires.
Another name for this factor might be "franchise specific experience." For 2009-10 the Washington Wizards hired a new head Coach, Flip Saunders, who has a lot of prior experience with other teams and has a relatively high rating. But he is brand new to the Wizards, so be careful not to expect miracles or even to assume that his coaching is going to be as good as it has been in the past from the get go. Look instead for the Wizards to get a little better as the season goes along and in the coming years if Saunders remains the coach. This is because Saunders needs time to merge his skills and abilities with the specific factors involved with making the Wizards a winning team.
3 NUMBER OF REGULAR SEASON WINS
Four points is assigned per regular season win.
4 NUMBER OF REGULAR SEASON LOSSES
Minus 5.5 points is assigned per regular season loss.
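Putting the four components together, here is a minimal sketch; the experience points come from the schedule sketched above, and all of the numbers in the example are hypothetical.

    def regular_season_sub_rating(experience_points, consecutive_games_current_team,
                                  wins, losses):
        points = experience_points                                    # component 1 (see the sketch above)
        points += min(consecutive_games_current_team, 1000) * 0.30    # component 2: capped at 1,000 games
        points += wins * 4.0                                          # component 3: 4 points per win
        points += losses * -5.5                                       # component 4: minus 5.5 points per loss
        return points

    # Hypothetical veteran: 600 experience points (900 games coached), 400 straight
    # games with his current team, and a 480-420 career regular season record.
    print(regular_season_sub_rating(600, 400, 480, 420))   # 330.0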
WHY THE PENALTY FOR LOSING A REGULAR SEASON GAME USUALLY EXCEEDS THE GAIN FOR WINNING ONE
You must keep in mind that any coach who has been fired for not winning enough in the regular season, for not winning enough in the playoffs, or for both, and has not been rehired by another team, is not on the list of coaches being rated. We don't care about them. In theory we are supposed to be evaluating mostly coaches who are among the best in the country.
The whole idea in multi-billion dollar professional sports is to win more than you lose, and that most obviously and most definitely includes the coaches. So a 50/50 record in either the regular season or in the playoffs is not good enough long term, and coaches who are not better than .500 sooner or later get fired and not rehired, and those who have met that fate already are not on the list of current coaches.
To reflect the reality that coaches who cannot win more than they lose are sooner or later going to be fired, and will most likely never advance in the playoffs before they are fired, it is necessary to make the penalty for a loss a bigger number than the reward for a win. But we have to avoid getting carried away. So when the amount given for experience is added in, the apparent gap between the award for winning and the penalty for losing shrinks down to a small amount.
Now consider the true underlying net positive and negative scores for the four types of regular season games and results, which you get by combining the experience points with the points for the win or the loss:
TRUE REGULAR SEASON COACH GAME SCORES FOR WINS
For the majority of coaches, this will be 5 Points: 4 points for the win and 1 point for the experience. Here is the breakdown for each type of coach:
Rookie and Very New Coaches (less than 200 games): 4 for the win + 0 for the experience equals 4.0 points
Relatively New Coaches (201 to 500 games): 4 for the win + 1 for the experience equals 5.0 points
Veteran Coaches (501 games to 1,000 games): 4 for the win + .25 for the experience equals 4.25 points
Ultra Veteran Coaches (more than 1,000 games): 4 for the win + 0.06 for the experience equals 4.06 points
TRUE REGULAR SEASON COACH GAME SCORES FOR LOSSES
For the majority of coaches, this will be -4.5 points: -5.5 points for the loss and 1 point for the experience. Here is the breakdown for each type of coach:
Rookie and Very New Coaches (less than 200 games): -5.5 for the loss + 0 for the experience equals -5.5 points
Relatively New Coaches (201 to 500 games): -5.5 for the loss + 1 for the experience equals -4.5 points
Veteran Coaches (501 games to 1,000 games): -5.5 for the loss + .25 for the experience equals -5.25 points
Ultra Veteran Coaches (more than 1,000 games): -5.5 for the loss + 0.06 for the experience equals -5.44 points
In summary and in comparison:
--Rookie and very new coaches get 4 points for regular season wins and lose 5.5 points for losses.
--Relatively new coaches get 5 points for regular season wins and lose 4.5 points for losses.
--Veteran coaches get 4.25 points for regular season wins and lose 5.25 points for losses.
--Ultra Veteran Coaches get 4.06 points for regular season wins and lose 5.44 points for losses.
Important note: the rookie and very new coaches actually get the same points as the relatively new coaches when you look at the bigger picture, because they already have received 200 experience points for their first 200 games.
The key thing to note here is that with respect to wins and losses the regular season sub rating is a little biased in favor of relatively new coaches versus the veteran coaches. This is on purpose, of course. This substantially offsets what would otherwise be an unfair advantage in the rating system. The more experienced coaches are expected to do somewhat better in winning and losing in order to achieve a net positive from their winning and losing. This is the primary mechanism used by QFTR that substantially evens the playing field between coaches of widely differing amounts of experience, without being unfair to any type of coach. Without this slightly differing treatment, the ratings system would be biased to some extent in favor of the veteran coaches, because the veteran coaches are eligible for far more points from the sheer number of experience points they get, from the consecutive games coached with current team item, and often from any or all of the items in the playoff sub rating system.
In any future tweaking of the RCR system, one of the areas most likely to be tweaked is points given or taken away for regular season wins or losses by the different types of coaches. A case can be made that relatively new coaches should be even more favorably treated in the regular season relative to the veteran coaches than they already are. But if there is any future tweaking, we will as always be careful to avoid going overboard.
See Section 5 and especially Section 6 for more on the difficulties in comparing coaches with widely different numbers of games coached.
======= SECTION FIVE: EVALUATION OF COACHES AND SPECIFIC INTERPRETATION OF RATINGS =======
The primary objective of Quest for the Ring (QFTR) is to determine and report exactly how NBA playoff games are won and lost. Since in the playoffs, and especially in later rounds of the playoffs there is usually very little difference between how good the players are, any difference in the coaches, sometimes including even very small differences, can determine who wins the series. Therefore, one of the most recurring themes at QFTR is what is good and what is bad coaching for the playoffs. This means that QFTR gives very heavy attention to coaching in its reports on the main home page.
Further, the general public is unaware of just how important coaching is in the playoffs, especially in the Conference Finals and in the Championship, and this fact makes QFTR all the more motivated to keep reporting on the subject. Very, very few other basketball writers attempt to cover this subject at all; it’s like a lonely frontier out here. Regardless of there being very little if any competition for reporting on pro basketball coaching, QFTR uses the same high quality standards and high and reliable quality control for this area as it does for other areas (which other writers and broadcasters do attempt to cover).
Since there is so much on the subject in the hundreds and hundreds of reports on the QFTR home page, any single article on the subject, assuming it was not a full scale and lengthy book, could only highlight the main points. Similarly, this Section of this User Guide (which obviously can not be even a short book in length let alone a long one) can only discuss some of the most important points about coaching in the playoffs in particular and in the NBA in general.
Moreover, this Section has the second objective of explaining specifically how to use the overall ratings and the two sub ratings of the Real Coach Ratings System. The need for this second focus further limits the amount of coverage we can devote to the evaluation of coaches topic. We will try to more than scratch the surface here, but trust me; this topic is way too big for this Section of this Guide.
Given all of the limitations we have for this Section, anyone who wants “full and complete coverage” of what good and bad coaching is in the NBA, and especially in the playoffs, should read any or all of hundreds of reports that are at the QFTR home page.
“Evaluation of coaches” generally will be covered first in this dual focus Section and then “specific interpretation of ratings” will be last.
PART ONE OF TWO PARTS OF SECTION FIVE: EVALUATION OF COACHES
IMPACT OF COACHING IN THE REGULAR SEASON VERSUS THE PLAYOFFS
Theoretically, unless he is stuck with a truly lousy roster, any reasonably good coach can win a lot of regular season games and get his team into the playoffs. Plus, any coach at all, including a bad one, can squeak a very good or great team into the playoffs. For any reasonably good coach, merely getting into the playoffs is really not much of an accomplishment at all.
Many, many owners, managers, and fans do not seem to understand this, but the only thing that really matters with regard to coaching is what happens in the playoffs. Only the truly good coaches can win in the playoffs. The playoffs are where the wheat is separated from the chaff. In the NBA, the regular season is quite honestly nothing more than the preseason for the "playoff season," which is the only season which really matters when all is said and done.
Another way to look at the regular season is that it is a sort of D-League for the off-season. What I mean by that is that owners, managers, and coaches should be watching other teams in the regular season so that they can spot up and coming players who they should try to obtain in the off season (and to a lesser extent in trades in the regular season prior to the trading deadline in February).
Playoff games are generally more intense in all respects: individual players' efforts, team play as a whole, and coaching efforts are all ramped up. And as most of the general public is generally aware of, most teams ramp up their defending in the playoffs.
CERTAIN VETERAN PLAYERS CAN COACH THEMSELVES TO SOME EXTENT
Always keep in mind that older, more veteran teams can coach themselves to one extent or another, particularly if the roster is both highly skilled and highly experienced. It doesn't matter who comes up with the winning schemes and patterns; what matters is that someone does. Younger teams, however, always need a good coaching staff to make headway in the playoffs.
Quest for the Ring has gone on record claiming that the 2007-08 Champion Boston Celtics are a good example of a team that could coach itself well to a large extent.
However, coaches are important in the late playoff rounds even for teams that can partly coach themselves. Coaches determine playing times, which are much more important than most people realize. If the coach of a really good, veteran team that is to some extent “coaching itself” often inserts the wrong players in the game at the wrong time and/or does not have the playing times roughly correct, and/or has a player completely benched who should be playing, then the team will be damaged from bad coaching regardless of how well the players are “coaching themselves”.
COACHES' NUMBER ONE OBJECTIVE IS TO AVOID BEING FIRED
The number one objective for all coaches, but especially for rookie and newer coaches, is to avoid being fired. Calculations indicate that the average Real Coach Rating is currently 706 and the median is about 275. So the objective of all rookie coaches must be to increase their starting rating of 200 toward the median and later on toward the average of 706 in as few years as possible.
Although there will occasionally be exceptions to the rule, coaches who move up even a little from 200 are generally safe from being fired while those who move down from 200 are not safe. Even achieving just a 250 gives the coach a little job security, 325 gives substantial job security, and 400 gives very substantial job security. I’m not saying that the job security achieved for those relatively modest ratings is a good or right thing. Rather, I am merely reporting what is going on in the real world.
The firing of coaches with ratings higher than 250 is relatively uncommon. But when a coach who has a rating of 250 or higher is fired, he is likely to be hired by a new team, most often for the very next season, but sometimes after a delay of a year or two or three. Coaches with ratings higher than 400 who are fired are very likely to be hired by a new team within at most a few years. If a coach with a rating higher than 600 is fired but is never rehired, then something exceptional happened; for example, maybe there was a complete and humiliating collapse in a playoff series. Or perhaps there was a vicious argument between that coach and one of the managers or the owner.
Note also that there is a huge exception to the general rule of thumb that coaches with ratings below 200 (and especially those with ratings below zero) are not safe from being fired. Long veteran coaches, those who have coached about 800 games or more, are often not fired even if they are very poor playoff coaches whose sharply below zero playoff sub rating drives their overall coach rating below zero. This is because many owners do not understand that some coaches do well in the regular season but can not do well in the playoffs, or worse, because of owners who are willing to settle for a good, “dependable” regular season coach even if he is a bad playoffs coach.
You can think of the range between 200 and 400 as "the proving ground" for coaches. Most coaches who drop below zero instead of going up from 200 during their first 3-6 years will be bounced out of the NBA. No mercy is given for coaches stuck during all of those years with sub par teams.
QFTR recommends that coaches who have ratings below 200 for more than about five straight years, and especially coaches who have ratings below zero for about five straight years should be fired unless the managers and owners involved are sure that the coach has not had competitive players to work with, or unless the managers and owners involved are sure that the coach is getting better at his job, or unless there is some other unusual mitigating factor.
Coaches, whether they are newer ones or long veterans, who maintain their jobs with Real Coach Ratings below 200, and especially with Real Coach Ratings below zero, are frequently going to be men who have very cordial relations with the managers and owners. In other words, they are being kept on the payroll because the managers and/or the owners involved personally like the coach in question enough to brush aside any concerns about whether that coach is doing a good enough job for their team. These dubious coaches are given the benefit of the doubt or, in other words, sort of a free pass. These free passes generally don’t last for longer than roughly six years for newer coaches, but can last indefinitely for long veteran coaches.
It is not just owners and managers who can be fooled into thinking that a coach is a good one just because he has been coaching for many, many years. It honestly seems that most basketball writers and broadcasters are fooled in this way also. And of course, much of the general public is also fooled.
It is also true that some managers and owners live in fear that they might go from bad to worse if they exchange one coach for another. They simply do not have enough courage to strike out and try a rookie or a near-rookie coach, or to pick up a coach who has been fired by another team but who deserves a second chance.
The key is balance. On the one hand you don't want to be stuck out of caution or fear with a veteran coach who is simply not among the best coaches. On the other hand, you can't just strike out and pick any one who has never coached an NBA team before but seems like he might be a good coach. Rather, you have to do a lot of homework and research. You have to spend a lot of time and make every effort to find that one coach out of a hundred candidates who will actually become one of the better and maybe even one of the best NBA coaches.
Note that in the real world, most owners who err on this subject do so by erring on the side of too much caution or fear. In the real world, it appears to be pretty rare for an owner to choose a coach who has never coached in the NBA before who ends up being, in effect, a waste of time. Due to the fact that coaching in the NBA is at least a little more complicated and a lot more important than most people and owners think it is, owners who gamble a little by trying a coach who has never coached in the NBA before have a fairly good chance to get a big reward for the little gamble.
THE COACH RUT AND WHY IT CAN EASILY HAPPEN TO OTHERWISE DECENT FRANCHISES
Teams should avoid getting stuck in a rut that the public is completely unaware of but that QFTR has proven exists. This rut is where a team has a very good regular season coach but a lousy playoffs coach. It can be extremely difficult to get out of this rut because it is very hard to fire a coach who usually does very well in the regular season.
Plus, which coaches are not good for the playoffs is basically a secret from the public. This is one of QFTR’s favorite and most important topics, and yet it took even us until November 2010 before we assembled all the hard proof and officially reported out which coaches are lousy playoffs coaches. And this is most likely the first time in history anyone carefully and mathematically investigated this. It took many hours of work to prove this beyond a shadow of a doubt and it was not very easy to do. So it is understandable that most people are in the dark and would not believe that there are a substantial number of coaches who are very good in the regular season but are poor in the playoffs.
The point is, this is basically unknown territory, so don't expect that these good in the regular but bad in the playoffs coaches are going to be fired when they should be (or never hired in the first place). Instead, expect that teams are going to make mistakes with these types of coaches year after year after year. People and things other than the coach will get the blame, and in some cases other people and things are also to blame. But the problem remains that this type of coach is very seldom if ever blamed simply because no one is aware that this type of coach exists and is fairly common.
Most lousy playoffs coaches get away with being lousy playoffs coaches year after year after year as long as they are good regular season coaches. A franchise can be in the dark about this for many years, for the entire time the coach is the coach. A team stuck with this type of coach will typically go along year after year thinking they have a chance to win the Quest, whereas their coach may be so poor in the playoffs that realistically they have no chance whatsoever to win it regardless of who the players are.
NEVER EVER HIRE A COACH WITH A POOR PLAYOFFS RECORD IF YOU WANT TO WIN A CHAMPIONSHIP
The best way to explain this section is with an example. The Denver Nuggets hired George Karl in January 2005 as their head coach despite the fact that he had a poor playoffs record and rating. RCR did not exist back then (nor would the Nuggets use RCR even now), but they did have Karl’s playoffs win/loss record, which should have been enough for them to avoid the mistake of hiring Karl. Specifically, when the Nuggets hired Karl, his playoffs record was 59-67. While coaching the Nuggets, Karl's playoffs record is 15-26 as of January 2011. So overall, his playoffs record as of January 2011 is 74-93. Percentage wise, Karl’s playoff record has gotten worse while he has coached the Nuggets, not better (despite a strong result in 2009). In short, Karl had a losing playoffs record when he was hired and it has only gotten worse since.
The Nuggets were wrong to hire Karl and they are also wrong not to fire him unless he wins the NBA Championship within the next year or two. By the way, the Nuggets probably were in 2007, definitely were in 2008, possibly were in 2009, and possibly were again in 2010 talented enough to win a Championship if the playoffs coaching had been top notch. The now fired Nuggets general managers of the 2006-2010 era were experts at bringing relatively obscure but surprisingly good players (especially surprisingly good scorers) to the Nuggets.
Coaches with losing playoff records are fired by all truly serious NBA franchises these days regardless of regular season records. The absolute top franchises, including at least the Lakers, the Celtics, and the Spurs, would never in the first place hire a coach with a losing record in the playoffs. If their coach ever dropped to where he had a losing playoffs record, he would be fired by the top franchises regardless of how fantastic the coach’s regular season record was.
Why did the Nuggets hire Karl? I can only offer educated guesses. The Nuggets either knew in advance they would never win the Quest with Karl and hired him anyway, or they figured incorrectly that Karl's playoff record was trumped by better aspects of Karl's record, or they decided that Karl's playoff record could be excused for irrational reasons, or there was some other unknown, off the wall reason for hiring Mr. Karl.
The most favored specific “off the wall” theory regarding why Karl was hired is that the Nuggets decided roughly in 2002 to go for a certain kind of player who can be a major bargain because other teams generally avoid that kind of player. The Nuggets decided to go for more volatile players who might need to be contained by a crack the whip type of coach so that they don't "fly off the reservation" and harm team cohesion and morale. Karl is in fact a good coach if you have a bunch of players more emotional and more volatile than average, because for one thing he will not hesitate to bench players who get enraged about this, that, or the other thing. He will bench anyone at any time and for any reason, good or not.
Whatever the Nuggets' management thought, they thought wrongly. If you are a team owner or manager, you can not afford to take any risk or to make any benign assumptions or weak rationalizations when you choose a head coach. If a coach has a poor playoffs record, you have no choice but to not hire that coach if you are serious about winning the Quest. There are going to be coaches who are good enough to do well in the regular season but not good enough to prevail in the playoffs. You should not be the goober who hires one of them, obviously. Let some other franchise/team get stuck in the mud for years and years with that type of coach.
I have to be blunt and a little repetitive here to make absolutely sure I am understood. You should never, ever do what the Nuggets did if you are serious about winning the Quest. Your coach should have a good record for BOTH the regular season and the playoffs. The playoff record is even more important than the regular season record.
Finally, before leaving this crucial subject, I am going to state that given the choice between on the one hand a younger coach who is considered to be a good or great up and coming coach, but who has no NBA playoff record at all, and not much of a regular season one, and on the other hand a long-term veteran coach who has a decent, good, or even great regular season record but a poor, losing playoffs record, you are better off choosing the young coach with no playoff record.
In point blank and clear summary, hiring a coach with a bad playoffs record is one of the worst things you can do if you want to win the Quest.
MORE ON THE EVALUATION OF GEORGE KARL
Ever since our project started QFTR has focused on George Karl more so than any other coach (simply because when we first started we only intended to be a Denver Nuggets site). This may sound sarcastic but we actually do not intend it to be: George Karl has, by doing things that are wrong (or unwise if you prefer) alerted QFTR to many things that you DON’T want to do if you are coaching playoff games in the NBA.
Karl will go down in history as not the only one, but certainly as one of the all-time most famous coaches, among those whose coaching beliefs and methods work much better in the regular season than they do in the playoffs. There have always been coaches like this, there are other coaches like this right now, and there will always be coaches like this. But Karl will always stand out as a particularly good example of this kind of coach, a “textbook case” if you will.
Out of twenty years in the playoffs, Karl has managed to get winning playoffs records in only four years. One of those was 2009, which was surprising to say the least. That year, Karl tried an ultra aggressive and energetic type of defending and proved that it can win you a few playoff games that you would otherwise have lost as long as the referees fail to call a good number of the fouls. However, the deep hole that Karl dug in many earlier years was so deep that the Nuggets' miraculous 2009 playoffs campaign was not enough to lift George Karl all that much in his playoffs sub rating. In the 2009 playoffs, his win-loss went from 62-83 to 72-89. (Then in 2010 it went to 74-93). Karl was still after 2009 and is still right now showing up in the win-loss and also in the ratings as a very poor playoffs coach.
PART TWO OF TWO PARTS OF SECTION FIVE: SPECIFIC INTERPRETATION OF RATINGS
In late 2010 QFTR evolved what was a general and vague coach recommendation system into a more organized and exact one, tied to Real Coach Ratings, that can be called the QFTR Coach Recommendation System (CRS). Separate playoffs and regular season recommendations are given for all NBA head coaches. These are given in a report that appears within a few days (or a few weeks at the most) following the Real Coach Ratings Reports. Specifically, the Reports with the official recommendations are scheduled for late August and for October; however, production limitations will sometimes cause them to be late.
QFTR gives two recommendations for each coach but paradoxically does NOT give any overall recommendation. Two main reasons explain this paradox. First, it turns out that there is a big, big difference for a lot of coaches in how well their coaching works out in the regular season versus how well it works out in the playoffs. It turns out that it is relatively common for pro basketball coaches to be very good regular season coaches but poor or very poor playoffs coaches. For these coaches, the way they look at and understand basketball and how they have their team playing works better in the regular season than it does in the playoffs. Because of this alone, making combined regular season / playoffs recommendations would be far less productive than you might think.
The second reason why we don't even attempt an overall recommendation is that franchises will look at the importance of the regular season and the playoffs differently. Franchises that already know they are most likely not going to be in the playoffs for a while, and also franchises that think the regular season is more important for them than the playoffs, might perhaps use the regular season recommendations more than the playoff ones.
However, QFTR strongly disagrees with any owner or manager who places the importance of the regular season above the importance of the playoffs. By rights, the playoffs should always be considered as more important than the regular season. If a team is not going to be making the playoffs this year it should by rights have a great playoff coach anyway, so when the team does make the playoffs in the near future it has the right coach for winning in the playoffs.
RECOMMENDATIONS ABOUT THE RECOMMENDATIONS
QFTR highly recommends that all franchises use the playoff recommendations more strongly than they do the regular season recommendations.
But some words of caution are in order. Never completely ignore the regular season recommendations. It is going to be very unusual for a great playoff coach to be a not so good regular season coach (unlike the reverse which is surprisingly common) but if there ever was a coach with an outstanding playoff record but a poor regular season record, you would want to avoid this coach as a kind of insurance policy against having the wrong coach overall. This scenario could play out if the number of playoff games coached was relatively low and a fluke amount of statistical error resulted in an artificially high playoffs rating (whereas meanwhile the lower regular season rating was exactly accurate).
At an absolute minimum, the playoffs should be considered equal in importance to the regular season and the playoff coach recommendations should be just as important as the regular season coach recommendations.
One thing QFTR could do (and what QFTR would do if forced to make an overall recommendation) would be to use a formula where the playoffs rating was more important than the regular season rating. Or for that matter we could change the overall Real Coach Ratings system so that it was even more weighted in favor of the playoffs than it already is. We choose not to do either of these things at this time because of the complexities already discussed and because of other factors not mentioned here.
To some extent this discussion about which recommendations to use is not completely on point, because obviously, the best thing and what you want is a coach who is above average for BOTH the playoffs and for the regular season. Unfortunately however such coaches are much rarer than most people think they are. It turns out that although it is not rocket science, coaching basketball at the NBA level is much more difficult and complex than most people think it is. And then NBA playoff coaching is more difficult and complex than regular season coaching is. Ironically, many of the head coaches themselves apparently underestimate how difficult their job is and many of them don’t even begin to understand the magnitude and nature of the differences between the regular season and the playoffs.
THE PHIL JACKSON ADJUSTMENT FOR THE PLAYOFFS COACH RECOMMENDATIONS
Phil Jackson is by far the best and most successful NBA playoffs coach among current and recent head coaches. Actually, he is most likely the best NBA playoffs coach of all time (although there are a handful of other ones who are in Jackson’s ballpark). Jackson has repeatedly won playoff games he wasn't supposed to win versus some of the very best of the other NBA coaches. Jackson has won just about 42 playoff games he wasn’t supposed to win out of a total of 323 playoff games. Jackson’s all time playoffs record is 225-98 but according to the QFTR investigation his “par record” is just 183-140.
This means that if you think (as most of the general public does) that Phil Jackson wins in the playoffs mostly according to how good his players are and that he has little or no impact on how many wins his teams get you are completely wrong. Jackson has had good teams, since he was “supposed to be” 183-140 in the playoffs but he boosted that to 225-98 and this was such a big improvement that we know, for example, that Jackson would not have won 11 rings (and very possibly not even half a dozen rings) if he were an average playoffs coach. We also know that Jackson would have won very few if any rings if he was a well below average playoffs coach.
Some coaches have come up against Phil Jackson in many more playoff games than others. Rick Adelman, Jerry Sloan, and Gregg Popovich lead this pack, having faced Jackson in 29, 27, and 26 playoff games respectively. Adelman has pretty well held his own, but Sloan and especially Popovich have been hammered by Jackson. After these three there is a group of five coaches who have faced Jackson in between 12 and 16 playoff games, and three out of five of these have been handed (by Jackson) a big bunch of losses that should have been wins. The damage to them, though, is far less than the damage to Popovich.
For a big majority of coaches, the more playoff games a coach has played against Phil Jackson, the more his Playoff Rating is going to be depressed because Jackson has heavily dominated in playoffs coaching. Therefore, for my playoff coach recommendations, I decided to remove most of the bias caused by big differences between coaches in the number of games versus Phil Jackson. For determining the recommendations, 4/5 or 80% of the scoring resulting from games versus Phil Jackson is removed.
The "Phil Jackson adjustment" is NOT done in the main Real Coach Ratings Report. All of the numbers in the playoffs sub ratings in that Report include all games played against Phil Jackson. Only in the official recommendations Report are in effect 80% of the games versus Phil Jackson taken out.
The advantages of the Phil Jackson adjustment outweigh the disadvantages. The main advantage is that without it, coaches who have been severely hammered by Jackson (due to having to play him more than other coaches) will have misleadingly low ratings.
However, the disadvantage is that if a coach goes up against Phil Jackson in the playoffs, the coach might in theory appear to be a little more competitive versus Jackson than he really is. Really though, that is a moot point because Phil Jackson’s ratings are far, far ahead of any other coach’s whether or not the other coach’s ratings are boosted by the Phil Jackson adjustment.
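In sketch form, the adjustment looks like this. We are assuming that a coach's playoff scoring can be split into the part earned in series against Jackson and the rest; the numbers below are hypothetical and the names are ours.

    JACKSON_REMOVAL_FRACTION = 0.80   # 4/5 of the scoring from games versus Jackson is removed

    def playoff_score_with_jackson_adjustment(total_playoff_score, score_from_jackson_games):
        # Used only for the official recommendations Report; the main RCR Report is unadjusted.
        return total_playoff_score - JACKSON_REMOVAL_FRACTION * score_from_jackson_games

    # Hypothetical coach: -500 overall playoff scoring, of which -300 came in series
    # coached against Phil Jackson.
    print(playoff_score_with_jackson_adjustment(-500, -300))   # -260.0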
RATINGS FOR NON-CURRENT COACHES CAN BE CALCULATED AND PROVIDED
Note that QFTR can in theory include in these recommendations any coach who has ever coached in the NBA (subject to the 25 playoff games and 200 regular season games minimums). If you need a specific coach evaluated, contact QFTR.
EVALUATION SCALES
QFTR has had evaluation scales for players since 2007, but it took until late 2010 before evaluation scales for coaches were developed. Prior to then the overall RCR system was not sufficiently developed to warrant a formal evaluation scale. As already mentioned, the relevant measure for the playoffs recommendation is the Playoffs Sub Rating of the Real Coach Ratings System with the Phil Jackson adjustment included. The relevant measure for the regular season recommendation is the Regular Season Sub-Rating of the Real Coach Ratings System.
Note that after Phil Jackson retires (almost certainly in 2011) the Phil Jackson adjustment will be phased out. What will probably happen is that the adjustment will be cut by 10% each year. In 2010 and 2011, 80% of the effect from all Phil Jackson encounters is removed from each coach’s score. For 2012 that removal percentage will probably be 70%, for 2013 it will probably be 60%, for 2014 it will probably be 50%, and so on until it is completely eliminated. It is very unlikely that QFTR will ever again need to have an adjustment due to a Coach who is far better than any other.
EVALUATION SCALE FOR COACHES FOR THE NBA PLAYOFFS
--At least 25 playoff games must be coached for the evaluation to be valid and official.
--The measure used is the Playoffs Sub Rating of the Real Coach Rating System.
--The effects from 80% of coach’s games versus Phil Jackson are removed.
Absolute Highest Possible Recommendation: 1,200 or more
Very Highly Recommended: 900 to 1,199
Highly Recommended: 600 to 899
Recommended: 350 to 599
Neither Recommended nor Not Recommended: 100 to 349
Not Recommended: -150 to 99
Strongly Not Recommended: -450 to -151
Very Strongly Not Recommended: -750 to -451
Absolute Lowest Possible Recommendation: -751 and less
WHEN DOES QFTR GUARANTEE THAT A COACH WILL NEVER WIN THE QUEST FOR THE RING?
The relevant measure is the Playoffs Coach Score with the Phil Jackson adjustment included. The guarantee is NOT based on the Playoff Sub Ratings, which add the experience factor and any Championship points earned by coaches to the Playoffs Coach Score. Remember though that the Playoff Coach Scores are the dominant factor in the Playoff Sub Ratings. The Playoff Coach Scores average about 150 points less than the Playoff Sub Ratings.
GUARANTEE LEVEL: -750 or less
That is, QFTR guarantees that any Coach with a Playoffs Coach Score of -750 or less will never win The Quest for the Ring.
If after being added to the guarantee list a coach wins one or more playoff games that should have been losses, he will be removed from the list if the score becomes higher than -750.
EVALUATION OF COACHES FOR THE REGULAR SEASON
--At least 200 regular season games must be coached for the evaluation to be valid and official.
--The measure used is the Regular Season Sub Rating of the Real Coach Rating System.
Absolute Highest Possible Recommendation: 1,300 and more
Very Highly Recommended: 1,050 to 1,299
Highly Recommended: 800 to 1,049
Recommended: 550 to 799
Neither Recommended nor Not Recommended: 350 to 549
Not Recommended: 100 to 349
Strongly Not Recommended: -150 to 99
Very Strongly Not Recommended: -400 to -151
Absolute Lowest Possible Recommendation: -401 and less
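Both scales can be applied mechanically. Here is a sketch of the lookup (the labels and cutoffs are copied from the two scales above; the little helper function is ours). As a check, it reproduces the Larry Brown and George Karl playoff evaluations discussed in the next sub section, using the sub ratings as printed there.

    PLAYOFF_SCALE = [           # (lower bound of the band, label), highest band first
        (1200, "Absolute Highest Possible Recommendation"),
        (900,  "Very Highly Recommended"),
        (600,  "Highly Recommended"),
        (350,  "Recommended"),
        (100,  "Neither Recommended nor Not Recommended"),
        (-150, "Not Recommended"),
        (-450, "Strongly Not Recommended"),
        (-750, "Very Strongly Not Recommended"),
    ]

    REGULAR_SEASON_SCALE = [
        (1300, "Absolute Highest Possible Recommendation"),
        (1050, "Very Highly Recommended"),
        (800,  "Highly Recommended"),
        (550,  "Recommended"),
        (350,  "Neither Recommended nor Not Recommended"),
        (100,  "Not Recommended"),
        (-150, "Strongly Not Recommended"),
        (-400, "Very Strongly Not Recommended"),
    ]

    def recommendation(rating, scale):
        for lower_bound, label in scale:
            if rating >= lower_bound:
                return label
        return "Absolute Lowest Possible Recommendation"

    print(recommendation(2199, PLAYOFF_SCALE))   # Absolute Highest Possible Recommendation (Brown)
    print(recommendation(-648, PLAYOFF_SCALE))   # Very Strongly Not Recommended (Karl)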
HOW TO INTERPRET DIFFERENCES IN RATINGS
The best way to explain this is with the aid of an example. We will use Larry Brown versus George Karl from the 2010 Real Coach Ratings Look Ahead Version, published in November, 2010. Rounded to the nearest whole number, Brown’s overall rating is 2,420. His Playoffs Sub Rating is 2,199 and his regular season Sub Rating is 221. George Karl’s overall rating is 405. His Playoffs Sub Rating is -648 and his regular season Sub Rating is 1,053.
Comparing directly:
Larry Brown / George Karl
Playoffs: 2,199 / -648
Regular Season: 221 / 1,053
Overall: 2,420 / 405
The reason this is a very good example to use here is that Brown and Karl are the two completely different types of coaches we often talk about at QFTR. Brown is a high quality playoffs coach whereas his regular season record is surprisingly poor. Karl is precisely the opposite: he is a very low quality playoffs coach whereas his regular season record is surprisingly good. Comparing two coaches who are not opposites the way these two are is easier.
The first and most important thing to do is to look at the evaluations using the scales above:
Larry Brown / George Karl
Playoffs: Absolute Highest Possible Recommendation / Very Strongly Not Recommended
Regular Season: Not Recommended / Very Highly Recommended
Overall: There is no evaluation scale for the overall ratings; see above for an explanation.
QFTR strongly recommends that the playoffs ratings and recommendations be given priority over the regular season ones. In numerical terms, QFTR recommends that playoffs ratings be considered between 40% and 80% more important than regular season ones. Therefore, in this example QFTR would recommend Brown over Karl by a fairly wide margin.
Compare each coach’s most favorable evaluation and then separately compare each coach’s least favorable evaluation. In this example Brown’s worst evaluation (Not Recommended) is not as bad as Karl’s worst evaluation (Very Strongly Not Recommended). Karl’s worst is two notches worse than Brown’s worst. Also, Brown’s best evaluation (Absolute Highest Possible Recommendation) is one notch better than Karl’s best evaluation (Very Highly Recommended). Brown is ahead of Karl when you compare the higher of their evaluations AND when you compare the lower of their evaluations.
EYEBALLING NUMERICAL DIFFERENCES
What if you are looking at ratings and, for one reason or another, you are not checking the evaluation scales? In this sub section we’ll give you some advice about how to interpret actual ratings and the differences between ratings.
Not counting a once-in-a-century, all-time-greatest playoff coach like Phil Jackson, the overall range of the Playoffs Sub Rating is going to be from approximately -1,000 to 2,500. The range is 3,500 points. The average at any time is going to be roughly 100 and the median, not counting the zero ratings, is going to be roughly 0. Since coaching playoff games is a high level skill, the median score is lower than the average score.
To make quick eyeball evaluations, start with -1,000 and divide the range (of 3,500) into ten equal mini ranges of 350 points each. The first one would be from -1,000 to -651; the second one would be from -650 to -301, and so on. Assign a simple zero to 10 rating to each category:
-1,000 to -651 > 1
-650 to -301 > 2
-300 to 49 > 3
50 to 399 > 4
400 to 749 > 5
750 to 1,099 > 6
1,100 to 1,449 > 7
1,450 to 1,799 > 8
1,800 to 2,149 > 9
2,150 to 2,500 > 10
Scores less than -1,000 could be translated as zero while scores greater than 2,500 (such as with Phil Jackson) could be translated as “off the scale”. Now you can compare any number of coaches for the playoffs using very simple single numbers.
For the regular season, the overall range of the Regular Season Sub Rating is going to be from approximately -500 to about 2,000. The range is 2,500 points. The average at any time is going to be roughly 400 and the median is usually going to be about 200, which is the starting Sub Rating for all rookie coaches.
To make quick eyeball evaluations, start with -500 and divide the range (of 2,500) into ten equal mini ranges of 250 points each. The first one would be from -500 to -251; the second one would be from -250 to -1, and so on. Assign a simple zero to 10 rating to each category:
-500 to -251 > 1
-250 to -1 > 2
0 to 249 > 3
250 to 499 > 4
500 to 749 > 5
750 to 999 > 6
1,000 to 1,249 > 7
1,250 to 1,499 > 8
1,500 to 1,749 > 9
1,750 to 2,000 > 10
Scores less than -500 could be translated as zero while scores greater than 2,000 (such as with Phil Jackson) could be translated as “off the scale”. Now you can compare any number of coaches for the regular season using very simple single numbers.
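The same eyeball conversion can be written as a small parameterized sketch. This is not official QFTR code; the function name and structure are our own illustration, and only the ranges and bucket widths come from the two mini-range tables above.

# Minimal sketch of the "eyeball" conversion described above: divide the
# working range of a sub rating into ten equal mini ranges and return a
# simple 0-10 style score.

def eyeball_score(rating, low, high, buckets=10):
    """Translate a sub rating into 1..buckets; return 0 below the range
    and 'off the scale' above it (e.g. Phil Jackson)."""
    if rating < low:
        return 0
    if rating > high:
        return "off the scale"
    width = (high - low) / buckets          # 350 for playoffs, 250 for regular season
    score = int((rating - low) // width) + 1
    return min(score, buckets)              # keep the top boundary inside bucket 10

# Playoffs sub ratings use the range -1,000 to 2,500:
print(eyeball_score(2199, -1000, 2500))   # 10 (Larry Brown)
print(eyeball_score(-648, -1000, 2500))   # 2  (George Karl)

# Regular season sub ratings use the range -500 to 2,000:
print(eyeball_score(221, -500, 2000))     # 3  (Larry Brown)
print(eyeball_score(1053, -500, 2000))    # 7  (George Karl)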
EYEBALL INTERPRETATION EXAMPLE
We’ll use Larry Brown versus George Karl again:
Larry Brown / George Karl
Playoffs: 2,199 / -648
Regular Season: 221 / 1,053
Simplified to the single digits, we have:
Larry Brown / George Karl
Playoffs: 10 / 2
Regular Season: 3 / 7
We could then (unofficially!) make an overall comparison. Let’s use a 50% multiplier for the playoffs being more important than the regular season. We get:
Larry Brown Overall: (10 X 1.5) + 3 = 18
George Karl Overall: (2 X 1.5) + 7 = 10
Therefore, unofficially and roughly speaking, Larry Brown is almost twice as good a coach as is George Karl.
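The unofficial weighting in that example can be written out the same way. This is only a sketch of the arithmetic just shown; the multiplier can sit anywhere in the recommended 1.4 to 1.8 range (playoffs 40% to 80% more important), and 1.5 is simply the value used in the worked example.

# Minimal sketch of the unofficial overall comparison above.

def unofficial_overall(playoffs_digit, regular_season_digit, multiplier=1.5):
    """Combine the single-digit eyeball scores, weighting the playoffs more heavily."""
    return playoffs_digit * multiplier + regular_season_digit

print(unofficial_overall(10, 3))  # Larry Brown: 18.0
print(unofficial_overall(2, 7))   # George Karl: 10.0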
======= SECTION SIX: CAUTIONS INCLUDING THE WELL KNOWN EXPERIENCE GAP PROBLEM =======
Since the Real Coach Ratings system is essentially two systems / models combined only unofficially into one overall result, we will discuss cautions separately for the two separate models. For each we will discuss statistical error. All statistical models contain some statistical error, but the actual amount varies radically depending on (1) how good the model is, (2) how large any sample sizes used in the model are, (3) the real nature of what is being studied, especially how variable or “wild” the underlying reality is, and (4) the effectiveness of any quality control and results validation procedures. Good models have statistical error so low that you can rely on the results for years and have more than a 99% chance of never being led astray by them.
STATISTICAL ERROR IN THE REGULAR SEASON SUB RATING MODEL
How good the model is can always be argued, and naturally the designer will be at least a little biased in favor of his or her model. The QFTR Real Coach Ratings Regular Season Sub Rating model is based on two primary foundations: experience and wins and losses. For these, the largest downward bias is against newer coaches who have been coaching poor teams from the beginning. The largest upward bias is in favor of coaches who are poor in the playoffs but who have repeatedly been given above-average players to coach. Unfortunately, both of these biases are rather large and mostly unavoidable. This is one of the big reasons why the RCR system consists of the two sub ratings (regular season and playoffs) and why overall ratings are published but are not officially given a lot of weight in discussions.
The experience bias has been substantially reduced but not eliminated by progressively (in stages) eliminating experience points available to long veteran coaches. Also, at the low end, rookie coaches are given 200 games worth of experience from the get go.
A very substantial amount of experience bias remains, however. If all of the experience bias were removed, the experience factor would be meaningless. That would not make sense, because in many cases coaches do get a little better with experience.
The problem is that there are a fairly large number of exceptions to the rule that coaches get better with experience. A minority of coaches do not get substantially better with experience, either because they are brilliant coaches to begin with who can’t possibly get substantially better, or because they learn some wrong things from their experience and so on net they stay the same or actually get worse. Unfortunately, there is no known valid way to determine on a case by case basis how experience changes a given coach. You can not simply use changes in win-loss percentages over time, because (1) there are other variables that could explain all of those changes and (2) in most cases there are not enough such changes to constitute an adequate sample size.
With regard to the experience and the wins and losses foundations, the bad news is that we are left with a moderate amount of bias (as just discussed), but the good news is that we are left with no sampling error, simply because no samples are needed: one hundred percent of the information is available and is used. By contrast, other possible regular season coaching variables, such as the opinions of sports writers or players, are subject to very high bias and also to high statistical sampling error; QFTR would never condone the use of opinions in any of our models regardless of how many, or whose, we could get. The very fact that mere opinions are not valid is why we spend all this time on ratings systems such as RCR in the first place.
Unfortunately, moderate variability is believed to exist in how variable or “wild” the real nature of regular season coaching is. While you are never going to see a completely incompetent coach coaching an NBA team in the regular season, at the low end you will see moderately or “somewhat” incompetent coaches from time to time, and at the high end there will be brilliant coaches from time to time.
Very unfortunately, the RCR system by itself can NOT automatically identify brilliant regular season coaches who will be great playoff coaches. The regular season sub rating is not a fine enough instrument to accomplish that even if an attempt is made to flush out bias when looking at a particular coach. Further, if a brilliant coach is stuck with especially poor players, there will be little if anything that even he can do that will show up in anything you can easily see.
However, if you are looking for a great coach, RCR can in some cases point you in the right direction. For example, the playoffs sub rating might show that a brilliant coach has won one or two playoff games he was not supposed to win in just one or two series. Then you might be aware that the team he is coaching is doing better than most people expected. These two pieces of evidence (neither of which comes from the regular season sub rating system) would strongly suggest (but would not prove beyond a shadow of a doubt) that you have discovered a great coach.
The regular season sub rating receives the same high level of general quality control that all QFTR systems and ratings do. General quality control primarily means that the model as a whole, and everything specifically in it, are continually reviewed to make sure they match everything known about the underlying reality (in this case, pro basketball coaching). Quality control also means that all correlations in the model are supposed to closely match correlations in the real world. Recalibration and iteration are among the primary tools used to achieve quality control.
Note that, unlike many other basketball statistical models, QFTR models and systems are subject to continual revisions and expansions. However, as of late 2010, QFTR asserts that both the regular season and the playoff sub rating components of the RCR system are well and extensively developed and will not in the future be subject to major overhauls or major expansions. More specifically, most or all variables that can correctly be incorporated have already been correctly incorporated. Future changes will most likely be limited to relatively minor adjustments that will change results only a little.
Variable validation is where a specific key result has an average value (or perhaps some other statistical attribute) that is in accordance with model design. In the moderately complicated models QFTR uses, validation of key variables is often possible. But in most simple models, no such validation is possible. The regular season sub rating model is (intentionally) simple and there are no variables in it that can be or need to be statistically validated.
STATISTICAL ERROR IN THE PLAYOFFS SUB RATING MODEL
How good the model is can always be argued, and naturally the designer will be at least a little biased in favor of his or her model. The QFTR Real Coach Ratings Playoffs Sub Rating model is based first and foremost on efficiency of teams, which is extremely highly correlated with NBA playoff results. The model sets playoff expectations based on those efficiencies and then looks at actual results versus those expectations for each coach. QFTR is extremely confident that this is a very valid and strong model for correctly comparing coaches with respect to playoffs coaching.
WARNING: STATISTICAL ERROR IN PLAYOFF SUB RATINGS FOR COACHES WHO HAVE COACHED FEWER THAN 25 PLAYOFF GAMES MAY BE EXCESSIVE
This is the most important caution and warning! For coaches who have coached fewer than 25 playoff games, playoff sub ratings are calculated and published despite being subject to possible excessive statistical error, but no official recommendations are given. Therefore, if in any way you use playoff sub ratings for coaches who have coached fewer than 25 playoff games, do so with extreme caution. For these coaches, variances between expected and actual playoff results could be caused mostly or entirely by injuries rather than coaching. Therefore, to use sub ratings for coaches who have coached fewer than 25 playoff games, you would have to research which players didn’t play in the series to see if your inexperienced playoff coach was lucky or not with respect to injuries (to his players and to the players of his opponents.) The “manual injury adjustment” of the Real Team Ratings system can be used to do this. See the User Guide to Real Team Ratings.
Except for coaches who have coached few playoff games, especially fewer than 25, the variances between expected and actual playoff results are going to be due mostly to coaching. The other possible causes (injuries, and players playing better or worse than they did during the regular season for reasons unrelated to coaching) are going to mostly cancel themselves out statistically for coaches who have coached more than 25 playoff games, and are going to virtually completely cancel themselves out for coaches who have coached more than 50 playoff games. As the number of playoff games coached rises from 25 to 50, little of the difference will be due to the other factors and most of the difference will be due to the coaching. As the number of playoff games coached rises above 50, essentially all of the difference between expected wins and actual wins will be due to the coaching. Therefore, QFTR relatively confidently issues official recommendations for all coaches who have coached between 25 and 50 playoff games and QFTR extremely confidently issues official recommendations for all coaches who have coached at least 50 playoff games.
The next thing to look at is how variable or “wild” what we are looking at is in the real world. One of the very most important themes of the entire QFTR project is that coaches in the playoffs vary by more than most people think and by enough to easily change the outcome of close series. At the very least it can be said that coaches in the playoffs have a fairly high variability: the worst of them are much worse than the best of them. Roughly speaking, the worst playoff coach needs at least one more star player than the best playoff coach needs in order to have an even chance of beating the best coach.
But with respect to cautions, the real issue with regard to variability is not with what we are focused on as the end product but with other “wild” factors that could explain differences in ratings. Unfortunately, injuries are a very large wild factor. As already explained, this factor invalidates playoff sub ratings for coaches who have coached fewer than 25 playoff games and warrants caution regarding playoff sub ratings for coaches who have coached between 25 and 50 playoff games.
In order to validly use playoff sub ratings for inexperienced playoffs coaches, you must adjust the ratings for injuries. Using the manual injury adjustment as shown in the User Guide to Real Team Ratings is recommended for coaches who have coached between 25 and 50 playoff games and is required for coaches who have coached fewer than 25 playoff games. QFTR has no automated way to feed that adjustment into the playoffs sub ratings at this time; you have to devise your own adjustments to the playoffs sub ratings based on the injuries you find out about.
Another possible wild factor is players playing better or worse than they did in the regular season for reasons unrelated to the coaching. QFTR research indicates that this is a relatively minor factor which would not cause much statistical error at all except possibly for coaches who have coached fewer than 10 playoff games, and even for these coaches it would be unlikely.
Other than injuries and players better or worse “on their own”, there are no other known factors (other than coaching, obviously) that could explain differences between expected and actual results in pro basketball playoff games.
The playoffs sub rating receives the same high level of general quality control, including recalibration and iteration, that was described above for the regular season sub rating: the model and everything in it are continually reviewed against everything known about pro basketball coaching, and all correlations in the model are supposed to closely match correlations in the real world.
As noted earlier, variable validation is where a specific key result has an average value (or perhaps some other statistical attribute) that is in accordance with model design; such validation is often possible in moderately complicated models but not in most simple ones.
For the moderately complicated playoffs sub rating model, validation of one key variable is possible and has been done. This variable, expected versus actual wins for away teams, is at the core of the model, and it should have an average value of zero once the number of playoff games studied is large enough to eliminate any significant sample size error. The database contains enough playoff games that any error from sample size is extremely small, so that is not a problem. The reason the value should be zero is that the scale which translates differences in net efficiencies into expected numbers of wins is correct only if real world results produce a long term average of:
Expected number of wins minus actual number of wins of the away teams equals zero (or at least very close to zero).
Validation was performed and the initial result was that the scale and the model were slightly in error. After recalibration, validation was redone. Now, the sum of all of the expected wins minus the sum of all of the actual wins, divided by the number of playoff series, equals 0.108. This is extremely close to zero and additional recalibration is neither required nor recommended. However, later in 2011 another recalibration may possibly be performed. Alternatively, the home court adjustment may be very slightly tweaked.
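For readers who want to see the shape of that calibration check, here is a minimal sketch. The series records and field names are hypothetical stand-ins for the (unpublished) NBA Playoffs Series, Teams, and Coaches Database; only the logic (the average of expected minus actual away wins should be very close to zero) comes from the text above.

# Minimal sketch of the calibration check described above.

def away_team_bias(series_list):
    """Average of (expected away wins - actual away wins) over all series.
    A well calibrated scale should return a value very close to zero."""
    diffs = [s["expected_away_wins"] - s["actual_away_wins"] for s in series_list]
    return sum(diffs) / len(diffs)

# Toy data, purely illustrative:
sample_series = [
    {"expected_away_wins": 1.75, "actual_away_wins": 2},
    {"expected_away_wins": 2.40, "actual_away_wins": 2},
    {"expected_away_wins": 0.90, "actual_away_wins": 1},
]
print(round(away_team_bias(sample_series), 3))  # 0.017 for this toy data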
Other less important requirements for validity are that the overall range (of the scale) is correct and that the rates of change in various sections of the range are correct. The overall range has been verified as correct; specifically, if the difference in net efficiencies is twelve or greater, it is essentially impossible for the lower team to win even one playoff game in a series (unless there is a major injury to the higher team).
The rates of change in various sections of the range have not been completely and exactly verified because doing so is extremely difficult and time consuming. It is also unnecessary, because any possible error due to the rates of change in sections of the range is very small. Specifically, the highest possible error would translate into approximately five points (up or down) for a coach in a playoff series.
Now we will proceed to a few other cautions.
BE CAREFUL REGARDING THE VERY LARGE TIME SCALE OF THESE RATINGS
Keep in mind that each coach is rated using information from every season that he has ever been a head coach in the NBA. Some coaches will currently be substantially better than their overall career ratings indicate. On the other hand, it is very possible that a small number of current coaches could be substantially worse than their overall career ratings indicate. Much more likely would be that a very small number of coaches would be just slightly worse right now than they have been on average.
While I am on this subject, I want to warn you to not make the assumption that all or even most coaches get better as they accumulate more and more experience. Most coaches who have coached for less than five seasons will be getting at least a little better from one year to the next. Many coaches who have coached for between five and ten seasons will be getting a little better from one year to the next. Beyond ten years, very few coaches will be getting even a little better from one year to the next.
In any event, there is no empirical evidence I know of to back up a sweeping generalization stating that coaches always get better with experience, nor is that assumption obvious or even likely to be true most or much of the time.
It is very plausible that most coaches do not really improve that much after roughly five or six years of experience. One thing that might prevent the more experienced coaches from automatically getting better is that many of the most experienced coaches may not have completely updated their beliefs and coaching schemes to reflect the current ways of basketball. Some older coaches may not have fully adjusted to the rule changes of recent years, for example. They may be hurting their teams a little or even a lot by persisting with strategies and tactics that used to work well years ago but are not working very well in the NBA in 2011 and 2012.
THE INFAMOUS WIDELY DIFFERENT AMOUNTS OF EXPERIENCE PROBLEM
In the very early days of RCR back in 2007, it was feared that the widely different amounts of experience among NBA coaches would doom the system to either total failure, or at the very least, to being much less valid and reliable than Real Player Ratings are. This problem originates in the huge discrepancies in the amount of experience between long-term veteran coaches and much younger coaches. To some extent this makes comparing NBA coaches like trying to compare apples and carrots rather than like trying to compare various apples.
In general, some points of comparison will be biased in favor of newer coaches while other points of comparison will be biased in favor of long veteran coaches.
As recently as 2009 QFTR was still very worried about this. But after several years of thinking about the problem and introducing changes to RCR in response to it, we think we have now largely “solved” it. That is, we think now that the ratings and the evaluations based on the ratings that we publish are fair and unbiased to all coaches regardless of their experience level.
The following aspects of RCR largely solve the “apples and carrots problem”:
(1) The experience points available for regular season games (for the regular season sub ratings) differ depending on the experience level of the coach. Long veteran coaches get virtually no experience points; newer coaches get the maximum. Coaches in the middle get about half way between the maximum and the minimum.
(2) Rookie coaches are given 200 experience points from the get go, which eliminates the experience bias that would otherwise exist against those brand new coaches.
(3) No experience points are given for any playoff game coached beyond 200 playoff games. This cuts down on the bias in favor of the long veteran coaches who have coached the most playoff games.
(4) No evaluation scales, no official evaluations not using scales, and no official recommendations are produced or given for the overall ratings. At this time, only unofficial usage of overall ratings is done. This is obviously a powerful way to respond to the problem; it’s basically a divide and conquer strategy, where the overall ratings exist but are largely ignored in favor of the two sub ratings that add up to the overall ones. The main reason why ignoring the overall ratings is advised is that certain long veteran coaches are poor playoff coaches but they are decent to good regular season coaches and they also have a lot of regular season experience points. Therefore, the overall ratings of these coaches are very misleading when it comes to the playoffs.
Even though QFTR does not officially use the overall Ratings, we unofficially do, and may officially use them if and when a valid way to precisely calibrate the regular season and playoff sub ratings becomes available. The following cautions apply to the overall ratings.
CAUTIONS REGARDING THE OVERALL REAL COACH RATINGS
Where we are right now on the overall ratings is that we still have a small problem left with the experience discrepancy problem. In a nutshell, in the overall ratings we decided to take the risk that the problem is not completely solved so as to avoid being overly harsh toward certain long-term coaches. "First, do no harm..." Although many hours have been spent trying to solve the problem, and although much progress has been made, the RCR system still can not completely bridge the gap created by the huge differences in experience.
The worst of the long-term veteran coaches most likely have overall ratings that are higher than what they really should be. If a Coach has received some "lucky breaks" by not being fired after bad losing seasons, and/or after bad losses in the playoffs, and he has over the years now accumulated 1,000 or more regular season games and 100 or more playoff games, his rating will very likely still be distorted on the high side relative to the other coaches. This is because the long-time veteran Coach, who could have been fired a long time ago but was not fired, will max out on the experience points, and he will also have a few winning seasons to go with the losing seasons. The sum of the maximum experience points plus any positive net from winning seasons will tend to more than offset all the losses from the year(s) he might have been fired, despite the heavy negatives that losses carry.
Another way of thinking about this issue: if a long-term veteran Coach has a rating that is too high for the reasons above, keep in mind that he would not even be in the ratings had he actually been fired. Coaching a professional sports team is about the worst job in existence for job security, since the vast majority of coaches are eventually fired. If all coaches who were “supposed to be” fired had actually been fired, this distortion would disappear from the RCR system!
Yet another way of focusing on this problem is realizing that pro basketball coaches are fired or not fired based on different criteria, because managers and owners of pro teams do not all think in similar ways.
We can not simply remove experience from the set of factors, since in every single career that exists, the more experience you have, the better you tend to be. Moreover, even if we did reduce or remove the experience factor, the same problem would still be there in the case of coaches who probably should have been fired, but are not and then end up fortunately coaching very skilled teams in subsequent years, thus piling up wins with those teams.
In other words, we have no choice but to proceed as if all coaches face the same criteria as to whether they are fired or not, even though we know that some coaches, especially veteran coaches, are treated much more leniently than others.
CAUTION ABOUT THE AGE OF COACHES
One other thing to keep in mind about long-term veteran coaches (the ones with more than 1,000 regular season games coached) is that once such a Coach gets older than 60, 65, and then maybe even 70 years old, that Coach's abilities will probably be less than they were when he was younger. By contrast, almost all coaches with little experience are under the age of 55.
For example, Utah Jazz Coach Jerry Sloan is 68 years old on March 28, 2010, so it is possible that he is a little too old now for maximum effectiveness.
The bottom line is that there will be a small number of older, veteran coaches whose ratings are misleading on the high side. Unfortunately, we are unable to completely correct for this or to properly estimate the amount of the unavoidable distortion at this time. So we advise you when looking at the ratings to make sure you give the benefit of the doubt to younger coaches who seem to have good potential.
PROBABLE DOWNSIDE DISTORTIONS IN THE OVERALL RATINGS
If you have a younger coach who has just started out, and he has a bad team to start with (and a lot more new coaches start with bad teams than good ones) then his rating will be much lower than it will be in future years if he avoids getting fired and in the future gets much better teams to work with.
However, it is also very possible that in most cases the worst teams get only the medium and poor coaches, that in other words the really good coaches never have to start out coaching a bad team, so that any downside distortions are small and mostly moot points.
Here is an interesting excerpt from what was probably the very first User Guide for Real Coach Ratings, written in 2008 when I tackled the big experience differences problem for the first time:
“As I was working on this I often had a sinking feeling that trying to fairly compare coaches with more than 10 years of experience with those with less than 2 years experience would be in the end impossible. But I persevered and scrapped and fought my way to the goal line and got it done. I achieved all of the balancing that I needed to achieve. Specifically, for example, I kept the points given for experience within reason, while making sure that regular season and playoff losses were penalized to the full extent they should be.”
FUTURE CHANGES TO REAL COACH RATINGS
Are the factors set in stone forever and ever? No, and unlike many sites that make use of statistics, QFTR will make radical changes in models and procedures whenever new basketball discoveries are made. But the odds are that changes to the RCR system will be relatively minor in future years, with one notable exception. As you may already be aware, QFTR will try in the future to develop a valid way to combine the regular season sub ratings and the playoff sub ratings, so that the overall ratings are considered completely valid and official. The only way to do this is to achieve a total solution to the experience discrepancy problem.
In summary, although this is not a perfect system, it is at the very least a very good system, and it is light years ahead of having no system at all with which to fairly compare coaches of radically differing amounts of professional basketball head coach experience. In fact, as surprising as this may sound, RCR is literally the only known coach rating system publicly published that is based on sound statistics, sound statistical modeling, real information, and extensive quality control.
Quest for the Ring is proud and pleased to present what is apparently the world's first serious effort to scientifically and accurately rate and rank all of the current NBA head coaches. Even the academic oriented basketball statistics sites do not have any formulas or specialized ratings for coaches, although some of them thank goodness keep track of basic coach data including wins and losses.
The QFTR coach rating product is called Real Coach Ratings (RCR). The first edition of these annual ratings, which compared to the latest version was relatively crude (and yet still much more than mere opinion), was published in October 2008. The second edition, which featured substantial but relatively modest improvements over the 2008 edition, was published in early December 2009. In late November 2010 the third edition, which featured relatively large scale and important improvements over the 2009 edition, was published. At this time it is not known to what extent it will be desirable and possible to improve RCR further, but there is a fairly high probability that most and possibly all future changes to and expansions of RCR will be small compared with the changes and expansions of 2010.
Why should the coaches hide behind a black curtain as they do in the USA? Concerning coaches, there is virtually a total lack of the kind of statistical comparing and contrasting that goes on with players 24/7. To say there is a double standard where players get the short end of the stick would be an understatement. Coaches can get away with relative incompetence and negligence for many years, in some cases indefinitely, whereas players will within days, weeks, or a few months at the most have their minutes cut at the least, and they can easily be bounced around the NBA or demoted to some other League. When QFTR started to rank and rate Coaches in 2008, it was way, way overdue that someone did it.
The big Corporation sites such as ESPN have editorial limitations which prevent them from being severely critical of NBA head coaches, managers, or owners. ESPN writers can be mildly critical at the most (which in practice means they have to hint at criticism rather than directly criticize). For heavy criticism of NBA coaches, managers, or owners, you have to go somewhere other than ESPN, CBS Sports, Fox Sports, and NBC Sports. As one of many examples, you might see some heavy criticism at SlamOnline.com. And then even when you do venture elsewhere and see some heavy criticism of coaches, managers, or owners, you are most often going to see only opinions as opposed to conclusions based on hard research. I mean, if you are lucky, the opinions are dead on accurate, but since there is little if any evidence from research backing up those opinions they could easily be wrong. Here at QFTR it is the reverse: you seldom will see a mere opinion and most of what you see are conclusions backed up by valid and adequate research.
I can pretty much guarantee you that no one has ever, even with the capabilities created by the Internet age, put in as much effort, thought, and technology as QFTR has into fairly comparing NBA coaches with widely different lengths of time spent in professional head coaching. Despite the fact that QFTR has little or no competition for coach ratings, it applies full scale quality control to RCR and provides a very detailed User Guide that exceeds 20,000 words. And the Real Coach Ratings (RCR) system CAN be used in other Leagues, other countries, and on other planets, if there are any other basketball planets, that is!
The Real Coach Rating system has been extensively improved in the second half of 2010. The biggest improvement is the new factor called "Playoffs Games Coaching Score". A lot of time went into developing this factor, much of which went into developing an underlying database called the "NBA Playoffs Series, Teams and Coaches Database". This database consists of every playoff series played since 1980 except for the sixteen best of three first round playoff series played from 1980 through 1983.
To summarize simply, for each series a statistically valid estimate of how many games should have been won by each team is calculated (to two decimal places, for example, 3.25 wins), and then the actual number of wins is compared to this estimate and either a positive or a negative score is derived from the difference.
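A stripped-down sketch of that comparison follows. The field and function names are hypothetical; the actual expected-wins formulae live in the database described below and in Section 3 of the Real Coach Ratings User Guide.

# Minimal sketch of the per-series comparison described above.

def series_score(expected_wins, actual_wins):
    """Positive when the coach won games he was 'supposed to lose',
    negative when he lost games he was 'supposed to win'."""
    return actual_wins - expected_wins

def net_wins_above_expectation(series_list):
    """Sum the per-series scores for one coach, e.g. the kind of figure
    reported as 'Net Playoff games WON that should have been losses'."""
    return sum(series_score(s["expected_wins"], s["actual_wins"]) for s in series_list)

# Toy example: a coach expected to win 3.25 games who actually won 4
print(series_score(3.25, 4))   # 0.75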
THE NBA PLAYOFFS SERIES, TEAMS, AND COACHES DATABASE
In 2010 Quest for the Ring developed a database which has details about virtually all playoff series of the world’s premier pro basketball League, the NBA, from 1980 to the present. The number one reason why the database was developed was so that RCR could be substantially improved. Specifically, one of the main objectives for creating this database was to identify which coaches of pro teams win more games in the playoffs that they “were supposed to lose” than they lose games that they were "supposed to win" (net playoff winners). And of course, we also want to find out which coaches lose more playoff games they were supposed to win than they win playoff games they were supposed to lose (net playoffs losers). (And of course there are some coaches who win some that should have been losses and lose some that should have been wins whose overall record on that is about even up.)
In late November 2010 and in very early 2011 much of the information that can be obtained from the database was published in various Reports. See especially:
“NBA Playoffs Upsets: How Many are There and Why do They Happen?”
“Real Coach Ratings for the NBA, 2010-11, Look Ahead”
“Official NBA Coach Recommendations: Can the Coach of Your Team Win the Quest or Not?”
Note however that the actual database has not been published and is not scheduled to be at this time. Not all of the information that can be obtained from this database has been published in Reports yet. And although QFTR has more and more in recent years published Excel worksheets that are products of databases or in effect are micro databases, the templates for the largest databases can not be published due to risks associated with copyright violation. The QFTR public email address can be used for inquiries about how someone could possibly obtain a copy of the database and about the terms of use for it. For the email address, at the QFTR home page, click the “Contact” link that is on one of the horizontal menus just under the banner.
Using what is formally known as the “NBA Playoffs Series, Teams, and Coaches Database", and also using knowledge about statistics and basketball, it has been proven beyond a shadow of a doubt that some coaches are better in the regular season than they are in the playoffs. Actually, to be more precise, the playoff losing coaches are ones who have their teams playing in ways that lead to relatively more wins in the regular season than in the playoffs. And vice versa: coaches who win extra games in the playoffs have selected strategies and tactics that work better in the playoffs than they do in the regular season.
This is not really all that surprising as long as you know that the game of basketball itself changes a little in the playoffs from what it is in the regular season. The rules stay the same and to the untrained eye it may seem like the same game, but in reality the way it is played changes a little and the way the referees call games changes a little. Although most people do not know all of the details of the changes (the magnitudes and the components and so forth) most people are aware in general terms that defending is more important in the playoffs than it is in the regular season. To state it a little differently, most people are aware that many if not most teams ramp up their defending for the playoffs; they play defense more aggressively, more energetically, more athletically, and sometimes smarter.
Defending can be improved almost overnight through will and effort. But this is not really true with offense. Here it’s appropriate to insert a few paragraphs from the User Guide to Real Team Ratings:
DO NOT MAKE THE MISTAKE OF OVERSTATING THE IMPORTANCE OF DEFENSE
But don’t fall into a trap here; don’t get carried away. In basketball, defense is relatively less important than it is in many and very possibly most other sports. Basketball is designed to be a game that favors the offense more than many, many other sports do.
The tightrope here is that on the one hand you have to realize that defense is more important in the playoffs than it is in the regular season. On the other hand you have to understand that in basketball exactly how important the defense can be is limited fairly strictly. Defense alone can not possibly win you a Championship in basketball.
By contrast, in American pro football the limitations on how important the defense can be are far weaker, meaning that unlike in basketball, you can win the Super Bowl Championship in football pretty easily with the best defense in the League but a below average offense. For example, the Pittsburgh Steelers have done this several times over the years. But in basketball it is extremely difficult (and you are going to need some luck) to win the Championship with even the best defense in the League but only the 20th best offense (out of 30). What you really need in basketball to go along with the best defense in the League is at the very least the 15th best offense (out of 30); and to have a good chance you need at least the 10th best offense to go along with the best defense.
So even though in basketball defense is more important in the playoffs than it is in the regular season, the magnitude of the change is not really all that large; in basketball defense is only a little more or, arguably in some cases, moderately more important in the playoffs than in the regular season.
Note also that, ironically, the teams that are the very best defensively in the regular season are unable to increase the quality of their defending in the playoffs as much as teams that come into the playoffs with lower ranked defenses. Coming into the playoffs, teams with one of the best two or three offenses in the League but whose defenses are down around 10th best are generally more likely to win the Championship than teams which come in with one of the top two or three defenses but only about the 10th best offense.
It’s obvious that teams have the opportunity to be better defensively in the playoffs than they were in the regular season; after all, this happens all the time. Defensively in the playoffs, it’s mostly a matter of doing the same things that were done in the regular season harder, faster, and/or smarter. But the opportunity for a team to be better offensively in the playoffs than it was in the regular season is very limited. In other words, offensively, what you saw in the regular season is pretty much all you are going to see in the playoffs. Teams should assume they can improve a little defensively but they should never ever assume they can get substantially better offensively when the playoffs come, because that is unlikely to happen.
This is indirectly another reason why teams that run slightly organized offenses are much smarter and more likely to win The Quest for the Ring than teams that run more street ball type offenses. Coaches who run street ball type offenses often think that that strategy will work better in the playoffs than in the regular season. They may think that, unlike a slightly organized offense, a street ball type offense can be ramped up in the playoffs. And they may think that a street ball type offense is exactly what you want to offset the ramped up defenses you see in the playoffs.
All of these suppositions are false to one extent or another. First, street ball type offenses work less well in the playoffs against ramped up defenses than they do in the regular season against lesser defenses. Second, you can not substantially ramp up any type of offense in the playoffs, including the street ball type. For offense, more so than defense, it is crucial that in the regular season you are playing in a way that will allow you to win in the playoffs. For defense, playing that way in the regular season is strongly recommended in theory but not required. Third, ramped up defenses are relatively more effective against street ball type offenses than they are against slightly organized offenses.
For convenience, this Guide is divided into main sections and subsections. The main sections are:
Section 1 Introduction (Which ends here)
Section 2 Components of and Format of Real Coach Ratings Reports
Section 3 Discussion of and Calculation of Factors used for the Playoffs Sub Rating
Section 4 Discussion of and Calculation of Factors used for the Regular Season Sub Rating
Section 5 Interpretation of Ratings and Evaluation of Coaches
Section 6 Cautions Including the Well Known Experience Gap Problem
Within each section subsections are in all caps as shown.
======= SECTION TWO: COMPONENTS OF AND FORMAT OF REAL COACH RATINGS REPORTS =======
Starting in 2010 QFTR produces two Real Coach Ratings Reports. One of them, scheduled for August, is called the "Look Back Version" which, as the name implies, gives the ratings for all the head coaches from the season just gone by. The other one, scheduled for October, is called the “Look Ahead Version” which, as the name implies, gives the ratings for all the head coaches as the new season gets underway.
Note that QFTR has data that would allow a rating to be calculated for any coach who ever coached any playoff series in 1980 or later (including retired and deceased coaches). This information will be published as time permits in future years. A total of 89 coaches have coached at least one playoff series since 1980, all of whom are in the database.
For anyone who has seen a prior Coach Ratings Report, you will see that the format of the report has changed and that the Report is even bigger than before. Yes, this Report is longer than most, but the length is justified because if a team has the wrong coach it is going to be wasting money and wasting player talents. For any of the worst playoffs coaches, winning the NBA Championship is literally impossible unless perhaps they end up with one of the very best teams of all time, and even then the poor playoffs coach might still lose the Championship.
The RCR Reports are now divided into three primary sections:
--Rankings
--Key Details About Coaches
--Coach by Coach Details
Each primary section is divided into sub sections (which are themselves sometimes divided into sub sections of the sub sections).
The Rankings Section of a RCR report is the core of the Report, and there are three sub sections for it, all of which are rankings:
--Real Coach Ratings (overall)
--Real Coach Playoffs Sub Ratings
--Real Coach Regular Season Sub Ratings
The second of the three primary sections of a RCR report, the Key Details About Coaches Section, contains four sub sections:
--Listing by team of coaches who appear in the report
--Coaching changes by team (appears in the Look Ahead Version)
--Coaches who QFTR guarantees will never win The Quest for the Ring (and those coaches close to this status)
--Coaches who have never coached any NBA playoff games (who because of this have a Playoffs Sub Rating of zero)
The first two and the fourth of these four are self-explanatory. For the criteria used to declare that a coach will never win the Quest, see “Section 5: Interpretation of Ratings and Evaluation of Coaches” below.
The third of the three primary sections of a RCR Report, the Coach by Coach Details Section, consists of numerous facts about all the coaches. The coaches are presented alphabetically by team. Let’s look at an example to see what information can be found here. Most of the information is self-explanatory. We’ll use Larry Brown, coach of the Charlotte Bobcats for 2010-11:
CHARLOTTE BOBCATS
COACH: LARRY BROWN
Real Coach Rating: 2420.14
Rank Among 2010-11 Coaches: 2 out of 30
PLAYOFFS / REGULAR SEASON BREAKDOWN
Playoffs Rating: 2199.00
Playoffs Rank: 2 out of 30
Regular Season Rating: 221.14
Regular Season Rank: 14 out of 30
PLAYOFFS DETAILS
Playoffs experience: Number of playoff games coached: 193
Net Playoff games WON that should have been losses: 16.1
How many EXTRA playoff games this coach will WIN out of 100: 9.4
NBA Championships won: 1
Number of times this Coach won a Conference final but not the Championship: 2
REGULAR SEASON DETAILS
Games coached with current team: 164
Regular season games coached: 1974
Regular season wins: 1089
Regular season losses: 885
As you can see, most of this is self-explanatory.
As for the more mysterious items, first note that the overall Real Coach Rating equals the sum of the Playoffs Sub Rating and the Regular Season Sub Rating. One of the many interesting things about RCR is that you can easily see that some coaches have much higher playoffs ratings than they do regular season ratings (like Larry Brown does) whereas other coaches have much higher regular season ratings than they do playoffs ratings (like George Karl does). This is more proof of what QFTR talks about all the time regarding how the playoffs are more different from the regular season than most people think and regarding how some coaches are good for the regular season but bad for the playoffs.
In the Playoffs Details area, there are two things that are going to be mysterious because most likely no one has ever calculated such a thing until now. The first item is this one: “Net Playoff games WON that should have been losses: 16.1”. This is not a rate but instead an absolute and actual number. It is neither a directly observable number nor a certain number, but rather a number derived from the model used in the playoffs database.
Why is this number valid? QFTR strongly endorses the database, all its components including its formulae, and all results derived from the database. To see if you agree with QFTR and for all of the details about the database and about how information is derived from the database, see "Section 3: Discussion of and Calculation of Factors used for the Playoffs Sub Rating" below.
This first of the two mysterious items, the number of wins that were supposed to be losses (or the number of losses that were supposed to be wins), is information which is free of the kind of statistical error involved with rates discussed immediately below, and so QFTR publishes it for all coaches who have coached at least one playoff game. But there is another kind of statistical error involved, so extreme caution is warranted when evaluating coaches who have coached fewer than 25 playoff games. See Section Six for complete details.
The actual real life absolute minimum a coach in the database could have coached is three playoff games. Although the database begins with 1979-80, it excludes all four of the first round playoff series played each year from 1980 through 1983 because those were best of three series, which are so short that the database model used to determine unexpected wins and losses is not statistically valid. In a best of three, whichever team wins two games first wins the series.
Note that the format of the playoffs has changed several times since 1980. From 1980 through 1983 only twelve teams made the playoffs: there were just four first round series, and these were best of threes, while four teams were given first round byes; the teams with byes played the winners of the round one series in round two. In 1984 the playoffs format was changed extensively. The field expanded to sixteen teams, there were now eight first round series instead of just four, and they were best of five rather than best of three games. There were no more byes starting in 1984. Both prior to and after 1984, rounds after round one were all best of seven series.
Starting from 1984, all series (including the round one best of fives) are included, since the model can be used without excessive statistical error for best of five series, where whoever wins three games first wins the series. The last year the round one best of five was employed was 2001-2002. Starting the next year and continuing through the present, round one series have all been best of sevens (and of course all the other rounds have remained best of sevens).
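The inclusion rule just described can be summarized in a short sketch. The field names are hypothetical; only the rule (series from 1980 onward are included except best of three first rounds) comes from the text.

# Minimal sketch of the series inclusion rule described above.

def series_included(year, best_of):
    """Return True if a playoff series is included in the database model."""
    if year < 1980:
        return False
    if best_of == 3:   # the 1980-1983 first round mini-series are excluded
        return False
    return True        # best of five (round one, 1984-2002) and all best of sevens

print(series_included(1982, 3))  # False
print(series_included(1995, 5))  # True
print(series_included(2010, 7))  # True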
If a coach has not coached any playoff games, this is clearly evident because it is reported this way (in the Coach by Coach Details Section):
Playoff games won that should have been losses: 0
Playoff games lost that should have been wins: 0
Going back to the Larry Brown example for the Coach by Coach Details Section of the RCR Report, the second mysterious item, which is right below the first, is this: “How many EXTRA playoff games this coach will WIN out of 100: 9.4”. (Or it could tell you how many EXTRA playoff games the coach will LOSE out of 100.) This is a rate with the actual number of extra wins or losses as the numerator and the actual number of games coached as the denominator. The words “extra” and “lose” or “win” are in all caps to make the coach detail section easy to read or skim through.
All rates calculated with relatively small amounts of data based on real events have relatively high statistical errors, and the statistical error increases sharply for very small amounts of data. To avoid reporting rates that are likely to be in error, QFTR does not publish rates for any coach who has coached fewer than 25 playoff games. For these inexperienced coaches, instead of a rate, you will see:
“The extra playoff games this coach will win or lose out of 100 is not reported for this coach due to insufficient number of playoff games.”
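Here is a minimal sketch of that rate and the 25-game publication cutoff. The names are our own illustration, not QFTR code, and the toy inputs are not taken from any published coach line; the published figures may reflect additional adjustments described elsewhere in this Guide.

# Minimal sketch of the "extra playoff games per 100" rate described above.

MIN_PLAYOFF_GAMES_FOR_RATE = 25

def extra_wins_per_100(net_extra_wins, games_coached):
    """Rate = net extra wins (or losses, if negative) per 100 playoff games.
    Returns None when the sample is too small for the rate to be published."""
    if games_coached < MIN_PLAYOFF_GAMES_FOR_RATE:
        return None  # "not reported ... due to insufficient number of playoff games"
    return 100.0 * net_extra_wins / games_coached

print(extra_wins_per_100(9.4, 100))  # 9.4 extra wins per 100 games
print(extra_wins_per_100(2.0, 12))   # None: fewer than 25 playoff games coached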
Remember that both of these very important numbers come directly from the NBA Playoffs Series, Teams, and Coaches Database. For further details, see Section 3: Discussion of and Calculation of Factors used for the Playoffs Sub Rating below.
Note that even those who disagree with the innovative QFTR evaluation measures and those who are not sure and don’t have time to evaluate the model can make extensive use of the raw data that is in the Coach by Coach Details sub section. But you can only do this with the regular season details because the playoffs details are almost entirely made up of custom designed information and the simple playoffs wins and losses are NOT published by QFTR.
Quite frankly, the raw playoffs wins and losses are not only inferior to what QFTR does publish; they have very little information value in general. Unless you know what the playoffs record was “supposed to be”, you can’t do much of anything with the raw wins and losses (or with the raw percentage of wins in the playoffs). For one thing, there are radical differences in how many playoff games different coaches have coached. Another problem is that many coaches have coached too few games to support any judgments based on raw wins and losses alone. Yet another problem (and it is a big problem) is that different coaches average different quality of players over their playoffs coaching careers. All of these problems are tackled and largely or completely solved by the QFTR methodology.
WHY THE SUB RATINGS ARE NEEDED AND ARE AT LEAST AS IMPORTANT AS THE OVERALL RATINGS
As you already know, the RCR system involves two sub ratings that combine to give the overall coach ratings. With all other QFTR systems the overall rating is more important than any of the sub ratings. With Real Coach Ratings, though, the playoffs and regular season sub ratings are by themselves extremely important and at this time are considered more valid than the overall ratings; the reasons are rather involved and are discussed in “Section 5: Interpretation of Ratings and Evaluation of Coaches”. QFTR thinks that the playoffs sub ratings are more important than either the regular season sub ratings or the overall ratings, but of course QFTR is biased because it is focused like a laser on the NBA playoffs and championship.
NUMERICAL PARAMETERS OF RATINGS AND SUB RATINGS
Only a handful of coaches (who are likely the worst coaches) have overall Real Coach Ratings below zero. Unlike Real Team Ratings, where all the ratings average out to about zero (and where the teams not likely to make the playoffs have negative scores), with Real Coach Ratings the vast majority of the coaches have positive ratings. And many if not most of the coaches who end up with negative ratings are going to be only slightly below zero.
One of the ways the QFTR system is validated is that it is much more likely for coaches with low and negative ratings to be fired than ones with higher ratings.
But the firing of coaches with negative ratings is far from automatic. Unfortunately, some teams persist with coaches who have negative ratings who in many cases could not possibly win The Quest for the Ring, and in some cases can never be and will never even be truly successful regular season coaches either. Apparently, managers and owners have a whole lot of difficulty evaluating coaches, something which is not surprising here at QFTR given all we have discovered and proven.
Let’s look at the average, the median, and the range of the overall ratings and of the two sub ratings.
In the November 2010 (like many QFTR Reports it was a little late) Look Ahead Version, the average Real Coach Rating is 706 and the median is 275. The highest rating is 8,801 (Phil Jackson, with Larry Brown the second highest at 2,420). The lowest overall rating is -326 (Mike D’Antoni). Twenty five coaches have overall Real Coach Ratings above zero and five coaches have ratings below zero.
In the November 2010 Look Ahead Version the average playoffs sub rating is 227 and the median is 0. The highest playoffs sub rating is 6,035 (Phil Jackson, with Larry Brown the second highest at 2,199). The lowest playoffs sub rating is -793 (Rick Carlisle). Eleven coaches have playoffs sub ratings above zero and twelve coaches have playoffs sub ratings below zero. Seven coaches, all of whom have never coached an NBA playoff game, have playoffs sub ratings of exactly zero.
In the November 2010 Look Ahead Version the average regular season sub rating is 479 and the median is 201. The highest regular season sub rating is 2,766 (Phil Jackson, with Gregg Popovich second at 1,884). The lowest regular season sub rating is -107 (Lionel Hollins). Twenty-eight coaches have regular season sub ratings above zero. Two coaches have regular season sub ratings below zero.
We presented those numbers not only to make using the 2010 reports easier, but also because, unlike in earlier versions, those parameters are unlikely to change much in the future.
======= SECTION THREE: DISCUSSION OF AND CALCULATION OF FACTORS USED FOR THE PLAYOFFS SUB RATING =======
Mechanically, the playoffs sub rating is simply the rating you get when you factor in only the playoffs-related factors. The playoffs sub rating consists of the following factors which will be discussed in detail in order:
(1) Playoff games coached
(2) Championships won
(3) Conference Titles won (but where the Championship was not won)
(4) Playoff Games Coaching Score
This list is deceptively short because the fourth item actually requires numerous components, and it has a very sophisticated database backing it up and validating it. If those components were listed separately, the total number of components comprising the playoffs sub rating would depend on exactly how the system was broken down, but it would be at least ten.
1 PLAYOFF GAMES COACHED
This is also known as the playoffs experience factor. This is very simple: two points are awarded for every playoff game coached regardless of result.
The limit is 200 playoff games. There will most likely never be a coach who benefits in any significant way from getting more playoff coaching experience beyond 200 games. Coaches who have coached more than 200 playoff games are going to be older, very veteran coaches who are extremely unlikely to change how they coach.
Also, the number of coaches coaching currently who have coached more than 200 playoff games is always going to be a tiny number. As of January 2011, there are only three current coaches who are close to or over 200 playoff games coached:
Phil Jackson 323
Jerry Sloan 202
Larry Brown 193
Coaches such as these already know as much as they ever will know about winning NBA playoff games. If some of their beliefs are wrong, everyone is going to have to live with that, because coaches this experienced are not going to change their ways after all this experience spanning many, many years. And unfortunately, it is very possible for even coaches this experienced to have false beliefs about how playoff games and championships are won. QFTR has hard, smoking gun evidence to prove that; see Section 5 of this Guide and see also various Reports at QFTR.
2 CHAMPIONSHIPS WON
100 points are added for each Championship win. It is always 100 points regardless of how many games the Championship consisted of. These points are first and foremost awarded for merit, but they can also be looked at as extra points given for extremely valuable experience. Counting the two points every coach gets for experience for every playoff game (assuming fewer than 200 playoff games have been coached) and assuming an average Championship of about six games, the total points per Championship game (where the Championship is won) is approximately nineteen (100 divided by six is roughly seventeen, plus the two experience points per game).
3 CONFERENCE FINALS WON BUT THE CHAMPIONSHIP IS NOT WON
50 points are given to each coach who wins a Conference Final but loses the Championship. It is always 50 points regardless of how many games the Conference Final consisted of and regardless of how many games the Championship consisted of. These points are first and foremost awarded for merit, but they can also be looked at as extra points given for extremely valuable experience. Counting the two points every coach gets for experience for every playoff game, and assuming an average Conference Final of about six games, the total points per Conference Final game (in a Championship-losing year) is approximately ten (50 divided by six is roughly eight, plus the two experience points per game).
There is no bonus for mere losing appearances in the conference finals. Only two playoff series need to be won to merely reach these finals, and either an extra outstanding bunch of players and/or mere luck could in many cases allow a team with even a bad playoffs coach to fairly easily reach a Conference Final.
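Taken together, the first three factors are simple arithmetic. Below is a minimal sketch in Python (purely illustrative; QFTR's actual system lives in Excel and its database) showing how they would combine with the fourth factor, the Playoff Games Coaching Score, which is explained in the rest of this Section. The function and parameter names are my own labels, not QFTR terminology.

def playoffs_sub_rating(playoff_games_coached,
                        championships_won,
                        conference_titles_without_championship,
                        playoff_games_coaching_score):
    # Factor 1: two points per playoff game coached, capped at 200 games (400 points).
    experience = 2 * min(playoff_games_coached, 200)
    # Factor 2: 100 points for each Championship won, regardless of series length.
    championship_points = 100 * championships_won
    # Factor 3: 50 points for each Conference Final won where the Championship was then lost.
    conference_points = 50 * conference_titles_without_championship
    # Factor 4: the Playoff Games Coaching Score, calculated from the QFTR Playoffs Database.
    return experience + championship_points + conference_points + playoff_games_coaching_score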
PLAYOFF GAMES COACHING SCORE
This last of the four factors making up the Playoffs Sub Rating is by far the most important one. This is where all of the good, successful playoffs coaches are going to get most of their points from. On the flip side, this factor is where the bad playoff coaches get heavily penalized up to and including cases where they end up with a very negative playoffs sub rating despite having a lot of experience.
The following will take you on a little journey whose destination is the Playoff Games Coaching Score. This score is calculated for each playoff series and for each coach. The key to the score is statistically determining (for each coach and for each series) the exact number of playoff games won that were supposed to be losses, and also the exact number of playoff games lost that were supposed to be wins. All of this is calculated using the QFTR Playoffs Series, Games, Teams, and Coaches Database, or QFTR Playoffs Database for short.
THE QUEST FOR THE RING PLAYOFFS DATABASE
The QFTR NBA Playoffs Series, Teams, and Coaches Database has every playoff series played beginning with the 1979-80 year through the present (2010) except for sixteen best of three series played from 1980 through 1983 (four of them each year). Why these were excluded was explained in Section 2 above. As of 2010 there are 433 NBA playoff series in the database.
For each playoff series, there are 22 primary information items:
DATABASE ITEM ONE: The Year (the series was played)
DATABASE ITEM TWO: The Round; in all years there were four rounds, but round one series played from 1980 through 1983 are not included as explained earlier.
DATABASE ITEM THREE: Away Team; this is the team that does not have the home court advantage
DATABASE ITEM FOUR: Offensive Efficiency of the Away Team: This is the average points scored per 100 possessions (in the regular season leading up to the playoffs).
DATABASE ITEM FIVE: Defensive Efficiency of the Away Team: This is the average points given up per 100 possessions (in the regular season leading up to the playoffs).
DATABASE ITEM SIX: Net Efficiency of the Away Team: This is Offensive Efficiency minus Defensive Efficiency for the Away Team. This can either be a positive or negative number, but most playoff teams have positive net efficiencies and most teams that do not make the playoffs have negative net efficiencies.
DATABASE ITEM SEVEN: Offensive Efficiency of the Home Team: This is the average points scored per 100 possessions (in the regular season leading up to the playoffs).
DATABASE ITEM EIGHT: Defensive Efficiency of the Home Team: This is the average points given up per 100 possessions (in the regular season leading up to the playoffs).
DATABASE ITEM NINE: Net Efficiency of the Home Team: This is Offensive Efficiency minus Defensive Efficiency for the Home Team.
DATABASE ITEM TEN: Home Team Net Efficiency minus Away Team Net Efficiency: This is the Net Efficiency of the Home Team minus the Net Efficiency of the Away Team.
In almost exactly 90% of the series, this number is positive. When it is, the better team according to efficiency had the home court advantage. Note that since home court advantage is determined by wins and losses, this means that wins and losses are extremely highly correlated with net efficiency. But for evaluating the results of series and for predicting series, net efficiency is even more reliable than simple wins and losses.
In about 2% of the playoff series, both teams had the same net efficiency; in these cases Item Ten is zero.
In almost exactly 8% of the playoff series, this number is negative. When it is, the team that is not as good according to efficiency was able to somehow get the home court advantage, from a tie breaker for example.
The most lopsided playoff series in history according to efficiency was the round one 1992 series between Miami and Chicago (the Michael Jordan Bulls). Miami’s record that year was just 38-44 while Chicago was 67-15. Chicago’s net efficiency that year was 11.0 and Miami’s was -4.2. Item Ten was 11.0 minus negative 4.2, or 15.2; this is the highest difference from 1980 to date. The Chicago Bulls were overwhelmingly favored and, sure enough, they defeated the Miami Heat three games to zero.
The series where the away team was better than the home team by the greatest margin was in round two in 1997 where the Seattle Supersonics were the Away Team and the Houston Rockets were the Home Team. Seattle had a net efficiency of 8.5. Houston had a net efficiency of 4.8. In this case Item Ten was -3.7. Despite being much less efficient than Seattle, Houston had the home court advantage. Both teams finished with 57 wins and 25 losses. Houston won game seven of this series at home and thus won this series 4 games to 3. Houston went on to the West Conference Final but lost to the Utah Jazz 4-2.
DATABASE ITEM ELEVEN: Home Team Net Efficiency minus Away Team Net Efficiency plus the Home Court Advantage Adjustment: The adjustment is always 1.4 points which represents the advantage that the home team has expressed in terms of net efficiency. Having home court advantage is approximately equivalent to having a net efficiency that is 1.4 points better than the one calculated from the regular season.
This Item Eleven essentially tells you how close the series should be, with the home court advantage factored in.
For example, for the Seattle vs. Houston series just discussed, Houston’s net efficiency was boosted from 4.8 to 6.2. Seattle still had the better net efficiency (8.5) but it lost game seven in Houston. In this case Item Eleven was 6.2 minus 8.5, which equals negative 2.3. Remember, this being negative is very unusual. Only about 8 percent of series have a negative Item Ten, and even fewer have a negative Item Eleven once the 1.4 is added to the home team’s net efficiency.
As another example, for the Miami-Chicago series discussed just prior to the Seattle-Houston one, Chicago had home advantage and so its net efficiency was boosted from 11.0 to 12.4; Miami’s net efficiency remained minus 4.2. In this case Item Eleven was 12.4 minus negative 4.2, which equals 16.6.
DATABASE ITEM TWELVE: Favored Team: This field is a text field and is either “Home” or “Away” depending on which team is favored. If Item Eleven is positive, as it is most of the time, the team with home court advantage was favored, and vice versa. Out of the total of 433 series, only 14 have been ones where the team without the home court advantage was favored to win the series. These series have been split seven apiece: seven times the Away Team won as expected and seven times the Home Team won unexpectedly. None of these were all that surprising as upsets because the Away Team was favored by only a small amount in all of them.
The favored team needs to be clearly identified so that the expected wins and losses process can be worked relatively easily; read on for details.
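To make Items Six through Twelve concrete, here is a minimal sketch (again illustrative Python, not QFTR's actual Excel and database implementation). The only number taken from above is the 1.4 point home court adjustment; the function name, the variable names, and the handling of an exact tie are my assumptions.

HOME_COURT_ADJUSTMENT = 1.4  # home court advantage expressed in net efficiency points (Item Eleven)

def favored_team(home_net, away_net):
    # Net efficiency = offensive efficiency minus defensive efficiency (Items Six and Nine).
    item_ten = home_net - away_net                    # Item Ten
    item_eleven = item_ten + HOME_COURT_ADJUSTMENT    # Item Eleven
    favored = "Home" if item_eleven >= 0 else "Away"  # Item Twelve (an exact tie is treated as "Home" here)
    return item_ten, item_eleven, favored

For the 1997 Seattle-Houston series above, favored_team(4.8, 8.5) gives roughly (-3.7, -2.3, "Away"), and for the 1992 Miami-Chicago series favored_team(11.0, -4.2) gives roughly (15.2, 16.6, "Home"), matching the worked numbers.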
DATABASE ITEM THIRTEEN: Away Team Actual Wins: The number of games actually won in the series by the Away Team.
DATABASE ITEM FOURTEEN: Home Team Actual Wins: The number of games actually won in the series by the Home Team.
DATABASE ITEM FIFTEEN: Expected Away Team Wins
DATABASE ITEM SIXTEEN: Expected Home Team Wins
For items fifteen and sixteen, the first step is that whichever team is favored (according to Item 11 and as shown in Item 12) is expected to win the number of games that wins the series. For best of seven series, the expected wins for the favored team is four. For best of five series, the expected wins for the favored team is three.
The expected wins for the team not favored (the underdog) is determined based on a very carefully constructed and calibrated scale. For extremely close series, the expected wins of the underdog is one game fewer than the number of wins needed to win the series. In a best of seven series between two very closely matched teams, the expected number of wins for the underdog is three (and the favored team is expected to win four games).
At the opposite extreme, for series where the difference between the teams is large, which is most common in the first round, the expected number of wins of the underdog is often zero.
In between the extremes of razor close series and very lopsided series, the expected number of wins for the underdog ranges between one fewer than the number of wins needed to win the series (which is three for best of sevens) and zero. The scale is calibrated down to net efficiency differences of just 0.1. Here is the actual scale with just the round number efficiency differences shown:
DIFFERENCE IN NET EFFICIENCY VERSUS EXPECTED WINS BY UNDERDOG SCALE
The first number just below here is Item Eleven (the difference in the net efficiencies with the home court adjustment factored in) and the second number is the expected wins for the underdog in a best of seven series.
0.0: 3.00 games
1.0: 2.90 games
2.0: 2.78 games
3.0: 2.58 games
4.0: 2.28 games
5.0: 1.93 games
6.0: 1.53 games
7.0: 1.20 games
8.0: 0.90 games
9.0: 0.60 games
10.0: 0.40 games
11.0: 0.20 games
12.0: 0.00 games
(If the gap is greater than 12, zero games are expected to be won by the underdog.)
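Here is a minimal sketch of a lookup against the published scale. Only the whole-number points above come from QFTR; the linear interpolation between them, used to approximate the 0.1 calibration, is my assumption, as is the use of the absolute value of Item Eleven for series where the Away Team is the favorite.

BEST_OF_SEVEN_SCALE = [
    (0.0, 3.00), (1.0, 2.90), (2.0, 2.78), (3.0, 2.58), (4.0, 2.28),
    (5.0, 1.93), (6.0, 1.53), (7.0, 1.20), (8.0, 0.90), (9.0, 0.60),
    (10.0, 0.40), (11.0, 0.20), (12.0, 0.00),
]

def expected_underdog_wins(item_eleven):
    # Expected wins for the underdog in a best of seven series.
    gap = abs(item_eleven)  # size of the adjusted net efficiency difference
    if gap >= 12.0:
        return 0.0
    for (x0, y0), (x1, y1) in zip(BEST_OF_SEVEN_SCALE, BEST_OF_SEVEN_SCALE[1:]):
        if x0 <= gap <= x1:
            # Interpolate linearly between the two nearest published points (assumption).
            return round(y0 + (y1 - y0) * (gap - x0) / (x1 - x0), 2)

As a sanity check, expected_underdog_wins(2.6) returns 2.66, which matches the 2010 Championship example worked out later in this Section.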
The scale, which you could look at as the all-important core of the entire Playoffs Sub Rating (and even of the entire RCR system), was very carefully constructed in accordance with and validated against all of the actual historical results of NBA playoff series from 1959-60 through 2009-10.
There is a different scale for best of five series which is constructed, calibrated, and validated in the same way.
So now we have Items Fifteen and Sixteen, the expected wins for each team, and we are ready to move on.
DATABASE ITEM SEVENTEEN: Actual Away Team Wins minus Expected Away Team Wins: Positive numbers are good and negative numbers are not good for the Away Team and its Coach.
DATABASE ITEM EIGHTEEN: Actual Home Team Wins minus Expected Home Team Wins: Positive numbers are good and negative numbers are not good for the Home Team and its Coach.
DATABASE ITEM NINETEEN: Away Coach: The Coach of the team that did not have the home court advantage is identified here. (This is a text field.)
DATABASE ITEM TWENTY: Home Coach: The Coach of the team that did have the home court advantage is identified here. (This is a text field.)
DATABASE ITEM TWENTY ONE: Away Coach Score
DATABASE ITEM TWENTY TWO: Home Coach Score
Items 21 and 22 are the most important and innovative end products coming out of the database.
Items 21 and 22 are calculated in a coordinated way rather than separately. For every playoff series, one of the coaches will have a positive Coach Score and the other one will have a negative Coach Score that is the inverse. For each series, if you add the two coach scores the result is always zero. For the entire database, if you add every single coach score the result is always zero.
These two coach scores are calculated for each playoff series in three steps:
STEP ONE
First, Item Seventeen times 100 is the preliminary Away Coach Score (Item 21). Similarly, Item Eighteen times 100 is the preliminary Home Coach Score (Item 22).
STEP TWO
Step two is that preliminary Item 21 and preliminary Item 22 are compared. Whichever is farther from zero is declared to be the “controlling score”. (Another way to think of this is that the absolute values of the two preliminary scores are compared, and whichever is greater is the “controlling score”. Of course, using the absolute value is a very temporary thing; the final coach score will be negative whenever the preliminary score was negative.)
STEP THREE
The controlling score is the actual score for the corresponding coach (the preliminary, the controlling, and the actual scores are all the exact same number). The other score (the “non-controlling score” if you will) is discarded, and in its place goes the inverse (the negative) of the controlling score. Note that the sign of the controlling score is never changed; only the other coach’s number is replaced. This inverse of the controlling score is the final score for the other coach.
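The three steps can be sketched in a few lines (illustrative Python; the inputs are Items Seventeen and Eighteen and the outputs are Items Twenty One and Twenty Two):

def series_coach_scores(away_gap, home_gap):
    # away_gap = Item 17 (actual minus expected Away Team wins)
    # home_gap = Item 18 (actual minus expected Home Team wins)
    prelim_away = away_gap * 100   # Step One: preliminary Away Coach Score
    prelim_home = home_gap * 100   # Step One: preliminary Home Coach Score
    # Step Two: whichever preliminary score is farther from zero is the controlling score.
    if abs(prelim_away) >= abs(prelim_home):
        away_score = prelim_away       # Step Three: the controlling score stands as-is...
        home_score = -prelim_away      # ...and the other coach receives its inverse.
    else:
        home_score = prelim_home
        away_score = -prelim_home
    return round(away_score, 2), round(home_score, 2)  # (Item 21, Item 22); they always sum to zero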
What are we actually doing with this procedure? We are identifying the biggest expectation gap; specifically, we are identifying whether the Home Team and Coach had the biggest gap between expectation and result (either positive or negative) or whether it was the Away Team and Coach that had the biggest gap (either positive or negative). Once the biggest gap is identified and scored, the other coach receives the inverse or opposite score.
Note that the gaps for all the playoff series in the database should, if the model is statistically valid, add up to very close to zero. In other words, the absolute value of the sum of the negative gaps should be very similar to the sum of the positive gaps. If they are substantially different, the scale can be slightly adjusted in a process known as recalibration. This type of recalibration is very important and very effective for ensuring quality control and for ensuring the reliable validity of results. For complete details, see Section Six.
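The recalibration check described just above can be sketched roughly as follows. The data structure (one Item 17 / Item 18 pair per series) and the comparison of positive and negative sums come from the paragraph above; everything else is an illustrative assumption, and the full procedure is the one in Section Six.

def calibration_imbalance(gap_pairs):
    # gap_pairs: one (Item 17, Item 18) tuple for every series in the database.
    gaps = [g for away_gap, home_gap in gap_pairs for g in (away_gap, home_gap)]
    positive = sum(g for g in gaps if g > 0)
    negative = sum(g for g in gaps if g < 0)
    # If the model is statistically valid, these nearly cancel out; a large remainder
    # is the signal that the scale needs a slight recalibration.
    return positive + negative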
EXAMPLE OF THE CALCULATION OF COACH SCORES FOR A PLAYOFF SERIES
Here is an example; we’ll use the 2010 NBA Championship between the Boston Celtics and the Los Angeles Lakers. Boston was the Away Team and Los Angeles was the Home Team. The Coach of Boston was Doc Rivers and the Coach of Los Angeles was Phil Jackson. Boston had a net efficiency of 3.9 and Los Angeles had a net efficiency of 5.1. The difference (Item 10) was 1.2. Item 11 is where the home court adjustment of 1.4 is factored in, so Item 11 is 2.6. Los Angeles was the favored team.
According to the scale QFTR uses to translate the adjusted difference in net efficiency into expected wins, the expected wins by the underdog in a best of seven series where the adjusted efficiency difference is 2.6 is 2.66. Boston, the underdog and the Away Team, actually won three games in that series. So for Boston, Item 17 (actual minus expected Away Team wins) was 3.0 minus 2.66 = 0.34.
Next, you can see that Item 21 (Away Coach Score) preliminary is .34 times 100 equals 34.
For Los Angeles, the expected number of wins (Item 16) was four and the actual number of wins was four. So Item 18 (actual minus expected Home Team wins) is 4 minus 4 equals zero. Then Item 22 (Home Coach Score) is 0 times 100 equals zero.
Now we compare the two preliminary coach scores:
Preliminary Away Coach Score: 34
Preliminary Home Coach Score: 0
The one farthest from zero (regardless of whether it is negative or positive) is 34, which is the one for the Away Team and the Away Coach. This is declared to be the controlling score, and the score of 34 is the coach score for this series for the Coach of the Away Team, which in this case was Doc Rivers. So in this particular series, the Away Team did a little better than expected, which earned the Coach, Doc Rivers, a “coach score” for this series of 34 points.
In accordance with step three (above) the inverse or opposite of the controlling score is minus 34 (-34). This is the score given to the Coach of the home team, which in this case was Phil Jackson. That is, Jackson’s preliminary score of zero is changed to minus 34 because Doc Rivers did a little better than he was supposed to according to the statistical model which, remember, is based on and validated by more than 600 playoff series played during a 50 year period ending in 2010.
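Putting the earlier sketches together, the 2010 Championship example can be reproduced (this assumes the illustrative favored_team, expected_underdog_wins, and series_coach_scores functions defined earlier in this Section):

# Boston (Away, net efficiency 3.9) at Los Angeles (Home, net efficiency 5.1)
item_ten, item_eleven, favored = favored_team(home_net=5.1, away_net=3.9)
# item_ten is about 1.2, item_eleven is about 2.6, and favored is "Home"

boston_expected = expected_underdog_wins(item_eleven)                 # 2.66 expected Boston wins
rivers_score, jackson_score = series_coach_scores(3 - boston_expected, 4 - 4)
# rivers_score is 34.0 and jackson_score is -34.0, matching the discussion above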
COACH PLAYOFF SCORES CLOSE TO OR EXACTLY ZERO
Note that with this method the only way for a coach score to be exactly zero is for the series to be decided exactly according to expectations. Realistically, the only series that can possibly be decided exactly according to expectations are ones which are supposed to be 4-0 routs (or 3-0 routs in best of fives). If the actual result is 4-0 (or 3-0), if in other words the actual result is identical to the expected result, both coaches will have coach scores for that series of zero. In this case there is no effect whatsoever on either coach’s playoff sub rating (or their overall RCR).
But coach scores can be very close to zero regardless of how close the series was expected to be. For example, the scale might project a series to be decided (statistically, of course) 4 games to 1.99 games. If the actual result is 4-2, then the underdog coach will have a coach score of 1: (.01 times 100). The favored coach has a coach score of -1 in this example.
The main point is that the model embedded in the database accurately measures the difference between expected and actual playoff wins for each playoff series (and for both coaches in each series). Again, the larger of the two differences (between actual and expected) is the operative one.
PLAYOFF COACH SCORES FOR ALL SERIES COACHED
For each coach, the combined total of all his coach scores for all series he coached is called his “Playoff Games Coaching Score”. This in turn is one of the four components of the Playoffs Sub Rating of the Real Coach Ratings system. As discussed earlier, this Playoff Games Coaching Score is more important than the other three components of the Playoffs Sub Rating combined.
NUMBER OF GAMES WON THAT SHOULD HAVE BEEN LOST OR NUMBER OF GAMES LOST THAT SHOULD HAVE BEEN WON
For each coach, the Playoff Games Coaching Score divided by 100 equals the number of games won that should have been lost (if positive) or the number of games lost that should have been won (if negative). This derived result is reported in the Coach by Coach Details Sub Section of the Rankings Section of Real Coach Ratings Reports. Although technically this is a statistical construct as opposed to exact reality, we know for a fact that the real life numbers are very, very similar to the calculated numbers.
By dividing the unexpected wins or losses by the total number of playoff games coached, we can then calculate a rate of unexpected wins for the good playoff coaches and the rate of unexpected losses for the bad playoff coaches. For more details, see Section Two above.
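A short sketch of how the per-series coach scores roll up into the figures reported in the Coach by Coach Details sub section (the function names are illustrative only):

def playoff_games_coaching_score(per_series_scores):
    # Sum of a coach's coach scores over every playoff series he has coached.
    return sum(per_series_scores)

def unexpected_wins(coaching_score):
    # Games won that should have been lost (positive) or games lost that
    # should have been won (negative).
    return coaching_score / 100

def unexpected_win_rate(coaching_score, playoff_games_coached):
    # Rate of unexpected wins (or losses) per playoff game coached.
    return unexpected_wins(coaching_score) / playoff_games_coached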
SCORES FOR GAMES WON AND LOST ACCORDING TO EXPECTATIONS
Coaches’ playoff sub ratings do not change at all when they win games they were supposed to win or when they lose games they were supposed to lose. If a series is decided in exactly the way it is supposed to be, both coaches get the experience points (two points for each game) and they get nothing else.
You can see from this how it is not an exaggeration to say that the Playoffs Sub Rating completely ignores raw wins and losses. Instead, it awards only differences between actual and expected wins and losses.
This is not only valid but is much superior to awarding or penalizing anything at all based on the raw wins and losses. Raw wins and losses are determined more by the quality of the players than by the quality of the coaches. What we want to know, and what the playoffs sub rating shows for each coach, is whether that coach won any games the players would not have won were it not for the above average coaching. And of course we also want to know, for each coach, whether that coach lost any games the players alone would not have lost were it not for the below average coaching.
This ends the primary and in detail discussion of the Playoffs Sub Rating. For those who are a little confused, and/or for those not convinced that the system just discussed in detail works well, please read the following, which is a revised version of a discussion that first appeared in the May 2010 User Guide (when the framework of the new system was established but all the details and the database were awaiting development). The following is a relatively simple but accurate and effective summary of the QFTR Playoffs Sub Rating system.
SUPPLEMENTARY, SUMMARY DISCUSSION OF THE COACHING SCORES FOR THE PLAYOFFS SUB RATING
For each playoffs series we start with four measures: the offensive efficiency of the two teams and the defensive efficiency of the two teams (all from the regular season, of course). Efficiency is how many points are scored or how many points are given up per 100 possessions. Over the course of the regular season, the thousands of possessions result in precise efficiency numbers in which seemingly very small differences are actually big differences between teams, easily big enough to cause wins or losses in the playoffs.
Then for each team we subtract the defensive efficiency from the offensive efficiency to find the net efficiency. Most but not all playoff teams have positive net efficiency numbers and most teams that do not make the playoffs have negative net efficiency numbers.
Then we add a small “bonus” amount to the net efficiency of the team that has the home court advantage in the series.
Then we compare the two net efficiencies and whichever team is higher is the favorite. Of course this is true in real life: the team with the better net efficiency beats the other team the vast majority of the time, although when the differences are smaller this is not so certain.
The exact difference between the two net efficiencies is crucial, because it determines the likelihood or probability of the favored team winning. The greater the difference in net efficiency is, the greater the probability that the better team will win the series. Assuming no injuries, in many first round series and even occasionally in a second round series, the probability that the better team will win the series is almost 100%. QFTR has carefully constructed a scale to translate deceptively small differences in net efficiency into how many games the underdog should win on average in a best of seven (and a best of five) series. For example, if the adjusted difference in net efficiency is 5.0, the underdog will on average win about 1.9 games in a best of seven series (with the favored team winning 4 games). This average number of wins by the underdog is usually called the “expected number of wins”.
Next, for each playoff series, we compare the number of games actually won and lost by the coach versus what the expected number of wins and losses are. The difference between the actual and the expected is the all-important thing; this difference is then amplified (with a multiplier) to accurately reflect the great (and underestimated by the general public) importance of coaching in the playoffs.
Unexpected wins and losses are rewarded and penalized heavily but not excessively. Unexpected playoff losses are one of the worst things that can happen to a team and a franchise. Among other things, unexpected losses waste the owners’ money, because they partly waste the efforts of a lot of players and managers, and because they make the franchise less likely to attract top free agents. Obviously, unexpected losses also waste the talents and efforts of the players. Unexpected playoff losses are a nightmare and the fewer of them you have the better.
Note that for a coach who is exactly good enough to win exactly the number of playoff games he is supposed to win and no more than that, statistically speaking, unexpected playoff losses are going to be exactly offset by unexpected playoff wins once the sample size (the number of playoff games, in this case) is large enough. In real life, this means that all coaches are going to have a series once in a while where the team performs below standard (and loses one or more games that should have been wins), but these will eventually be statistically offset by that coach's unexpected playoff wins.
This is the most crucial thing you have to keep in mind: the main purpose of the playoffs sub rating system is, on the downside, to flush out and penalize coaches who have more unexpected playoff losses than unexpected playoff wins. On the upside, the primary purpose of the advanced system is to flush out and reward coaches who have more unexpected playoff wins than unexpected playoff losses.
Quest for the Ring already knows many of the basketball strategies and tactics that work better in the playoffs than in the regular season, and you do too if you read the site, because we review and illustrate most of them from time to time.
======= SECTION FOUR: DISCUSSION OF AND CALCULATION OF FACTORS USED FOR THE REGULAR SEASON SUB RATING =======
There are four components of the Regular Season Sub Rating:
(1) Number of Regular Season Games Coached
(2) Number of Consecutive Regular Season Games Coached with Current Team
(3) Number of Regular Season Wins
(4) Number of Regular Season Losses
1 NUMBER OF REGULAR SEASON GAMES COACHED
One point is given for each regular season game coached up to 500 games, which is about six seasons' worth of games. If a coach has not learned just about everything he needs to know by this point, it is unlikely he ever will, so the award for experience is sharply reduced for all games coached beyond 500. 0.25 points (1/4 of a point) is given for games 501 through 1,000. 0.06 points (about 1/16 of a point) is given for all games over 1,000. Note that in early versions nothing was given for games coached in excess of 1,000; the latest version corrects that very minor error by recognizing that even long veteran coaches might make extremely small improvements in their later years.
What about rookie and near rookie coaches? Just because they have never coached in the NBA, should their experience rating be zero? No, I don't believe so. They either have substantial coaching experience in other Leagues, or they were extremely talented and/or intelligent players, or both, or else they would not have been hired to be a head Coach in the NBA. So any coach who has coached for fewer than 200 NBA games is given exactly 200 points for experience. So rookie coaches start out with Real Coach Ratings of 200 and they go up or down from there. For new coaches, the Regular Season Games Coached is fixed at 200 until the coach has coached 200 games; then it goes up from there (by 1 for each game through 500, by 0.25 for games 501 through 1,000, and by 0.06 for any games above 1,000).
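A minimal sketch of Component 1, including the 200-point floor for rookie and near rookie coaches (illustrative Python; the breakpoints and point values are exactly the ones stated above):

def regular_season_experience_points(games_coached):
    if games_coached < 200:
        return 200                                          # rookie / near rookie floor
    points = min(games_coached, 500) * 1.0                  # games 1 through 500: 1 point each
    if games_coached > 500:
        points += (min(games_coached, 1000) - 500) * 0.25   # games 501 through 1,000: 0.25 each
    if games_coached > 1000:
        points += (games_coached - 1000) * 0.06             # games beyond 1,000: 0.06 each
    return points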
2 NUMBER OF CONSECUTIVE REGULAR SEASON GAMES COACHED WITH CURRENT TEAM
This is a supplementary experience score which most benefits coaches who have gone the longest without being fired by their current teams. The points given are 0.30 (3/10 of a point) for all games coached, up to 1,000 games, by the coach for the team the Coach is currently working for.
The one side of the coin regarding this is that the coach must be doing what the organization wants to avoid being fired, and he can't be a total failure basketball wise, so starting with those things he deserves credit in proportion to how long he has kept his post. The other side of the coin is that the more experience a Coach has with a particular team, the more valuable he is to that franchise, because he knows everybody and everything concerned with the franchise better and better with each passing year. Generally speaking, the more successive games a Coach has coached with the same team, the more effectively and efficiently he can help the team squeeze out wins that would otherwise be losses.
Jerry Sloan, who coming in to 2009-10 had coached a mind boggling 1,668 games for the Utah Jazz, is the ultimate example of a Coach who due to his many years with the same team is going to be more effective and efficient than he would be if he had just switched to a different team. Due partly to this factor, do not be surprised if the Jazz become a losing team shortly after Sloan finally retires.
Another name for this factor might be "franchise specific experience." For 2009-10 the Washington Wizards hired a new head Coach, Flip Saunders, who has a lot of prior experience with other teams and has a relatively high rating. But he is brand new to the Wizards, so be careful not to expect miracles or even to assume that his coaching is going to be as good as it has been in the past from the get go. Look instead for the Wizards to get a little better as the season goes along and in the coming years if Saunders remains the coach. This is because Saunders needs time to merge his skills and abilities with the specific factors involved with making the Wizards a winning team.
3 NUMBER OF REGULAR SEASON WINS
Four points are assigned per regular season win.
4 NUMBER OF REGULAR SEASON LOSSES
Minus 5.5 points are assigned per regular season loss.
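Pulling the four components together, a minimal sketch follows (it assumes the experience function sketched under Component 1 and the 1,000 game cap on the consecutive-games bonus stated under Component 2):

def regular_season_sub_rating(games_coached, consecutive_games_with_current_team, wins, losses):
    experience = regular_season_experience_points(games_coached)             # Component 1
    franchise_bonus = 0.30 * min(consecutive_games_with_current_team, 1000)  # Component 2
    return experience + franchise_bonus + 4 * wins - 5.5 * losses            # Components 3 and 4

For example, under this sketch a coach with 400 games coached, all 400 with his current team, and a 230-170 record would come out at 400 + 120 + 920 - 935 = 505.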
WHY THE PENALTY FOR LOSING A REGULAR SEASON GAME USUALLY EXCEEDS THE GAIN FOR WINNING ONE
You must keep in mind that any coach who has been fired for not winning enough in the regular season, for not winning enough in the playoffs, or for both, and has not been rehired by another team, is not on the list of coaches being rated. We don't care about them. In theory we are supposed to be evaluating mostly coaches who are among the best in the country.
The whole idea in multi-billion dollar professional sports is to win more than you lose, and that most obviously and most definitely includes the coaches. So a 50/50 record in either the regular season or in the playoffs is not good enough long term, and coaches who are not better than .500 sooner or later get fired and not rehired, and those who have met that fate already are not on the list of current coaches.
To reflect the reality that coaches who cannot win more than they lose are sooner or later going to be fired, and will most likely never advance in the playoffs before they are fired, losses have to carry a bigger negative number than the positive number given for wins. But we have to avoid getting carried away. So when I add in the amount given for experience, the apparent gap between the award for winning and the penalty for losing shrinks down to a small amount.
Now consider the true underlying net positive and negative scores for the four types of regular season games and results, which you get by combining the experience points with the points for the win or the loss:
TRUE REGULAR SEASON COACH GAME SCORES FOR WINS
For the majority of coaches, this will be 5 Points: 4 points for the win and 1 point for the experience. Here is the breakdown for each type of coach:
Rookie and Very New Coaches (less than 200 games): 4 for the win + 0 for the experience equals 4.0 points
Relatively New Coaches (201 to 500 games): 4 for the win + 1 for the experience equals 5.0 points
Veteran Coaches (501 games to 1,000 games): 4 for the win + .25 for the experience equals 4.25 points
Ultra Veteran Coaches (more than 1,000 games): 4 for the win + 0.06 for the experience equals 4.06 points
TRUE REGULAR SEASON COACH GAME SCORES FOR LOSSES
For the majority of coaches, this will be -4.5 points: -5.5 points for the loss and 1 point for the experience. Here is the breakdown for each type of coach:
Rookie and Very New Coaches (less than 200 games): -5.5 for the loss + 0 for the experience equals -5.5 points
Relatively New Coaches (201 to 500 games): -5.5 for the loss + 1 for the experience equals -4.5 points
Veteran Coaches (501 games to 1,000 games): -5.5 for the loss + .25 for the experience equals -5.25 points
Ultra Veteran Coaches (more than 1,000 games): -5.5 for the loss + 0.06 for the experience equals -5.44 points
In summary and in comparison:
--Rookie and very new coaches get 4 points for regular season wins and lose 5.5 points for losses.
--Relatively new coaches get 5 points for regular season wins and lose 4.5 points for losses.
--Veteran coaches get 4.25 points for regular season wins and lose 5.25 points for losses.
--Ultra Veteran Coaches get 4.06 points for regular season wins and lose 5.44 points for losses.
Important note: the rookie and very new coaches actually get the same points as the relatively new coaches when you look at the bigger picture, because they already have received 200 experience points for their first 200 games.
The key thing to note here is that with respect to wins and losses the regular season sub rating is a little biased in favor of relatively new coaches versus the veteran coaches. This is on purpose, of course. This substantially offsets what would otherwise be an unfair advantage in the rating system. The more experienced coaches are expected to do somewhat better in winning and losing in order to achieve a net positive from their winning and losing. This is the primary mechanism used by QFTR that substantially evens the playing field between coaches of widely differing amounts of experience, without being unfair to any type of coach. Without this slightly differing treatment, the ratings system would be biased to some extent in favor of the veteran coaches, because the veteran coaches are eligible for far more points from the sheer number of experience points they get, from the consecutive games coached with current team item, and often from any or all of the items in the playoff sub rating system.
In any future tweaking of the RCR system, one of the areas most likely to be tweaked is points given or taken away for regular season wins or losses by the different types of coaches. A case can be made that relatively new coaches should be even more favorably treated in the regular season relative to the veteran coaches than they already are. But if there is any future tweaking, we will as always be careful to avoid going overboard.
See Section 5 and especially Section 6 for more on the difficulties in comparing coaches with widely different numbers of games coached.
======= SECTION FIVE: EVALUATION OF COACHES AND SPECIFIC INTERPRETATION OF RATINGS =======
The primary objective of Quest for the Ring (QFTR) is to determine and report exactly how NBA playoff games are won and lost. Since in the playoffs, and especially in later rounds of the playoffs there is usually very little difference between how good the players are, any difference in the coaches, sometimes including even very small differences, can determine who wins the series. Therefore, one of the most recurring themes at QFTR is what is good and what is bad coaching for the playoffs. This means that QFTR gives very heavy attention to coaching in its reports on the main home page.
Further, the general public is unaware of just how important coaching is in the playoffs, especially in the Conference Finals and in the Championship, and this fact makes QFTR all the more motivated to keep reporting on the subject. Very, very few other basketball writers attempt to cover this subject at all; it’s like a lonely frontier out here. Regardless of there being very little if any competition for reporting on pro basketball coaching, QFTR uses the same high quality standards and high and reliable quality control for this area as it does for other areas (which other writers and broadcasters do attempt to cover).
Since there is so much on the subject in the hundreds and hundreds of reports on the QFTR home page, any single article on the subject, assuming it was not a full scale and lengthy book, could only highlight the main points. Similarly, this Section of this User Guide (which obviously can not be even a short book in length let alone a long one) can only discuss some of the most important points about coaching in the playoffs in particular and in the NBA in general.
Moreover, this Section has the second objective of explaining specifically how to use the overall ratings and the two sub ratings of the Real Coach Ratings System. The need for this second focus further limits the amount of coverage we can devote to the evaluation of coaches topic. We will try to more than scratch the surface here, but trust me; this topic is way too big for this Section of this Guide.
Given all of the limitations we have for this Section, anyone who wants “full and complete coverage” of what good and bad coaching is in the NBA, and especially in the playoffs, should read any or all of hundreds of reports that are at the QFTR home page.
“Evaluation of coaches” generally will be covered first in this dual focus Section and then “specific interpretation of ratings” will be last.
PART ONE OF TWO PARTS OF SECTION FIVE: EVALUATION OF COACHES
IMPACT OF COACHING IN THE REGULAR SEASON VERSUS THE PLAYOFFS
Theoretically, unless he is stuck with a truly lousy roster, any reasonably good coach can win a lot of regular season games and get his team into the playoffs. Plus, any coach at all, including a bad one, can squeak a very good or great team into the playoffs. For any reasonably good coach, merely getting into the playoffs is really not much of an accomplishment at all.
Many, many owners, managers, and fans do not seem to understand this, but the only thing that really matters with regard to coaching is what happens in the playoffs. Only the truly good coaches can win in the playoffs. The playoffs are where the wheat is separated from the chaff. In the NBA, the regular season is quite honestly nothing more than the preseason for the "playoff season," which is the only season which really matters when all is said and done.
Another way to look at the regular season is that it is a sort of D-League for the off-season. What I mean by that is that owners, managers, and coaches should be watching other teams in the regular season so that they can spot up and coming players who they should try to obtain in the off season (and to a lesser extent in trades in the regular season prior to the trading deadline in February).
Playoff games are generally more intense in all respects: individual players' efforts, team play as a whole, and coaching efforts are all ramped up. And as most of the general public is generally aware, most teams ramp up their defending in the playoffs.
CERTAIN VETERAN PLAYERS CAN COACH THEMSELVES TO SOME EXTENT
Always keep in mind that older, more veteran teams can coach themselves to one extent or another, particularly if the roster is both highly skilled and highly experienced. It doesn't matter who comes up with the winning schemes and patterns; what matters is that someone does. Younger teams, however, always need a good coaching staff to make headway in the playoffs.
Quest for the Ring has gone on record claiming that the 2007-08 Champion Boston Celtics are a good example of a team that could coach itself well to a large extent.
However, coaches are important in the late playoff rounds even for teams that can partly coach themselves. Coaches determine playing times, which are much more important than most people realize. If the coach of a really good, veteran team that is to some extent “coaching itself” often inserts the wrong players in the game at the wrong time and/or does not have the playing times roughly correct, and/or has a player completely benched who should be playing, then the team will be damaged from bad coaching regardless of how well the players are “coaching themselves”.
COACHES' NUMBER ONE OBJECTIVE IS TO AVOID BEING FIRED
The number one objective for all coaches, but especially for rookie and newer coaches, is to avoid being fired. Calculations indicate that the average Real Coach Rating is currently 706 and the median is about 275. So the objective of all rookie coaches must be to increase their starting rating of 200 toward the median and later on toward the average of 706 in as few years as possible.
Although there will occasionally be exceptions to the rule, coaches who move up even a little from 200 are generally safe from being fired while those who move down from 200 are not safe. Even achieving a rating of just 250 gives the coach a little job security, 325 gives substantial job security, and 400 gives very substantial job security. I’m not saying that the job security achieved for those relatively modest ratings is a good or right thing. Rather, I am merely reporting what is going on in the real world.
The firing of coaches with ratings higher than 250 is relatively uncommon. But when a coach who has a rating of 250 or higher is fired, he is likely to be hired by a new team, most often for the very next season, but sometimes after a delay of a year or two or three. Coaches with ratings higher than 400 who are fired are very likely to be hired by a new team within at most a few years. If a coach with a rating higher than 600 is fired but is never rehired, then something exceptional happened; for example, maybe there was a complete and humiliating collapse in a playoff series. Or perhaps there was a vicious argument between that coach and one of the managers or the owner.
Note also that there is a huge exception to the general rule of thumb that coaches with ratings below 200 (and especially those with ratings below zero) are not safe from being fired. Long veteran coaches, those who have coached about 800 games or more, are often not fired even if they are very poor playoff coaches whose sharply below zero playoff sub rating drives their overall coach rating below zero. This is because many owners do not understand that some coaches do well in the regular season but can not do well in the playoffs, or worse, because of owners who are willing to settle for a good, “dependable” regular season coach even if he is a bad playoffs coach.
You can think of the range between 200 and 400 as "the proving ground" for coaches. Most coaches who drop below zero instead of going up from 200 during their first 3-6 years will be bounced out of the NBA. No mercy is given for coaches stuck during all of those years with sub par teams.
QFTR recommends that coaches who have ratings below 200 for more than about five straight years, and especially coaches who have ratings below zero for about five straight years should be fired unless the managers and owners involved are sure that the coach has not had competitive players to work with, or unless the managers and owners involved are sure that the coach is getting better at his job, or unless there is some other unusual mitigating factor.
Coaches, whether they are newer ones or long veterans, who maintain their jobs with Real Coach Ratings below 200, and especially with Real Coach Ratings below zero, are frequently going to be men who have very cordial relations with the managers and owners. In other words, they are being kept on the payroll because the managers and/or the owners involved personally like the coach in question enough to brush aside any concerns about whether that coach is doing a good enough job for their team. These dubious coaches are given the benefit of the doubt or, in other words, sort of a free pass. These free passes generally don’t last for longer than roughly six years for newer coaches, but can last indefinitely for long veteran coaches.
It is not just owners and managers who can be fooled into thinking that a coach is a good one just because he has been coaching for many, many years. It honestly seems that most basketball writers and broadcasters are fooled in this way also. And of course, much of the general public is also fooled.
It is also true that some managers and owners live in fear that they might go from bad to worse if they exchange one coach for another. They simply do not have enough courage to strike out and try a rookie or a near-rookie coach, or to pick up a coach who has been fired by another team but who deserves a second chance.
The key is balance. On the one hand you don't want to be stuck out of caution or fear with a veteran coach who is simply not among the best coaches. On the other hand, you can't just strike out and pick any one who has never coached an NBA team before but seems like he might be a good coach. Rather, you have to do a lot of homework and research. You have to spend a lot of time and make every effort to find that one coach out of a hundred candidates who will actually become one of the better and maybe even one of the best NBA coaches.
Note that in the real world, most owners who strike out on this subject do so by erring on the side of too much caution or fear. In the real world, it appears to be pretty rare and pretty difficult for an owner to choose a coach who has never coached in the NBA before who ends up being, in effect, a waste of time. Due to the fact that coaching in the NBA is at least a little more complicated and a lot more important than most people and owners think it is, owners who gamble a little by trying a coach who has never coached in the NBA before have a fairly good chance to get a big reward for the little gamble.
THE COACH RUT AND WHY IT CAN EASILY HAPPEN TO OTHERWISE DECENT FRANCHISES
Teams should avoid getting stuck in a rut that the public is completely unaware of but that QFTR has proven exists. This rut is where a team has a very good regular season coach but a lousy playoffs coach. It can be extremely difficult to get out of this rut because it is very hard to fire a coach who usually does very well in the regular season.
Plus, which coaches are not good for the playoffs is basically a secret from the public. This is one of QFTR’s favorite and most important topics, and yet it took even us until November 2010 before we assembled all the hard proof and officially reported which coaches are lousy playoffs coaches. And this is most likely the first time in history anyone has carefully and mathematically investigated this. It took many hours of work to prove this beyond a shadow of a doubt and it was not very easy to do. So it is understandable that most people are in the dark and would not believe that there are a substantial number of coaches who are very good in the regular season but are poor in the playoffs.
The point is, this is basically unknown territory, so don't expect that these good in the regular but bad in the playoffs coaches are going to be fired when they should be (or never hired in the first place). Instead, expect that teams are going to make mistakes with these types of coaches year after year after year. People and things other than the coach will get the blame, and in some cases other people and things are also to blame. But the problem remains that this type of coach is very seldom if ever blamed simply because no one is aware that this type of coach exists and is fairly common.
Most lousy playoffs coaches get away with being lousy playoffs coaches year after year after year as long as they are good regular season coaches. A franchise can be in the dark about this for many years, for the entire time the coach is the coach. A team stuck with this type of coach will typically go along year after year thinking they have a chance to win the Quest, whereas their coach may be so poor in the playoffs that realistically they have no chance whatsoever to win it regardless of who the players are.
NEVER EVER HIRE A COACH WITH A POOR PLAYOFFS RECORD IF YOU WANT TO WIN A CHAMPIONSHIP
The best way to explain this section is with an example. The Denver Nuggets hired George Karl in January 2005 as their head coach despite the fact that he had a poor playoffs record and rating. RCR did not exist back then (nor would the Nuggets use RCR even now), but they did have Karl’s playoffs win/loss record, which should have been enough for them to avoid the mistake of hiring Karl. Specifically, when the Nuggets hired Karl, his playoffs record was 59-67. While coaching the Nuggets, Karl's playoffs record is 15-26 as of January 2011. So overall, his playoffs record as of January 2011 is 74-93. Percentage-wise, Karl’s playoff record has gotten worse while he has coached the Nuggets, not better (despite a strong result in 2009). In short, Karl had a losing playoffs record when he was hired and it has only gotten worse since.
The Nuggets were wrong to hire Karl and they are also wrong not to fire him unless he wins the NBA Championship within the next year or two. By the way, the Nuggets probably were in 2007, definitely were in 2008, possibly were in 2009, and possibly again in 2010 talented enough to win a Championship if the playoffs coaching had been top notch. The now fired Nuggets general managers of the 2006-2010 era were experts at bringing relatively obscure but surprisingly good players (especially surprisingly good scorers) to the Nuggets.
Coaches with losing playoff records are fired by all truly serious NBA franchises these days regardless of regular season records. The absolute top franchises, including at least the Lakers, the Celtics, and the Spurs, would never in the first place hire a coach with a losing record in the playoffs. If their coach ever dropped to where he had a losing playoffs record, he would be fired by the top franchises regardless of how fantastic the coach’s regular season record was.
Why did the Nuggets hire Karl? I can only offer educated guesses. The Nuggets either knew in advance they would never win the Quest with Karl and hired him anyway, or they figured incorrectly that Karl's playoff record was trumped by better aspects of Karl's record, or they decided that Karl's playoff record could be excused for irrational reasons, or there was some other unknown, off the wall reason for hiring Mr. Karl.
The most favored specific “off the wall” theory regarding why Karl was hired is that the Nuggets decided roughly in 2002 to go for a certain kind of player who can be a major bargain because other teams generally avoid that kind of player. The Nuggets decided to go for more volatile players who might need to be contained by a crack the whip type of coach so that they don't "fly off the reservation" and harm team cohesion and morale. Karl is in fact a good coach if you have a bunch of players more emotional and more volatile than average, because for one thing he will not hesitate to bench players who get enraged about this, that, or the other thing. He will bench anyone at any time and for any reason, good or not.
Whatever the Nuggets' management thought, they thought wrongly. If you are a team owner or manager, you can not afford to take any risk or to make any benign assumptions or weak rationalizations when you choose a head coach. If a coach has a poor playoffs record, you have no choice but to not hire that coach if you are serious about winning the Quest. There are going to be coaches who are good enough to do well in the regular season but not good enough to prevail in the playoffs. You should not be the goober who hires one of them, obviously. Let some other franchise/team get stuck in the mud for years and years with that type of coach.
I have to be blunt and a little repetitive here to make absolutely sure I am understood. You should never, ever do what the Nuggets did if you are serious about winning the Quest. Your coach should have a good record for BOTH the regular season and the playoffs. The playoff record is even more important than the regular season record.
Finally, before leaving this crucial subject, I am going to state that given the choice between on the one hand a younger coach who is considered to be a good or great up and coming coach, but who has no NBA playoff record at all, and not much of a regular season one, and on the other hand a long-term veteran coach who has a decent, good, or even great regular season record but a poor, losing playoffs record, you are better off choosing the young coach with no playoff record.
In point blank and clear summary, hiring a coach with a bad playoffs record is one of the worst things you can do if you want to win the Quest.
MORE ON THE EVALUATION OF GEORGE KARL
Ever since our project started QFTR has focused on George Karl more so than any other coach (simply because when we first started we only intended to be a Denver Nuggets site). This may sound sarcastic but we actually do not intend it to be: George Karl has, by doing things that are wrong (or unwise if you prefer) alerted QFTR to many things that you DON’T want to do if you are coaching playoff games in the NBA.
Karl will go down in history as not the only one but certainly as one of the all-time most famous coaches among the ones whose coaching beliefs and methods work much better in the regular season than they do in the playoffs. There have always been coaches like this, there are other coaches like this right now, and there will always be coaches like this. But Karl will always stand out as a particularly good example of this kind of coach, a “textbook case” if you will.
Out of twenty years in the playoffs, Karl has managed winning playoff records in only four. One of those was 2009, which was surprising to say the least. That year, Karl tried an ultra aggressive and energetic type of defending and proved that it can win you a few playoff games you would otherwise have lost, as long as the referees fail to call a good number of the fouls. However, the hole that Karl dug in many earlier years was so deep that the Nuggets' miraculous 2009 playoffs campaign was not enough to lift him all that much in his playoffs sub rating. In the 2009 playoffs, his win-loss record went from 62-83 to 72-89. (Then in 2010 it went to 74-93.) Even after 2009, and still right now, Karl shows up in the win-loss record and in the ratings as a very poor playoffs coach.
PART TWO OF TWO PARTS OF SECTION FIVE: SPECIFIC INTERPRETATION OF RATINGS
In late 2010 QFTR evolved its general and vague coach recommendation system into a more organized and exact one tied to Real Coach Ratings, which can be called the QFTR Coach Recommendation System (CRS). Separate playoffs and regular season recommendations are given for all NBA head coaches. These are given in a report that appears within a few days (or a few weeks at the most) of the Real Coach Ratings Reports. Specifically, the Reports with the official recommendations are scheduled for late August and for October; however, production limitations will sometimes cause them to be late.
QFTR gives two recommendations for each coach but paradoxically does NOT give any overall recommendation. Two main reasons explain this paradox. First, it turns out that there is a big, big difference for a lot of coaches in how well their coaching works out in the regular season versus how well it works out in the playoffs. It turns out that it is relatively common for pro basketball coaches to be very good regular season coaches but poor or very poor playoffs coaches. For these coaches, the way they look at and understand basketball and how they have their team playing works better in the regular season than it does in the playoffs. Because of this alone, making combined regular season / playoffs recommendations would be far less productive than you might think.
The second reason why we don't even attempt an overall recommendation is that franchises will weigh the importance of the regular season and the playoffs differently. Franchises who already know they are most likely not going to be in the playoffs for a while, and franchises who think the regular season is more important for them than the playoffs, might use the regular season recommendations more than the playoff ones.
However, QFTR strongly disagrees with any owner or manager who places the importance of the regular season above the importance of the playoffs. By rights, the playoffs should always be considered as more important than the regular season. If a team is not going to be making the playoffs this year it should by rights have a great playoff coach anyway, so when the team does make the playoffs in the near future it has the right coach for winning in the playoffs.
RECOMMENDATIONS ABOUT THE RECOMMENDATIONS
QFTR highly recommends that all franchises use the playoff recommendations more strongly than they do the regular season recommendations.
But some words of caution are in order. Never completely ignore the regular season recommendations. It is going to be very unusual for a great playoff coach to be a not so good regular season coach (unlike the reverse, which is surprisingly common), but if there ever were a coach with an outstanding playoff record yet a poor regular season record, you would want to avoid that coach as a kind of insurance policy against having the wrong coach overall. This scenario could play out if the number of playoff games coached was relatively low and a fluke amount of statistical error resulted in an artificially high playoffs rating (while the lower regular season rating was exactly accurate).
At an absolute minimum, the playoffs should be considered equal in importance to the regular season and the playoff coach recommendations should be just as important as the regular season coach recommendations.
One thing QFTR could do (and what QFTR would do if forced to make an overall recommendation) would be to use a formula where the playoffs rating was more important than the regular season rating. Or for that matter we could change the overall Real Coach Ratings system so that it was even more weighted in favor of the playoffs than it already is. We choose not to do either of these things at this time because of the complexities already discussed and because of other factors not mentioned here.
To some extent this discussion about which recommendations to use is not completely on point, because obviously, the best thing and what you want is a coach who is above average for BOTH the playoffs and for the regular season. Unfortunately however such coaches are much rarer than most people think they are. It turns out that although it is not rocket science, coaching basketball at the NBA level is much more difficult and complex than most people think it is. And then NBA playoff coaching is more difficult and complex than regular season coaching is. Ironically, many of the head coaches themselves apparently underestimate how difficult their job is and many of them don’t even begin to understand the magnitude and nature of the differences between the regular season and the playoffs.
THE PHIL JACKSON ADJUSTMENT FOR THE PLAYOFFS COACH RECOMMENDATIONS
Phil Jackson is by far the best and most successful NBA playoffs coach among current and recent head coaches. Actually, he is most likely the best NBA playoffs coach of all time (although there are a handful of other ones who are in Jackson’s ballpark). Jackson has repeatedly won playoff games he wasn't supposed to win versus some of the very best of the other NBA coaches. Jackson has won just about 42 playoff games he wasn’t supposed to win out of a total of 323 playoff games. Jackson’s all time playoffs record is 225-98 but according to the QFTR investigation his “par record” is just 183-140.
This means that if you think (as most of the general public does) that Phil Jackson wins in the playoffs mostly according to how good his players are, and that he has little or no impact on how many wins his teams get, you are completely wrong. Jackson has had good teams, since he was "supposed to be" 183-140 in the playoffs, but he boosted that to 225-98. That is such a big improvement that we know, for example, that Jackson would not have won 11 rings (and very possibly not even half a dozen rings) if he were an average playoffs coach. We also know that Jackson would have won very few if any rings if he were a well below average playoffs coach.
Some coaches have come up against Phil Jackson in many more playoff games than others. Rick Adelman, Jerry Sloan, and Gregg Popovich lead this pack, having faced Jackson in 29, 27, and 26 playoff games respectively. Adelman has pretty well held his own, but Sloan and especially Popovich have been hammered by Jackson. After these three there is a group of five coaches who have faced Jackson in between 12 and 16 playoff games, and three of those five have been handed (by Jackson) a big bunch of losses that should have been wins. The damage to them, though, is far less than the damage to Popovich.
For a big majority of coaches, the more playoff games a coach has played against Phil Jackson, the more his Playoff Rating is going to be depressed because Jackson has heavily dominated in playoffs coaching. Therefore, for my playoff coach recommendations, I decided to remove most of the bias caused by big differences between coaches in the number of games versus Phil Jackson. For determining the recommendations, 4/5 or 80% of the scoring resulting from games versus Phil Jackson is removed.
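For readers who like to see the bookkeeping spelled out, here is a minimal Python sketch of the 80% removal just described. The data layout and the example point values are hypothetical, since the internal RCR data structures are not published; only the 80% removal fraction comes from the text above.

# Minimal sketch of the Phil Jackson adjustment used for the playoff
# recommendations. Data layout and point values are hypothetical; only the
# 80% removal fraction is taken from the description above.

JACKSON_REMOVAL_FRACTION = 0.80

def adjusted_playoff_scoring(game_scores):
    # game_scores: list of (points_earned, opposing_head_coach) pairs, where
    # points_earned is whatever a playoff game contributed (plus or minus)
    # to the coach's playoffs scoring.
    total = 0.0
    for points, opponent in game_scores:
        if opponent == "Phil Jackson":
            # keep only 20% of the effect of games coached against Jackson
            total += points * (1.0 - JACKSON_REMOVAL_FRACTION)
        else:
            total += points
    return total

# Hypothetical example: two ordinary games and one game versus Jackson
print(adjusted_playoff_scoring([(40, "Jerry Sloan"),
                                (-60, "Rick Adelman"),
                                (-100, "Phil Jackson")]))  # -40.0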
The "Phil Jackson adjustment" is NOT done in the main Real Coach Ratings Report. All of the numbers in the playoffs sub ratings in that Report include all games played against Phil Jackson. Only in the official recommendations Report are in effect 80% of the games versus Phil Jackson taken out.
The advantages of the Phil Jackson adjustment outweigh the disadvantages. The main advantage is that without it, coaches who have been severely hammered by Jackson (due to having to play him more than other coaches) will have misleadingly low ratings.
However, the disadvantage is that if a coach goes up against Phil Jackson in the playoffs, the coach might in theory appear to be a little more competitive versus Jackson than he really is. Really though, that is a moot point because Phil Jackson’s ratings are far, far ahead of any other coach’s whether or not the other coach’s ratings are boosted by the Phil Jackson adjustment.
RATINGS FOR NON-CURRENT COACHES CAN BE CALCULATED AND PROVIDED
Note that QFTR can in theory include in these recommendations any coach who has ever coached in the NBA (subject to the 25 playoff games and 200 regular season games minimums). If you need a specific coach evaluated, contact QFTR.
EVALUATION SCALES
QFTR has had evaluation scales for players since 2007, but it took until late 2010 before evaluation scales for coaches were developed. Prior to then the overall RCR system was not sufficiently developed to warrant a formal evaluation scale. As already mentioned, the relevant measure for the playoffs recommendation is the Playoffs Sub Rating of the Real Coach Ratings System with the Phil Jackson adjustment included. The relevant measure for the regular season recommendation is the Regular Season Sub-Rating of the Real Coach Ratings System.
Note that after Phil Jackson retires (almost certainly in 2011) the Phil Jackson adjustment will be phased out. What will probably happen is that the adjustment will be cut by 10% each year. In 2010 and 2011, 80% of the effect from all Phil Jackson encounters is removed from each coach's score. For 2012 that removal percentage will probably be 70%, for 2013 it will probably be 60%, for 2014 probably 50%, and so on until it is completely eliminated. It is very unlikely that QFTR will ever again need to have an adjustment due to a Coach who is far better than any other.
EVALUATION SCALE FOR COACHES FOR THE NBA PLAYOFFS
--At least 25 playoff games must be coached for the evaluation to be valid and official.
--The measure used is the Playoffs Sub Rating of the Real Coach Rating System.
--The effects from 80% of coach’s games versus Phil Jackson are removed.
Absolute Highest Possible Recommendation: 1,200 or more
Very Highly Recommended: 900 to 1,199
Highly Recommended: 600 to 899
Recommended: 350 to 599
Neither Recommended nor Not Recommended: 100 to 349
Not Recommended: -150 to 99
Strongly Not Recommended: -450 to -151
Very Strongly Not Recommended: -750 to -451
Absolute Lowest Possible Recommendation: -751 and less
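For anyone who wants to apply the scale above programmatically, here is a minimal Python sketch. It assumes the coach already meets the 25 playoff game minimum and that the rating passed in already has the Phil Jackson adjustment applied; the function and variable names are just illustrative.

# Sketch: map an adjusted Playoffs Sub Rating to the playoff recommendation
# categories listed above. Thresholds are copied directly from the scale.

PLAYOFF_SCALE = [
    (1200, "Absolute Highest Possible Recommendation"),
    (900,  "Very Highly Recommended"),
    (600,  "Highly Recommended"),
    (350,  "Recommended"),
    (100,  "Neither Recommended nor Not Recommended"),
    (-150, "Not Recommended"),
    (-450, "Strongly Not Recommended"),
    (-750, "Very Strongly Not Recommended"),
]

def playoff_recommendation(adjusted_playoff_sub_rating):
    for floor, label in PLAYOFF_SCALE:
        if adjusted_playoff_sub_rating >= floor:
            return label
    return "Absolute Lowest Possible Recommendation"  # -751 and less

print(playoff_recommendation(1250))  # Absolute Highest Possible Recommendation
print(playoff_recommendation(-500))  # Very Strongly Not Recommended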
WHEN DOES QFTR GUARANTEE THAT A COACH WILL NEVER WIN THE QUEST FOR THE RING?
The relevant measure is the Playoffs Coach Score with the Phil Jackson adjustment included. The guarantee is NOT based on the Playoff Sub Ratings, which add the experience factor and any Championship points earned by coaches to the Playoffs Coach Score. Remember though that the Playoff Coach Scores are the dominant factor in the Playoff Sub Ratings. The Playoff Coach Scores average about 150 points less than the Playoff Sub Ratings.
GUARANTEE LEVEL: -750 or less
That is, QFTR guarantees that any Coach with a Playoffs Coach Score of -750 or less will never win The Quest for the Ring.
If after being added to the guarantee list a coach wins one or more playoff games that should have been losses, he will be removed from the list if the score becomes higher than -750.
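Expressed as a trivial Python sketch (using the Playoffs Coach Score, not the Sub Rating, exactly as described above):

# Sketch of the guarantee rule. A coach is on the guarantee list while his
# Playoffs Coach Score (Phil Jackson adjustment included) is -750 or less,
# and he comes off the list if later wins push the score above -750.

GUARANTEE_THRESHOLD = -750

def on_guarantee_list(playoffs_coach_score):
    return playoffs_coach_score <= GUARANTEE_THRESHOLD

print(on_guarantee_list(-800))  # True: guaranteed never to win the Quest
print(on_guarantee_list(-700))  # False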
EVALUATION SCALE FOR COACHES FOR THE NBA REGULAR SEASON
--At least 200 regular season games must be coached for the evaluation to be valid and official.
--The measure used is the Regular Season Sub Rating of the Real Coach Rating System.
Absolute Highest Possible Recommendation: 1,300 and more
Very Highly Recommended: 1,050 to 1,299
Highly Recommended: 800 to 1,049
Recommended: 550 to 799
Neither Recommended nor Not Recommended: 350 to 549
Not Recommended: 100 to 349
Strongly Not Recommended: -150 to 99
Very Strongly Not Recommended: -400 to -151
Absolute Lowest Possible Recommendation: -401 and less
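The regular season scale works the same way as the playoff scale, so a single lookup routine can serve both. Here is a minimal sketch with the thresholds copied directly from the scale above; everything else (names, structure) is just illustrative.

# Sketch: one threshold lookup serves both scales; only the cutoff table
# changes. Thresholds are copied from the regular season scale above.

REGULAR_SEASON_SCALE = [
    (1300, "Absolute Highest Possible Recommendation"),
    (1050, "Very Highly Recommended"),
    (800,  "Highly Recommended"),
    (550,  "Recommended"),
    (350,  "Neither Recommended nor Not Recommended"),
    (100,  "Not Recommended"),
    (-150, "Strongly Not Recommended"),
    (-400, "Very Strongly Not Recommended"),
]

def recommendation(rating, scale,
                   lowest="Absolute Lowest Possible Recommendation"):
    for floor, label in scale:
        if rating >= floor:
            return label
    return lowest

print(recommendation(800, REGULAR_SEASON_SCALE))  # Highly Recommended
print(recommendation(99,  REGULAR_SEASON_SCALE))  # Strongly Not Recommended

The same recommendation() function can be pointed at the playoff scale from the previous sub section; only the relevant minimum (200 regular season games or 25 playoff games) needs to be checked first.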
HOW TO INTERPRET DIFFERENCES IN RATINGS
The best way to explain this is with the aid of an example. We will use Larry Brown versus George Karl from the 2010 Real Coach Ratings Look Ahead Version, published in November, 2010. Rounded to the nearest whole number, Brown’s overall rating is 2,420. His Playoffs Sub Rating is 2,199 and his regular season Sub Rating is 221. George Karl’s overall rating is 405. His Playoffs Sub Rating is -648 and his regular season Sub Rating is 1,053.
Comparing directly:
Larry Brown / George Karl
Playoffs: 2,199 / -648
Regular Season: 221 / 1,053
Overall: 2,420 / 405
The reason this is a very good example to use here is that Brown and Karl are the two completely different types of coaches we often talk about at QFTR. Brown is a high quality playoffs coach whereas his regular season record is surprisingly poor. Karl is precisely the opposite: he is a very low quality playoffs coach whereas his regular season record is surprisingly good. Comparing two coaches who are not opposites the way these two are is easier.
The main thing and the most important thing to do is to look at the evaluations using the scales above:
Larry Brown / George Karl
Playoffs: Absolute Highest Possible Recommendation / Very Strongly Not Recommended
Regular Season: Not Recommended / Very Highly Recommended
Overall: There is no evaluation scale for the overall ratings; see above for an explanation.
QFTR strongly recommends that the playoffs ratings and recommendations be given priority over the regular season ones. In numerical terms, QFTR recommends that playoffs ratings be considered between 40% and 80% more important than regular season ones. Therefore, in this example QFTR would recommend Brown over Karl by a fairly wide margin.
Compare each coach’s most favorable evaluation and then separately compare each coach’s least favorable evaluation. In this example Brown’s worst evaluation (Not Recommended) is not as bad as Karl’s worst evaluation (Very Strongly Not Recommended). Karl’s worst is two notches worse than Brown’s worst. Also, Brown’s best evaluation (Absolute Highest Possible Recommendation) is one notch better than Karl’s best evaluation (Very Highly Recommended). Brown is ahead of Karl when you compare the higher of their evaluations AND when you compare the lower of their evaluations.
EYEBALLING NUMERICAL DIFFERENCES
What if you are looking at ratings and, for one reason or another, you are not checking the evaluation scales? In this sub section we'll give you some advice about how to interpret actual ratings and the differences between ratings.
Not counting once-in-a-century, all-time greatest playoff coaches like Phil Jackson, the overall range of the Playoffs Sub Rating is going to be from approximately -1,000 to 2,500. The range is 3,500 points. The average at any time is going to be roughly 100, and the median (not counting the zero ratings) is going to be roughly 0. Since coaching playoff games is a high level skill, the median score is lower than the average score.
To make quick eyeball evaluations, start with -1,000 and divide the range (of 3,500) into ten equal mini ranges of 350 points each. The first one would be from -1,000 to -651; the second one would be from -650 to -301, and so on. Assign a simple zero to 10 rating to each category:
-1,000 to -651 > 1
-650 to -301 > 2
-300 to 49 > 3
50 to 399 > 4
400 to 749 > 5
750 to 1,099 > 6
1,100 to 1,449 > 7
1,450 to 1,799 > 8
1,800 to 2,149 > 9
2,150 to 2,500 > 10
Scores less than -1,000 could be translated as zero while scores greater than 2,500 (such as with Phil Jackson) could be translated as “off the scale”. Now you can compare any number of coaches for the playoffs using very simple single numbers.
For the regular season, the overall range of the Regular Season Sub Rating is going to be from approximately -500 to about 2,000. The range is 2,500 points. The average at any time is going to be roughly 400, and the median is usually going to be roughly 200, which is the starting Sub Rating for all rookie coaches.
To make quick eyeball evaluations, start with -500 and divide the range (of 2,500) into ten equal mini ranges of 250 points each. The first one would be from -500 to -251; the second one would be from -250 to -1, and so on. Assign a simple zero to 10 rating to each category:
-500 to -251 > 1
-250 to -1 > 2
0 to 249 > 3
250 to 499 > 4
500 to 749 > 5
750 to 999 > 6
1,000 to 1,249 > 7
1,250 to 1,499 > 8
1,500 to 1,749 > 9
1,750 to 2,000 > 10
Scores less than -500 could be translated as zero while scores greater than 2,000 (such as with Phil Jackson) could be translated as “off the scale”. Now you can compare any number of coaches for the regular season using very simple single numbers.
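Here is a minimal Python sketch of this quick conversion. The range parameters come from the two descriptions above; everything else, including the function name, is just illustrative.

# Sketch of the "eyeball" conversion: divide the working range into ten
# equal mini ranges, assign 1 through 10, use 0 below the range and
# "off the scale" above it.

def eyeball_digit(rating, low, high, step):
    if rating < low:
        return 0
    if rating > high:
        return "off the scale"
    return min(10, int((rating - low) // step) + 1)

# Playoffs Sub Rating: roughly -1,000 to 2,500 in mini ranges of 350
print(eyeball_digit(400, -1000, 2500, 350))    # 5
print(eyeball_digit(-1200, -1000, 2500, 350))  # 0

# Regular Season Sub Rating: roughly -500 to 2,000 in mini ranges of 250
print(eyeball_digit(750, -500, 2000, 250))     # 6
print(eyeball_digit(2200, -500, 2000, 250))    # off the scale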
EYEBALL INTERPRETATION EXAMPLE
We’ll use Larry Brown versus George Karl again:
Larry Brown / George Karl
Playoffs: 2,199 / -648
Regular Season: 221 / 1,053
Simplified to the single digits, we have:
Larry Brown / George Karl
Playoffs: 10 / 2
Regular Season: 3 / 7
We could then (unofficially!) make an overall comparison. Let's weight the playoffs 50% more than the regular season, that is, apply a 1.5 multiplier to the playoffs digit. We get:
Larry Brown Overall: (10 X 1.5) + 3 = 18
George Karl Overall: (2 X 1.5) + 7 = 10
Therefore, unofficially and roughly speaking, Larry Brown is almost twice as good a coach as is George Karl.
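As a tiny sketch of this unofficial combination (nothing here is official, and the 1.5 multiplier is simply the middle of the 40% to 80% weighting range recommended earlier):

# Unofficial eyeball overall: weight the playoffs digit by 1.5 and add the
# regular season digit, exactly as in the example above.

def unofficial_overall(playoff_digit, regular_season_digit, multiplier=1.5):
    return playoff_digit * multiplier + regular_season_digit

print(unofficial_overall(10, 3))  # Larry Brown: 18.0
print(unofficial_overall(2, 7))   # George Karl: 10.0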
======= SECTION SIX: CAUTIONS INCLUDING THE WELL KNOWN EXPERIENCE GAP PROBLEM =======
Since the Real Coach Ratings system is essentially two systems / models combined only unofficially into one overall result, we will discuss cautions separately for the two separate models. For each we will discuss statistical error. All statistical models contain some statistical error, but the actual amount varies radically depending on (1) how good the model is, (2) how large any sample sizes used in the model are, (3) the real nature of what is being studied, especially how variable or "wild" the underlying reality is, and (4) the effectiveness of any quality control and results validation procedures. The good models have so little statistical error that you can rely on the results for years and years with more than a 99% chance of never being led astray.
STATISTICAL ERROR IN THE REGULAR SEASON SUB RATING MODEL
How good the model is can always be argued, and naturally the designer will be at least a little biased in favor of his or her model. The QFTR Real Coach Ratings Regular Season Sub Rating model is based on two primary foundations: experience and wins and losses. For these, the largest downward bias is against newer coaches who have from the beginning been coaching poor teams. The largest upward bias is in favor of coaches who are poor in the playoffs but who have been repeatedly given above average players to coach. Unfortunately, both of these biases are rather large and mostly unavoidable. This is one of the big reasons why the RCR system consists of the two sub ratings (regular season and playoffs) and why overall ratings are published but are not officially given a lot of weight in discussions.
The experience bias has been substantially reduced but not eliminated by progressively (in stages) eliminating experience points available to long veteran coaches. Also, at the low end, rookie coaches are given 200 games worth of experience from the get go.
A very substantial amount of experience bias remains, however. If all of the experience bias were eliminated, then the experience factor would be meaningless. That would not make sense, because in many cases coaches do get a little better with experience.
The problem is that there are a fairly large number of exceptions to the rule that coaches get better with experience. A minority of coaches do not get substantially better with experience, either because they are brilliant coaches to begin with who can’t possibly get substantially better, or because they learn some wrong things from their experience and so they on net stay the same or they actually get worse with experience. Unfortunately, there is no known valid way to determine on a case by case basis how experience changes various coaches. Unfortunately you can not simply use changes in win-loss percentages over time because (1) there are other variables that could explain all of those changes and (2) in most cases there are not enough such changes to constitute an adequate sample size.
With regard to the experience and the wins and losses foundations, the bad news is that we are left with a moderate amount of bias (as just discussed) but the good news is that we are left with no sampling error, simply because no samples are needed because one hundred percent of the information is available and is used. By contrast, other possible regular season coaching variables, such as opinions of sports writers, opinions of players, etc. are subject to very high bias and also high statistical sampling error; QFTR would never condone usage of opinions in any of our models regardless of how many of them we could get and regardless of which opinions we could get. The very fact that mere opinions are not valid is why we spend all the time on ratings systems such as RCR in the first place.
Unfortunately, moderate variability is believed to exist in how variable or "wild" the real nature of regular season coaching actually is. While you are never going to see a completely incompetent coach coaching an NBA team in the regular season, at the low end you will see moderately or "somewhat" incompetent coaches from time to time, and at the high end there will be brilliant coaches from time to time.
Very unfortunately, the RCR system by itself can NOT automatically identify brilliant regular season coaches who will be great playoff coaches. The regular season sub rating is not a fine enough instrument to accomplish that even if an attempt is made to flush out bias when looking at a particular coach. Further, if a brilliant coach is stuck with especially poor players, there will be little if anything that even he can do that will show up in anything you can easily see.
However, if you are looking for a great coach, RCR can in some cases point you in the right direction. For example, the playoffs sub rating might show that a brilliant coach has won one or two playoff games he was not supposed to win in just one or two series. Then you might be aware that the team he is coaching is doing better than most people expected. These two pieces of evidence (neither of which come from the regular season sub rating system) would strongly suggest (but would not prove beyond a shadow of a doubt) that you have discovered a great coach.
The regular season sub rating receives the same high level of general quality control that all QFTR systems and ratings do. General quality control primarily means that the model as a whole and everything specifically in it are continually reviewed to make sure they match everything known about the underlying reality (in this case, pro basketball coaching). Quality control also means that all correlations in the model are supposed to closely match correlations in the real world. Recalibration and iteration are among the primary tools used to achieve quality control.
Note that, unlike many other basketball statistical models, QFTR models and systems are subject to continual revisions and expansions. However, as of late 2010, QFTR asserts that both the regular season and the playoff sub rating components of the RCR system are well and extensively developed and will not in the future be subject to major overhauls or major expansions. More specifically, most or all variables that can correctly be incorporated have already been correctly incorporated. Future changes will most likely be limited to relatively minor adjustments that will change results only a little.
Variable validation is where a specific key result has an average value (or perhaps some other statistical attribute) that is in accordance with model design. In the moderately complicated models QFTR uses, validation of key variables is often possible. But in most simple models, no such validation is possible. The regular season sub rating model is (intentionally) simple and there are no variables in it that can be or need to be statistically validated.
STATISTICAL ERROR IN THE PLAYOFFS SUB RATING MODEL
How good the model is can always be argued, and naturally the designer will be at least a little biased in favor of his or her model. The QFTR Real Coach Ratings Playoffs Sub Rating model is based first and foremost on efficiency of teams, which is extremely highly correlated with NBA playoff results. The model sets playoff expectations based on those efficiencies and then looks at actual results versus those expectations for each coach. QFTR is extremely confident that this is a very valid and strong model for correctly comparing coaches with respect to playoffs coaching.
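The core bookkeeping of that comparison can be sketched as follows. To be clear, the actual QFTR scale that converts a difference in net efficiencies into expected series wins is not published, so expected wins are treated here as given inputs, and the points-per-win weight is purely hypothetical.

# Sketch of the core idea only: credit or debit a coach according to how far
# actual playoff results land above or below efficiency-based expectations.
# Expected wins per series are taken as inputs because the real translation
# scale is internal to RCR; the 100 points per win is a made-up weight.

def playoff_score_sketch(series_results, points_per_win=100):
    # series_results: list of (expected_wins, actual_wins), one per series
    return sum((actual - expected) * points_per_win
               for expected, actual in series_results)

# Hypothetical career: beat expectations once, fell short twice
print(playoff_score_sketch([(2.5, 4), (3.1, 2), (1.4, 0)]))  # about -100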
WARNING: STATISTICAL ERROR IN PLAYOFF SUB RATINGS FOR COACHES WHO HAVE COACHED FEWER THAN 25 PLAYOFF GAMES MAY BE EXCESSIVE
This is the most important caution and warning! For coaches who have coached fewer than 25 playoff games, playoff sub ratings are calculated and published despite being subject to possibly excessive statistical error, but no official recommendations are given. Therefore, if you use playoff sub ratings for coaches who have coached fewer than 25 playoff games in any way, do so with extreme caution. For these coaches, variances between expected and actual playoff results could be caused mostly or entirely by injuries rather than coaching. Therefore, to use sub ratings for coaches who have coached fewer than 25 playoff games, you would have to research which players didn't play in the series to see whether your inexperienced playoff coach was lucky or not with respect to injuries (to his players and to the players of his opponents). The "manual injury adjustment" of the Real Team Ratings system can be used to do this. See the User Guide to Real Team Ratings.
Except for coaches who have coached few playoff games, especially fewer than 25, the variances between expected and actual playoff results are going to be due mostly to coaching. The other possible causes (injuries, and players playing better or worse than they did during the regular season for reasons unrelated to coaching) are going to mostly cancel themselves out statistically for coaches who have coached more than 25 playoff games, and are going to virtually completely cancel themselves out for coaches who have coached more than 50 playoff games. As the number of playoff games coached rises from 25 to 50, little of the difference will be due to the other factors and most of the difference will be due to the coaching. As the number of playoff games coached rises above 50, essentially all of the difference between expected wins and actual wins will be due to the coaching. Therefore, QFTR relatively confidently issues official recommendations for all coaches who have coached between 25 and 50 playoff games, and QFTR extremely confidently issues official recommendations for all coaches who have coached at least 50 playoff games.
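Those thresholds can be summarized in a few lines. This is a sketch of the policy just described, not of any published QFTR code.

# Validity tiers for the playoffs sub rating, by playoff games coached.

def playoff_rating_status(playoff_games_coached):
    if playoff_games_coached < 25:
        return "published, but no official recommendation; adjust for injuries"
    if playoff_games_coached < 50:
        return "official recommendation, issued with relative confidence"
    return "official recommendation, issued with extreme confidence"

print(playoff_rating_status(30))  # relative confidence tier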
The next thing to look at is how variable or “wild” what we are looking at is in the real world. One of the very most important themes of the entire QFTR project is that coaches in the playoffs vary by more than most people think and by enough to easily change the outcome of close series. At the very least it can be said that coaches in the playoffs have a fairly high variability: the worst of them are much worse than the best of them. Roughly speaking, the worst playoff coach needs at least one more star player than the best playoff coach needs in order to have an even chance of beating the best coach.
But with respect to cautions, the real issue with regard to variability is not with what we are focused on as the end product but with other "wild" factors that could explain differences in ratings. Unfortunately, injuries are a very large wild factor. As already explained, this factor invalidates playoff sub ratings for coaches who have coached fewer than 25 playoff games and means caution is in order regarding playoff sub ratings for coaches who have coached between 25 and 50 playoff games.
In order to validly use playoff sub ratings for inexperienced playoffs coaches, you must adjust the ratings for injuries. Using the manual injury adjustment as shown in the User Guide to Real Team Ratings is recommended for coaches who have coached between 25 and 50 playoff games, and it is required for coaches who have coached fewer than 25 playoff games. QFTR has no specific procedure for this at this time; you have to devise your own adjustments to the playoffs sub ratings based on the injuries you find out about.
Another possible wild factor is players playing better or worse than they did in the regular season for reasons unrelated to the coaching. QFTR research indicates that this is a relatively minor factor which would not cause much statistical error at all except possibly for coaches who have coached fewer than 10 playoff games, and even for these coaches it would be unlikely.
Other than injuries and players better or worse “on their own”, there are no other known factors (other than coaching, obviously) that could explain differences between expected and actual results in pro basketball playoff games.
The playoffs sub rating receives the same high level of general quality control that all QFTR systems and ratings do. General quality control primarily means that the model as a whole and everything specifically in it are continually reviewed to make sure they match everything known about the underlying reality (in this case, pro basketball coaching). Quality control also means that all correlations in the model are supposed to closely match correlations in the real world. Recalibration and iteration are among the primary tools used to achieve quality control.
As explained above for the regular season model, variable validation is where a specific key result has an average value (or perhaps some other statistical attribute) that is in accordance with model design. In the moderately complicated models QFTR uses, validation of key variables is often possible, but in most simple models no such validation is possible.
For the moderately complicated playoffs sub rating model, a validation on an extremely key variable is possible and has been done. This variable, expected versus actual wins for away teams, is at the core of the model, and it should have an average value of zero as soon as the number of playoff games studied is large enough to be rid of any significant sample size error. The database contains enough playoff games that any error from sample size is extremely small, so that is not a problem. The reason the value should be zero is that the scale which translates differences in net efficiencies into expected numbers of wins is correct only if real world results produce a long term average of:
Expected number of wins minus actual number of wins of the away teams equals zero (or at least very close to zero).
Validation was performed, and the initial result was that the scale and the model were slightly in error. After recalibration, validation was redone. Now, the sum of all expected wins minus the sum of all actual wins, divided by the number of playoff series, equals 0.108. This is extremely close to zero, and additional recalibration is neither required nor recommended. However, later in 2011 another recalibration may possibly be performed. Alternatively, the home court adjustment may be very slightly tweaked.
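In other words, the validation statistic is simply the average per-series gap between expected and actual away wins. A minimal sketch (with made-up example data) looks like this:

# Validation check: across all playoff series in the database, the average
# of (expected away wins - actual away wins) per series should be very
# close to zero. QFTR reports 0.108 after recalibration.

def away_win_validation(series_data):
    # series_data: list of (expected_away_wins, actual_away_wins), one per series
    total_gap = sum(expected - actual for expected, actual in series_data)
    return total_gap / len(series_data)

# Made-up three series example, just to show the arithmetic
print(away_win_validation([(1.8, 2), (2.4, 2), (0.9, 1)]))  # about 0.033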
Other less important requirements for validity are that the overall range (of the scale) is correct and that the rates of change in various sections of the range are correct. The overall range has been verified as correct; specifically, if the difference in net efficiencies is twelve or greater, it is essentially impossible for the lower team to win even one playoff game in a series (unless there is a major injury to the higher team).
Although the rates of change in various sections of the range have not been completely and exactly verified because it is extremely difficult and time consuming to do so, it is unnecessary to do this because any possible error due to the rates of change in sections of the range is very small. Specifically, the highest possible error would translate into approximately five points (up or down) for a coach in a playoff series.
Now we will proceed to a few other cautions.
BE CAREFUL REGARDING THE VERY LARGE TIME SCALE OF THESE RATINGS
Keep in mind that each coach is rated using information from every season that he has ever been a head coach in the NBA. Some coaches will currently be substantially better than their overall career ratings indicate. On the other hand, it is very possible that a small number of current coaches could be substantially worse than their overall career ratings indicate. Much more likely would be that a very small number of coaches would be just slightly worse right now than they have been on average.
While I am on this subject, I want to warn you to not make the assumption that all or even most coaches get better as they accumulate more and more experience. Most coaches who have coached for less than five seasons will be getting at least a little better from one year to the next. Many coaches who have coached for between five and ten seasons will be getting a little better from one year to the next. Beyond ten years, very few coaches will be getting even a little better from one year to the next.
In any event, there is no empirical evidence I know of to back up a sweeping generalization stating that coaches always get better with experience, nor is that assumption obvious or even likely to be true most or much of the time.
It is very plausible that most coaches do not really improve that much after roughly five or six years of experience. One thing that might prevent the more experienced coaches from automatically getting better is that many of the most experienced coaches may not have completely updated their beliefs and coaching schemes to reflect the current ways of basketball. Some older coaches may not have fully adjusted to rule changes of recent years, for example. They may be hurting their teams a little or even a lot by persisting with strategies and tactics that used to work well years ago but are not working very well in the NBA in 2011 and 2012.
THE INFAMOUS WIDELY DIFFERENT AMOUNTS OF EXPERIENCE PROBLEM
In the very early days of RCR back in 2007, it was feared that the widely different amounts of experience among NBA coaches would doom the system to either total failure, or at the very least, to being much less valid and reliable than Real Player Ratings are. This problem originates in the huge discrepancies in the amount of experience between long-term veteran coaches and much younger coaches. To some extent this makes comparing NBA coaches like trying to compare apples and carrots rather than like trying to compare various apples.
In general, some points of comparison will be biased in favor of newer coaches while other points of comparison will be biased in favor of long veteran coaches.
As recently as 2009 QFTR was still very worried about this. But after several years of thinking about the problem and introducing changes to RCR in response to it, we think we have now largely “solved” it. That is, we think now that the ratings and the evaluations based on the ratings that we publish are fair and unbiased to all coaches regardless of their experience level.
The following aspects of RCR largely solve the “apples and carrots problem”:
(1) The experience points available for regular season games (for the regular season sub ratings) differ depending on the experience level of the coach. Long veteran coaches get virtually no experience points; newer coaches get the maximum. Coaches in the middle get about halfway between the maximum and the minimum. (A rough illustrative sketch of these experience rules follows this list.)
(2) Rookie coaches are given 200 experience points from the get go, which eliminates the experience bias that would otherwise exist against those brand new coaches.
(3) No experience points are given for any playoff game coached beyond 200 playoff games. This cuts down on the bias in favor of the long veteran coaches who have coached the most playoff games.
(4) No evaluation scales, no official evaluations not using scales, and no official recommendations are produced or given for the overall ratings. At this time, only unofficial usage of overall ratings is done. This is obviously a powerful way to respond to the problem; it’s basically a divide and conquer strategy, where the overall ratings exist but are largely ignored in favor of the two sub ratings that add up to the overall ones. The main reason why ignoring the overall ratings is advised is that certain long veteran coaches are poor playoff coaches but they are decent to good regular season coaches and they also have a lot of regular season experience points. Therefore, the overall ratings of these coaches are very misleading when it comes to the playoffs.
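Purely for illustration, the experience rules in items (1) through (3) could be coded roughly as follows. The per-game rates and the career-stage cutoffs below are made up, since the actual RCR point values are not published; only the 200 point rookie head start and the 200 playoff game cap come from the list above.

# Illustrative sketch only: real RCR per-game rates and cutoffs are not
# published. The 200 point rookie head start (item 2) and the 200 playoff
# game cap (item 3) come from the list above; everything else is made up.

ROOKIE_HEAD_START_POINTS = 200
PLAYOFF_EXPERIENCE_CAP_GAMES = 200

def regular_season_experience_points(games_coached, full_rate=0.5,
                                     early_games=400, mid_games=1000):
    # item (1): full credit for early-career games, half credit for
    # mid-career games, virtually nothing beyond that.
    early = min(games_coached, early_games)
    middle = min(max(games_coached - early_games, 0), mid_games - early_games)
    return ROOKIE_HEAD_START_POINTS + early * full_rate + middle * (full_rate / 2)

def playoff_experience_points(playoff_games_coached, points_per_game=1.0):
    # item (3): no experience credit for playoff games beyond 200
    return min(playoff_games_coached, PLAYOFF_EXPERIENCE_CAP_GAMES) * points_per_game

print(regular_season_experience_points(100))   # newer coach
print(regular_season_experience_points(1300))  # long veteran: capped credit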
Even though QFTR does not officially use the overall Ratings, we unofficially do, and may officially use them if and when a valid way to precisely calibrate the regular season and playoff sub ratings becomes available. The following cautions apply to the overall ratings.
CAUTIONS REGARDING THE OVERALL REAL COACH RATINGS
Where we are right now on the overall ratings is that a small piece of the experience discrepancy problem is still left. In a nutshell, in the overall ratings we decided to take the risk that the problem is not completely solved so as to avoid being overly harsh toward certain long-term coaches. "First, do no harm..." Although many hours have been spent trying to solve the problem, and although much progress has been made, the RCR system still can not completely bridge the gap created by the huge differences in experience.
The worst of the long-term veteran coaches most likely have overall ratings that are higher than what they really should be. If a Coach has received some "lucky breaks" by not being fired after bad losing seasons, and/or after bad losses in the playoffs, and he has over the years now accumulated 1,000 or more regular season games and 100 or more playoff games, his rating will very likely still be distorted on the high side relative to the other coaches. This is because the long-time veteran Coach, who could have been fired a long time ago but was not fired, will max out on the experience points, and he will also have a few winning seasons to go with the losing seasons. The sum of the maximum experience points plus any positive net from winning seasons will tend to more than offset all the losses from the year(s) he might have been fired, despite the heavy negatives that losses carry.
Another way of thinking about this issue is that assuming a long-term veteran Coach has a too high rating due to the above, keep in mind that Coach would not even be in the ratings had he actually been fired. Coaching a professional sports team is about the worst job in existence for job security, since the vast majority of coaches are involuntarily fired. If all coaches who are “supposed to be” fired were fired, this distortion would disappear from the RCR system!
Yet another way of focusing on this problem is realizing that pro basketball coaches are fired or not fired based on different criteria, because managers and owners of pro teams do not all think in similar ways.
We can not simply remove experience from the set of factors, since in every single career that exists, the more experience you have, the better you tend to be. Moreover, even if we did reduce or remove the experience factor, the same problem would still be there in the case of coaches who probably should have been fired but were not, and who then end up fortunately coaching very skilled teams in subsequent years, thus piling up wins with those teams.
In other words, we have no choice but to proceed as if all coaches face the same criteria as to whether they are fired or not, even though we know that some coaches, especially veteran coaches, are treated much more leniently than others.
CAUTION ABOUT THE AGE OF COACHES
One other thing to keep in mind about long-term veteran coaches (the ones with more than 1,000 regular season games coached) is that once such a Coach gets older than 60, 65, and then maybe even 70 years old, that Coach's abilities will probably be less than they were when he was younger. By contrast, almost all coaches with little experience are under the age of 55.
For example, Utah Jazz Coach Jerry Sloan turned 68 years old on March 28, 2010, so it is possible that he is a little too old now for maximum effectiveness.
The bottom line is that there will be a small number of older, veteran coaches whose ratings are misleading on the high side. Unfortunately, we are unable to completely correct for this or to properly estimate the amount of the unavoidable distortion at this time. So we advise you when looking at the ratings to make sure you give the benefit of the doubt to younger coaches who seem to have good potential.
PROBABLE DOWNSIDE DISTORTIONS IN THE OVERALL RATINGS
If you have a younger coach who has just started out, and he has a bad team to start with (and a lot more new coaches start with bad teams than good ones) then his rating will be much lower than it will be in future years if he avoids getting fired and in the future gets much better teams to work with.
However, it is also very possible that in most cases the worst teams get only the medium and poor coaches, that in other words the really good coaches never have to start out coaching a bad team, so that any downside distortions are small and mostly moot points.
Here is an interesting excerpt from what was probably the very first User Guide for Real Coach Ratings, written in 2008 when I tackled the big experience differences problem for the first time:
“As I was working on this I often had a sinking feeling that trying to fairly compare coaches with more than 10 years of experience with those with less than 2 years experience would be in the end impossible. But I persevered and scrapped and fought my way to the goal line and got it done. I achieved all of the balancing that I needed to achieve. Specifically, for example, I kept the points given for experience within reason, while making sure that regular season and playoff losses were penalized to the full extent they should be.”
FUTURE CHANGES TO REAL COACH RATINGS
Are the factors set in stone forever and ever? No, and unlike many sites that make use of statistics, QFTR will make radical changes in models and procedures whenever new basketball discoveries are made. But the odds are that changes to the RCR system will be relatively minor in future years, with one notable exception. As you may already be aware, QFTR will try in the future to develop a valid way to combine the regular season sub ratings and the playoff sub ratings, so that the overall ratings are considered completely valid and official. The only way to do this is to achieve a total solution to the experience discrepancy problem.
In summary, although this is not a perfect system, it is at the very least a very good system, and it is light years ahead of having no system at all with which to fairly compare coaches with radically differing amounts of professional basketball head coaching experience. In fact, as surprising as this may sound, RCR is literally the only publicly published coach rating system that is based on sound statistics, sound statistical modeling, real information, and extensive quality control.