Thursday, September 10, 2009

The College Football Blog: Thoughts on Ranking Systems

Systems of Ranking Teams in College Football

Over the past few years I’ve been trying to figure out the best way to rank college football teams in the absence of a playoff and balanced scheduling. While I think most people would agree that top 25 polls should be for reference and entertainment purposes only, the reality is that two of the major polls make up two-thirds of the BCS formula, which is used to determine the “postseason.” So whether we like it or not, the polls do matter. When you consider this truth, it seems worse than silly that there are no clear guidelines as to how the voters are supposed to be ranking the teams. Are the teams to be ranked based on who has “earned” it the most? Is it the team playing the best right now or the team that has played the best over the course of the season? Is it the team that the individual voter believes is the best, regardless of whether or not they’ve proven it on the field through a difficult schedule? It’s not at all clear.
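For reference, the BCS formula of this era averaged the two human polls with a composite of the computer rankings, each component worth a third. Here’s a quick sketch of that arithmetic; the function name and the percentages plugged in are my own invented placeholders, not real poll numbers:

```python
# Sketch of a BCS-style composite: two human polls plus a computer
# composite, each weighted equally at one third. Each input is a team's
# share of its maximum possible points, from 0.0 to 1.0.
def bcs_score(harris_pct, coaches_pct, computer_pct):
    return (harris_pct + coaches_pct + computer_pct) / 3

# Made-up example values for a hypothetical #1 team.
composite = bcs_score(0.95, 0.93, 0.90)
```

Notice that the two human polls together contribute two-thirds of the final number, which is the whole reason the voting behavior described below matters.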

The next time there is a debate about the #1 or #2 or #5 team in any of these polls, pay attention to the myriad criteria that people use to justify why a particular team should be ranked at a certain position. If you think about it, it’s pretty stupid to hold any sort of heated debate about a list of rankings when 10 people could approach the task of ranking the teams and wind up using 10 different sets of criteria.

There are a number of different types of ranking systems. I’m not talking about the AP, the Coaches, the BCS, etc. I’m saying there are a number of different ways that voters rank teams when they vote in the AP Poll or Coaches Poll or Veteran Pitching Coach Dick Pole. The most basic and easily applied strategy is the eye test. You study the teams; you look at the scores and stats; you watch the games; and then you decide whether you think one team is better than another. This is the sort of poll I use to make up my Power Rankings. I think this is the best sort of ranking system to use when the rankings have no effect on anything of importance. However, I think this can be the worst sort of system to use when determining anything that matters (like which teams play for the National Title). Nobody really knows what’s going to happen when two teams take the field. People may act like they have a good enough handle on things to be able to determine which teams are better than others, but nothing compares to letting things get figured out on the field. No matter how sure people may feel that they know who the best team is, they could always, always be proven wrong if there was one more week or one more game to be played.

There are slightly over a billion examples to support these claims, but I always cite the pre-game and post-game coverage of the 2004 Orange Bowl National Championship Game, when the Trojans slaughtered Oklahoma 55-19 to win the BCS National Championship (which by the way is their one and only BCS National Title, something most seem to have forgotten due to the hoopla surrounding the 2005 NC game between Texas and USC, when the Trojans were said to be going for a 3rd NC in a row, despite not even having played in the NC game in 2003). The two teams had begun the season ranked #1 and #2 and they had both gone 12-0 to reach the title game, with USC ranked #1 and Oklahoma #2. The Sooners had beaten 3 ranked teams during the year; USC had beaten 2 ranked foes. There was no clear favorite going into this eagerly anticipated matchup; you could find lines favoring either team by ½ to 3 points. Just before kickoff, ESPN’s main CFB crew stated their final thoughts on the game. I don’t have the quote verbatim, but the most recognizable college football analyst on ESPN, and perhaps one of its most respected, stated that if he had to go one way or the other he would lean towards taking Oklahoma to win by a field goal. There were different views on both sides. The bottom line is that there was no clear consensus on which team was going to win the game; no consensus on which team was better. Then Oklahoma went out and shit themselves and USC carved them up in a blowout reminiscent of a pre-salary cap Super Bowl. The performance on the field left no doubt which team was superior between USC and Oklahoma; clearly the Trojans were better and they were seen as the worthy National Champions.

However, it wasn’t all perfect for the BCS backers. USC wasn’t the only team that finished the year 13-0. The Trojans weren’t even the only team from a power conference who ended up 13-0. In fact, the champion of the most respected conference in football, the SEC, had finished the year undefeated but hadn’t cracked the top 2 and therefore was excluded from the title game. Instead, #3 Auburn had played Virginia Tech in the Sugar Bowl, winning 16-13 to finish 13-0. The Tigers had not begun the year with great expectations and they were totally overshadowed by the two preseason favorites who had gone wire to wire to set up an epic title bout. However, Auburn had gone the entire season without a loss, playing an SEC schedule, winning the SEC Championship Game, and winning the Sugar Bowl. After the Orange Bowl when the post-game analysis turned to the idea of Auburn fans feeling that they had a right to claim part of the National Title, that aforementioned highly visible ESPN analyst sneered at the notion. He put the matter to bed, stating that if Auburn fans wanted to have their own private NC party that was fine, but there was no doubt who the #1 team in the country was, and there was no need to see any other games, specifically an Auburn-USC matchup. This was stated with confidence and assuredness, and it probably summed up the feelings of a huge portion of the sporting public. But I couldn’t help noticing that this was the same chap who had gone with Oklahoma over USC a few hours earlier. Now he was saying that there was no need to see what would happen if Auburn played USC because he and everyone else who was paying attention that night already knew the answer.

Incidentally, if I had to pick one team or the other to win a head to head matchup between 2004 Auburn and 2004 USC, I’d probably take USC, but the Tigers had beaten more ranked teams (5) than USC (3) and they had played a conference championship game (which USC had not). The one common opponent between USC and Auburn was Virginia Tech, who each team had beaten on a neutral field, with USC winning 24-13 and Auburn winning 16-13. To make a long story short, I just don’t think the “I think this team would beat that team” approach to ranking teams is a good one unless the poll is just for banter and reflection and not for determining the postseason matchups.

Anyway, clearly some voters do use the above strategy as a guide to ranking teams. Another popular system is one that looks strictly at the facts and attempts to determine which teams have had the better season. This system relies less on trying to predict which team would beat another and focuses more on a team’s “résumé”: which team has “earned it” on the field through their play rather than simply being judged to be good. The problem that one immediately runs into is that in order to judge how impressive a team’s season results are, one must have a good idea of the quality of the competition. And there are a lot of different teams, different settings, and different situations.

Within this “résumé” system there is the whole idea of trying to determine which teams have “earned” their spot or “proven” themselves worthy by virtue of playing a challenging and meaningful schedule. Right away adding this element does two things: it allows for teams from power conferences to lose a game and make up for it by having played a number of good teams; and it automatically puts all non-BCS conference and lower BCS conference (Big East, ACC) teams at the back of the line. Then you have to decide which teams count as making a schedule tough and which ones make a schedule soft. This is a challenge in itself.
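To make the idea concrete, here is one hypothetical way a “résumé” score could work: credit each win by a guessed opponent quality, so that a one-loss team with a brutal schedule can outscore an unbeaten team with a soft one. The function and the quality numbers are my own invention, not anything the actual polls use:

```python
# Hypothetical "resume" score: each win earns credit equal to a guessed
# opponent quality (0.0 = terrible, 1.0 = elite), and each loss costs
# more the worse the opponent was. All quality values below are made up.
def resume_score(wins_vs_quality, losses_vs_quality):
    credit = sum(wins_vs_quality)                       # good wins count more
    penalty = sum(1.0 - q for q in losses_vs_quality)   # bad losses hurt more
    return credit - penalty

# One-loss team with a tough slate vs. an unbeaten team with a soft one.
tough_slate = resume_score([0.8, 0.7, 0.3], [0.9])      # 1.8 - 0.1 = 1.7
soft_slate = resume_score([0.3, 0.3, 0.4, 0.4], [])     # 1.4 - 0.0 = 1.4
```

Even this toy version runs straight into the problem from the paragraph above: somebody still has to supply those opponent-quality numbers, and that’s the whole fight.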

That actually brings me to a point that I don’t feel has been made often enough by analysts and other media folks. If you just look at how the college football schedule is built, how the whole landscape of the season unfolds, you realize that it’s just not well suited to determining national rankings without the use of a tournament or playoff. For the most part, teams now play at least 1 or 2 legitimate out of conference opponents a season, but even these games are often not easily used as cut and dried evidence. The overwhelming majority of these games are played early in the season and played at the home location of one team or the other. In any given season, the chances a team has of beating Penn State are going to vary greatly depending on whether or not the game is played at Beaver Stadium. Playing a team from the state of Florida in early September is different from playing them up north in November. Also, teams can change dramatically from week 1 to week 14. The point is that these inter-conference battles don’t always provide great examples to use when determining the strength of each team.

But the bigger point is that this inter-conference portion makes up less than a third of the college football schedule, and teams spend the overwhelming majority of the season fighting amongst teams from their own conference. To have all the teams in the land play amongst themselves for 2 and a half months, and then pick and choose the best teams from the entire country and attempt to rank them 1 to 15 is pretty silly when you think about it. How can you judge the importance of a Big Ten team beating another Big Ten team if you aren’t sure how good the teams in the Big Ten are compared with teams from the Big East, ACC, SEC, and Big XII?

The season would make almost perfect sense if you were going to have teams play a conference schedule in order to determine the top teams from each conference and then have those top teams square off against each other to determine a “national” champion. But this isn’t what happens. Teams play conference schedules all season long and then people attempt to determine which two teams should play for the championship of college football, and the rest of the teams are pretty much randomly paired off into essentially meaningless “season finales.”

The system of ranking teams based on the “résumé” often clashes with another system of ranking guidelines. This system borrows from the “eye test” system and the “résumé” system and combines these two strategies with the added factor of placing greater value on how a team is playing at the moment or in the recent past. This is the “peaking” theory or the “dangerous team” theory. Subscribers to this strategy are often heard talking about some team that they supposedly “would not want to play right now.” The most obvious thing that this sort of system does is place greater importance on the latter part of the schedule and less importance on the early portion of the schedule. This leads to the idea of “losing early enough” or losing “at the right time.” The fans of this system of ranking teams are going to clash often with fans of the “résumé” guidelines because part of the résumé is deemed more important than another part. It’s virtually impossible for people to accept that a team could lose one week and then come back and beat any other team in the country the next week, or the week after, or the week after that.

On the other hand, if a team reels off 9 straight wins it becomes easy for some voters to see a bad loss early in the year as if it happened to a totally different team that no longer exists. Inevitably one has to start thinking about injuries and other ever-developing situations. And then there’s the question of just why does it matter which team is playing the best at the moment that the regular season ends? It matters (seemingly) in the NFL because that team would perhaps head into the playoffs on a roll which they might be able to ride all the way to winning the Super Bowl. But such a tournament doesn’t exist in college football.

Perhaps my least favorite aspect of the traditional ways of ranking teams these days is the “Top 40 Rotation.” Voting for the top 25 in one of these polls is often thoughtless and careless to the point that teams that lose are dropped down and teams that win or don’t play are moved up regardless of any other factor besides the previous week’s rankings. This goes on from week to week in a continuous manner, creating a rotation of teams up and down, in and out, around and around. The heavily intra-conference nature of the college football season makes this even worse because situations arise where it doesn’t matter what happens on a given weekend, 3 or 4 teams from a certain conference are going to move into or up the rankings, and 3 or 4 teams from that conference are going to move out of or down the rankings. There’s never much thought given to the idea that a team that loses within one conference might still be better than a team that wins in another conference. For example, say there is a 3-1 team from the SEC ranked 12th and a 3-0 team from the ACC loses to drop to 3-1 and falls from 10th to 19th in the rankings. Over the following two weeks, the SEC team loses to the #3 team at home and the #7 team on the road to fall to 3-3, while the ACC team has a bye week and then beats Duke at home to move to 4-1. On the following Monday, is there really any reason for the 4-1 ACC team to be ranked 15th and the 3-3 SEC team to be ranked 24th? If you consider this matter for 20 seconds or so you’ll see that these rankings should be stripped of all their influence on anything other than sports talk radio and general water cooler discussion.
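Just to show how mechanical this rotation really is, here’s a toy model of it in a few lines of code: losers drop a fixed number of spots, everyone else slides up to fill the space, and opponent quality never enters into it. The team letters and the drop size are invented for the example:

```python
# Toy model of the thoughtless "Top 40 Rotation": remove each losing
# team, let everyone below slide up, then reinsert the loser a fixed
# number of spots lower. No opponent quality, no schedule context.
def rotate_poll(poll, losers, drop=6):
    new = [t for t in poll if t not in losers]      # winners/idle slide up
    for team in (t for t in poll if t in losers):
        spot = min(poll.index(team) + drop, len(poll) - 1)
        new.insert(spot, team)                      # loser lands `drop` lower
    return new

# Hypothetical top 8: team "C" loses and falls three spots, everyone
# between slides up one, and nothing else about the week matters.
week2 = rotate_poll(["A", "B", "C", "D", "E", "F", "G", "H"], {"C"}, drop=3)
```

Run the SEC/ACC scenario above through a rule like this and you get exactly the Monday poll the post describes, which is the point: the output is a function of last week’s list and nothing else.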

Anyway, there are countless other systems used as guidelines to ranking the top teams. I’m going to use a few of these different systems to rank the teams during the course of the year. My power rankings are easily understood. They’re based on which team I think would win a head to head matchup on a neutral field with both teams at close to full strength (outside of season ending injuries). This season I’m also going to do a weekly “Season Résumé” top 10 that will be based on each team’s body of work during the season as it progresses. For this first week it will be a little weird because no team has played more than 1 game and because we don’t know much about opponent strength yet. Early on I’m going to have to rely more on some preconceptions but that will decrease as the season goes on. Later in the season I’m going to start doing a weekly “Dangerous Team” top 10 that will rank the “eligible” National Championship Game contenders based on who is the hottest at the moment. For now, here’s the first ever Season Résumé Top 10.

1. Florida 1-0
2. Alabama 1-0
3. Oklahoma State 1-0
4. BYU 1-0
5. USC 1-0
6. Mississippi 1-0
7. Boise State 1-0
8. Cal 1-0
9. Cincinnati 1-0
10. Miami 1-0
