The College Football Playoff Selection Committee released its first set of rankings for the 2024 season tonight, with Oregon, Ohio State, Georgia, Miami (FL), and Texas making up the top five.

In a world increasingly driven by technology, automation, and AI, it is striking that the highest level of college football competition remains in the hands of thirteen human beings. Each week until December 8, their collective judgment, rather than algorithms or machines, will shape the path to the national championship.

The College Football Playoff Selection Process

For the first time ever, the College Football Playoff will feature 12 teams, introducing new dynamics into college football’s highest level of competition. The four highest-ranked conference champions will be seeded 1 through 4 and earn first-round byes. The next highest-ranked conference champion, plus the seven highest-ranked remaining teams, will play in the first round as seeds 5 through 12. In an exciting twist, the better-seeded teams in those first-round matchups will host the games at their home stadiums, adding an element of campus excitement that sets the college game apart from its professional counterpart. The expanded bracket more closely resembles the FCS postseason, bringing a more comprehensive, multi-round competition to the sport’s top teams.
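For readers who prefer to see the mechanics spelled out, here is a minimal Python sketch of the seeding and bracketing rules described above. The function and its inputs are hypothetical, and it ignores edge cases such as a fifth conference champion ranked outside the committee’s top 25.

```python
# A toy sketch of the 12-team seeding rules described above. The function,
# its inputs, and any team names used with it are illustrative placeholders.

def seed_playoff(committee_ranking, conference_champs):
    """committee_ranking: teams in committee order, best first.
    conference_champs: set of teams that won their conference.
    Returns (bye_teams, first_round_matchups), higher seed listed first."""
    champs = [t for t in committee_ranking if t in conference_champs]
    byes = champs[:4]                 # seeds 1-4: top four champions, first-round byes
    fifth_champ = champs[4:5]         # fifth-highest-ranked champion
    at_large = [t for t in committee_ranking
                if t not in byes and t not in fifth_champ][:7]
    # Seeds 5-12: the fifth champion plus the seven highest-ranked remaining
    # teams, ordered by committee rank.
    seeds_5_12 = sorted(fifth_champ + at_large, key=committee_ranking.index)
    field = byes + seeds_5_12         # field[0] is seed 1, field[11] is seed 12
    # First round: 12 at 5, 11 at 6, 10 at 7, 9 at 8 -- the higher seed hosts.
    matchups = [(field[i], field[15 - i]) for i in range(4, 8)]
    return byes, matchups
```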

The current method of ranking teams involves intense debate and discussion among the thirteen members of the College Football Playoff Selection Committee. According to the official College Football Playoff website, rankings are based on “the members’ evaluation of the teams’ performance on the field, using conference championships won, strength of schedule, head-to-head results, and comparison of results against common opponents.” While the committee has a rigorous protocol in place to keep its rankings as objective as possible, the human element of the selection process is not immune to bias. Hype, brand recognition, and style points are difficult to ignore, even with the best attempts at objectivity. Just last season, the committee made the controversial decision to exclude undefeated ACC champion Florida State from the College Football Playoff in favor of Alabama, highlighting the challenges of balancing fairness with judgment in high-stakes rankings.

In an era when college football is embracing new technologies, such as helmet communications and tablets, the sport continues to reject AI- and computer-based rankings, largely because of the unpopular legacy of the Bowl Championship Series (BCS) rankings.

The BCS Ranking Process

From 1998 to 2013, the Bowl Championship Series (BCS) determined the top two teams to face off in the National Championship Game. The rankings combined three components: the Harris Interactive College Football Poll, the USA Today Coaches’ Poll, and “the computers.” The formula was in constant flux, with tweaks to the ranking calculations and shifts in the polls used, such as the replacement of the Associated Press (AP) Top 25 with the Harris Poll after the 2004 season.

Despite these adjustments, the BCS quickly became a target for public criticism and political scrutiny, with the President and members of Congress going so far as to float antitrust investigations over the perceived inequities of the system. The most contentious element was “the computers,” six independent, statistically based rankings that often yielded results clashing with human perception. The six computer polls were as follows:

  1. Anderson & Hester: Focused on strength of schedule, this system weighed wins against high-ranked teams more heavily to reward teams that succeeded against tougher opponents.
  2. Billingsley Report: This poll used a sequential approach, valuing consistent performance throughout the season and incorporating both margin of victory and strength of schedule.
  3. Colley Matrix: A purely win-loss based system, the Colley Matrix applied mathematical rankings without using margin of victory, emphasizing teams’ records and opponents’ records (a short sketch of this method appears after this list).
  4. Massey Ratings: Integrating score margins and strength of schedule, Massey Ratings balanced both to assess overall team strength.
  5. Sagarin Ratings: Jeff Sagarin’s model combined multiple ranking methods (including an Elo-style rating) to account for team performance, schedule strength, and margin of victory.
  6. Wolfe Ratings: This system used iterative calculations to evaluate teams’ rankings, focusing heavily on strength of schedule without considering margin of victory.
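
Of the six, the Colley Matrix is the easiest to reproduce from its public description, since it looks only at wins, losses, and who played whom. The sketch below is a minimal Python version of that method; the teams and results are invented for illustration, not taken from real games.

```python
# Minimal sketch of a Colley-style rating: wins and losses only, no margin
# of victory. The example teams and results are made up for illustration.
import numpy as np

def colley_ratings(teams, games):
    """games: list of (winner, loser) pairs. Returns {team: rating}."""
    idx = {t: i for i, t in enumerate(teams)}
    n = len(teams)
    C = 2.0 * np.eye(n)            # diagonal: 2 + games played by each team
    b = np.ones(n)                 # right-hand side: 1 + (wins - losses) / 2
    for winner, loser in games:
        w, l = idx[winner], idx[loser]
        C[w, w] += 1
        C[l, l] += 1
        C[w, l] -= 1               # off-diagonal: minus head-to-head games
        C[l, w] -= 1
        b[w] += 0.5
        b[l] -= 0.5
    ratings = np.linalg.solve(C, b)   # ratings cluster around 0.5
    return dict(zip(teams, ratings))

# Invented results, purely to show the mechanics:
print(colley_ratings(["A", "B", "C"], [("A", "B"), ("A", "C"), ("B", "C")]))
```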

The BCS sometimes exerted control over these computer polls by mandating or restricting certain factors; for example, it famously banned margin of victory from consideration in rankings in 2002. However, this only fueled skepticism.

The beginning of the end for “the computers” can perhaps be traced to the 2003 season and the Oklahoma-LSU-USC controversy. Driven by their heavy reliance on strength of schedule, the computer rankings, and with them the final BCS standings, placed USC third, leaving the Trojans out of the BCS National Championship Game despite their #1 ranking in both the AP and Coaches’ polls. LSU went on to beat Oklahoma in the BCS National Championship, and USC beat Michigan in the Rose Bowl. When the final polls were released after the bowls, the Coaches’ Poll, which was obligated to crown the BCS winner, ranked LSU first, while the AP kept USC at the top. The result was a split national championship: LSU claimed the BCS title, and USC claimed the AP championship.

Ultimately, the BCS’s reliance on computer polls highlighted a core issue: “the computers” frequently diverged from human judgment, producing results that did not align with the eye test or fan sentiment. This discrepancy, along with other ranking controversies, eroded confidence in the system, paving the way for the human-led College Football Playoff format that replaced it.

The Future Role of AI in College Football Rankings

College football’s early courtship of computer-based rankings ended when the College Football Playoff era began. Critics blamed “the computers” in the BCS system for producing rankings that often felt confusing and disconnected from public expectations, primarily because of a lack of transparency and occasional counterintuitive outcomes. Fans and analysts were frustrated by opaque algorithms that left them unable to understand how certain factors influenced the rankings.

These criticisms echo recent debates about AI, where algorithms, though seemingly objective, are often criticized for introducing new, hidden biases that can be just as challenging to address. Any future role for AI in college football rankings will likely be complementary to, rather than a replacement of, human judgment. For instance, AI models could simulate game outcomes based on team performance data, offering predictive insights into potential matchups. AI might also audit past College Football Playoff rankings to identify overlooked factors that inadvertently influenced committee decisions.
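As a purely illustrative sketch of the first idea, the snippet below Monte Carlos a single matchup from hypothetical power ratings; the ratings, the roughly 2.5-point home edge, and the roughly 13-point margin standard deviation are assumptions for the sake of the example, not any real CFP or committee model.

```python
# Illustrative only: simulating one matchup from hypothetical power ratings.
# The ratings, home edge, and margin standard deviation are assumptions.
import random

def simulate_matchup(power_home, power_away, home_edge=2.5, sigma=13.0,
                     n_sims=10_000, seed=0):
    """Estimate the home team's win probability by sampling game margins."""
    rng = random.Random(seed)
    expected_margin = power_home - power_away + home_edge
    home_wins = sum(rng.gauss(expected_margin, sigma) > 0 for _ in range(n_sims))
    return home_wins / n_sims

# Hypothetical ratings, chosen only to show the mechanics:
print(f"Home team wins {simulate_matchup(92.5, 88.0):.1%} of simulations")
```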

Although the College Football Playoff format has been widely welcomed as an improvement over its predecessor, it reflects an unfortunate rejection of analytics-driven, computer-based decision-making. In recent years, other professional and collegiate sports have integrated analytics and AI with notable success. Perhaps it is time for the college football world to forgive, forget, and reintroduce “the computers.”
