KenPom, NET Rankings and GIGO: Who To Trust, Eyeballs or Computers?

Nothing but the NET would be a bad formula for basketball success. Thankfully there are humans involved in making the final selections.


This year Penn State basketball fans watched the voter polls more closely than they normally would. As the season progressed, from December to the abrupt ending in early March, we watched as the sports writers selected their top 25 teams each week. Penn State appeared in the AP Poll ten times this year, tying the 1995-96 season for the most appearances in program history, but thankfully we had other metrics to track the team’s effectiveness.

The vast majority of sports writers track one team or one conference exclusively. That makes it very difficult to measure the entire field of 353 teams, and it makes the rankings they spit out more of a popularity contest than a reflection of the play on the court. The year-to-year strength of a program also influences where the reporters place teams, since they are not able to watch all the games.

The voter polls used to be all that fans and the committee had at their disposal to differentiate one team from another, until the RPI came into existence a few decades ago. From 1981 to 2018 the RPI was the official index that the NCAA Selection Committee used to aid its decisions. The committee never said it would be the only tool it used, but it was the only metric outsiders knew was being used. That gave the RPI more weight, in the eyes of fans, than it deserved. Thankfully the RPI is now RIP.

By the time the RPI fell out of favor with the committee, it had lost the trust of most fans who watched a lot of basketball. The eyeball test remained a more reliable way of picking the field, even though the numbers the computers spit out for the RPI sure did feel official. As the old saying goes, if you put garbage into an equation, the answer that comes out will be garbage. GIGO: garbage in, garbage out.

The RPI’s fatal flaw was the information that went into the equation. Ironically, the system was set up to differentiate teams using metrics other than raw wins and losses, to show whether an 18-14 team was actually better than a 20-12 team. In reality, by combining a team’s winning percentage, its opponents’ winning percentage, and its opponents’ opponents’ winning percentage, the RPI produced a number that was less reliable than the anecdotal evidence gained by watching the games.
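
For readers who want to see just how simple the old formula was, here is a minimal sketch of the classic RPI calculation. The 25/50/25 weighting is the commonly published version (later seasons also weighted home and road results differently), and the example records are made up purely for illustration.

```python
def rpi(wp, owp, oowp):
    """Classic RPI: 25% own winning percentage, 50% opponents' winning
    percentage, 25% opponents' opponents' winning percentage."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Hypothetical example: an 18-14 team that played a brutal schedule
# versus a 20-12 team that fattened up on weak opponents.
tough_slate = rpi(wp=18 / 32, owp=0.620, oowp=0.580)
soft_slate = rpi(wp=20 / 32, owp=0.480, oowp=0.510)

print(f"18-14 vs. strong schedule: {tough_slate:.3f}")  # ~0.596
print(f"20-12 vs. weak schedule:   {soft_slate:.3f}")   # ~0.524
```

Notice that nothing in the formula knows anything about margins, efficiency or who was actually on the floor, which is exactly why the output so often clashed with the eyeball test.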

That made the voter polls and the eyeball test more accurate, but for years the RPI was used as though it added something to the conversation. It is seen now for its flaws, but in real time, while it was being used to sort teams, some people swore by its virtues.

Ken Pomeroy began sharing his work two decades ago, and it has entered the mainstream in recent years. His KenPom rankings are as solid as any system in use today. Their goal, however, is to show what a team is capable of doing rather than what it has done. KenPom measures categories such as luck, strength of schedule, non-conference strength of schedule, tempo, offense, defense and more. It is a very good way to measure teams, and while no system is perfect, Ken Pomeroy seems to have the best working model.

NET Rankings

Two years ago the NCAA debuted a new system, the NCAA Evaluation Tool (NET), to help the committee sort teams for the purposes of seeding and invitations to the NCAA Tournament. It was created by the NCAA and several respected statisticians, including Ken Pomeroy and Jeff Sagarin, to be used as the primary sorting tool, replacing the antiquated RPI.

During the first season using the NET to help choose the field, the top 45 teams in the final NET Rankings were awarded at-large bids. Teams outside the top 45 made the cut, too, which showed us that the committee wasn’t just using the NET to decide; rather, the NET roughly defined where the bubble would be. Sadly, after tracking the NET for the entire season, we did not get the satisfaction of another data point to help us understand how the committee is truly using it.

With no selections made this year, it is hard to know whether the committee made any adjustments to the way it planned to use the NET moving forward. And after a close look at the final NET results, the formula itself appears to need some tweaking as well.

There are many data inputs used to calculate the NET, and we gave a full explanation in the past of how it works, as confusing as it may be to some. While what goes into the equation may remain hard for math- or attention-challenged readers to follow, those who watched the NCAA basketball season closely know that what the equation spit out was not entirely useful.
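
As a quick refresher, here is a minimal sketch of two of the publicly described inputs of the original NET, net efficiency and the capped scoring margin. The possession estimate below is a standard approximation (the free-throw weight is an assumption that varies by source), and nothing here reflects how the NCAA actually weights its factors against one another, since those weights have never been published.

```python
def possessions(fga, oreb, tov, fta, ft_weight=0.475):
    """Standard possession estimate; the 0.475 free-throw weight is an assumption."""
    return fga - oreb + tov + ft_weight * fta

def net_efficiency(pts_for, pts_against, off_poss, def_poss):
    """Offensive efficiency minus defensive efficiency, per 100 possessions."""
    return 100 * (pts_for / off_poss - pts_against / def_poss)

def capped_margin(pts_for, pts_against, cap=10):
    """Per-game scoring margin, capped so a 30-point blowout counts no more than a 10-point win."""
    return max(-cap, min(cap, pts_for - pts_against))

# Hypothetical single-game box score numbers, purely for illustration.
off_poss = possessions(fga=60, oreb=10, tov=12, fta=20)
def_poss = possessions(fga=58, oreb=8, tov=10, fta=18)
print(round(net_efficiency(75, 68, off_poss, def_poss), 2))  # ~5.7
print(capped_margin(75, 68))  # 7
```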

Let’s take a look at the final NET Rankings between numbers 30 and 42. This group was selected because it contains Big Ten teams that are clearly not ranked in the order they deserve, if the numbers in the NET are to be trusted over the eyeball test.

Rutgers was awarded the 30th spot for its effort, ahead of teams such as Illinois (39) and Iowa (34). Teams 3 through 8 in the Big Ten had overall records of 21-10 or 20-11 and were projected to make the NCAA Tournament. It seems like this would be the perfect year to use a tool such as the NET to sort these teams. The way the NET did so defies the knowledge fans gained by simply watching the games.

Ohio State finished 5th in the Big Ten and the AP Poll voters gave them some love at No. 19, but the NET had the Buckeyes at 16, two spots higher than Maryland. The Terps had three more Big Ten wins and three more wins overall, and they nearly won the toughest league in the country before stumbling late to finish 2nd. The AP Poll, selected by eyeballs, had Maryland at 12, roughly where they belonged and seven spots ahead of Ohio State.

Not to pick on Rutgers, who had a tremendous season, but no one who knows the game of basketball would place them ahead of Iowa or Illinois. The voters had Illinois at 21 and KenPom had them at 30, yet the NET, for some reason, had the Illini way down near the bubble, in danger of sweating out an at-large selection had they lost their first-round Big Ten Tournament game.

The Lions came in at 32 in the final AP Poll voting and were No. 26 in KenPom. The NET ranking of 35 isn’t an insult, nor is it too far out of line, so had we simply been tracking Penn State in the rankings, and not the other Big Ten teams we know so well, we might not have noticed the disparity between the NET model and reality.

Michigan finished 29th in the AP Poll voting with nearly three times the votes the Lions received, even though the Wolverines had two fewer wins overall. Michigan also somehow finished No. 24 in the NET rankings, just one spot behind Wisconsin and six spots behind Maryland. We don’t need fancy computer models to tell us that this is not right.

With Purdue ahead of Iowa, Illinois and Penn State, the NET showed its flaws pretty clearly. The Boilermakers were a team whose record this season was misleading, and one would think a system such as the NET would be perfect for illustrating that point. In theory it would be; in reality, not so much. The voter polls had them at 41, but that’s only because they received a single vote. KenPom had them at 24, between Iowa (23) and Penn State (26), but that system is geared toward showing a team’s potential, not its actual results. The NET is supposed to be a useful sorting tool, but if it tells the committee that Purdue is better than Illinois, then there is a problem.
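
Pulling the rankings cited above into one place makes the disparities easier to see. The quick tabulation below uses only the numbers mentioned in this article; None simply means that particular ranking wasn’t cited here.

```python
# AP, KenPom and NET rankings for the teams discussed above, as cited in
# this article. None means the figure wasn't mentioned here; Purdue's NET
# ranking is only described as being ahead of Iowa, Illinois and Penn State.
rankings = {
    "Ohio State": {"AP": 19, "KenPom": None, "NET": 16},
    "Maryland":   {"AP": 12, "KenPom": None, "NET": 18},
    "Wisconsin":  {"AP": None, "KenPom": None, "NET": 23},
    "Michigan":   {"AP": 29, "KenPom": None, "NET": 24},
    "Rutgers":    {"AP": None, "KenPom": None, "NET": 30},
    "Iowa":       {"AP": None, "KenPom": 23, "NET": 34},
    "Penn State": {"AP": 32, "KenPom": 26, "NET": 35},
    "Illinois":   {"AP": 21, "KenPom": 30, "NET": 39},
    "Purdue":     {"AP": 41, "KenPom": 24, "NET": None},
}

print(f"{'Team':<12}{'AP':>6}{'KenPom':>8}{'NET':>6}")
for team, r in rankings.items():
    print(f"{team:<12}{str(r['AP']):>6}{str(r['KenPom']):>8}{str(r['NET']):>6}")
```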

While I completely understand the equations behind the NET Rankings, I do not know what needs to be done to make them more accurate. I’ll leave that to the Urschels, Pomeroys and Sagarins of the world to figure out. The results that came out of the equation were clearly off, maybe not to the point of being called garbage, but certainly not as useful as expected.

It’s too bad that we didn’t get the benefit of finding out how the committee would have used the NET this year, but with its obvious flaws, maybe that’s a good thing. It’s possible that the powers that be see what we see and will tweak the system before the start of next season.

Tidbits

  • After all the concern during the season about the Lions’ non-conference strength of schedule, it would not have mattered. Five of the top 10 teams in the country according to KenPom had an NCSOS above 200, placing them in the bottom half. The Lions’ number, 304, was slightly better than Rutgers’ (309) and Illinois’ (333). Texas Tech (296) finished 21st in KenPom and 22nd in the NET with a similar NCSOS and a worse overall record of 18-13.
  • To illustrate the rift between the computers and the human voters, take a look at Kentucky. The voters had the Wildcats at No. 8, while KenPom (29) and the NET (21) had them closer to where their performance this season dictated. The voters have flaws of their own, mostly in letting past results influence the current year. As Rutgers and Penn State showed in 2019-20, a program’s results can vary drastically from one year to the next.
  • Penn State was the highest-ranked team with a losing record (14-18) in last year’s NET, coming in at No. 50. This year Minnesota finished No. 42, within the range that made the field a year ago, despite a 15-16 campaign. The Gophers were certainly good enough to make the NCAA Tournament, but with their results they would have needed a couple more wins just to make the NIT. While it is encouraging to see teams such as Purdue and Minnesota this year, or the Lions last year, get the credit they deserve despite an ugly record, it is equally disturbing to see teams like Ohio State and Michigan ranked higher than they should have been.
  • Virginia showed a massive variance across the rankings. Finishing 23-7, it is no surprise that the Cavaliers found their way into the top 20 of the voter poll; their reputation and recent success surely weighed on the minds of the humans who selected them at No. 17. The computers, however, each had them outside the top 40 and near the bubble, with KenPom (42) and the NET (44) unimpressed. It’s hard to say which group had it right, but the gap is unsettling.