
The Real 2000 RPI's

Publication Date: October 17, 2000

That Time of Year

An annual event (for the last two years, at least) that results in great gnashing of teeth and moaning among the more knowledgeable of us took place recently. No, I'm not referring to the start of the fall TV season or the NBA-style crapshoot that the MLB playoffs have been reduced to, lamentable though those things may be. I'm referring to Baseball America's annual publication of the actual RPI's, which they get from an unnamed source, probably in an athletic department office somewhere. Despite a harmless bit of casting themselves as "the wrong hands", BA managed to get quotes for the story from both Dick Rockwell, the past chairman of the selection committee, and Jim Wright, director of statistics for the NCAA, so I don't think the NCAA objects to the publication.

The actual RPI's differ a bit more this year than last from the pseudo-RPI's. Although the actual RPI's still underrank West Coast teams, rather badly at times, the effect is not quite as pronounced as it is in the pseudo-RPI's. I have some ideas about why; I'll probably do a future column on it as I continue my annual efforts to match the actual formula more closely. There's also a column coming at some point on the actual mathematical problems that cause the RPI's to perform poorly.
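For readers who haven't seen it written out, the baseline that's usually cited for the RPI is 25% winning percentage, 50% opponents' winning percentage (with games against the rated team removed), and 25% opponents' opponents' winning percentage; the NCAA's exact adjustments aren't public, which is why the pseudo-RPI's can only approximate it. Here's a minimal sketch of that baseline calculation -- an approximation under that assumption, not the actual formula, and the toy schedule at the bottom is made up purely for illustration.

    # Sketch of the commonly cited RPI weighting:
    #   0.25 * WP + 0.50 * OWP + 0.25 * OOWP
    # This is an approximation of the published baseline, not the NCAA's
    # actual formula; the game data below is made up for illustration.

    def winning_pct(team, games, exclude=None):
        """Winning percentage for team, ignoring any games involving exclude."""
        wins = losses = 0
        for winner, loser in games:
            if exclude is not None and exclude in (winner, loser):
                continue
            if winner == team:
                wins += 1
            elif loser == team:
                losses += 1
        return wins / (wins + losses) if wins + losses else 0.0

    def opponents_of(team, games):
        """Every opponent faced, repeated once per game played against them."""
        return [w if l == team else l for w, l in games if team in (w, l)]

    def owp(team, games):
        """Average of the opponents' winning percentages, excluding games vs. team."""
        opps = opponents_of(team, games)
        return sum(winning_pct(o, games, exclude=team) for o in opps) / len(opps)

    def pseudo_rpi(team, games):
        """Baseline RPI over a list of (winner, loser) game results."""
        opps = opponents_of(team, games)
        oowp = sum(owp(o, games) for o in opps) / len(opps)
        return 0.25 * winning_pct(team, games) + 0.50 * owp(team, games) + 0.25 * oowp

    # Tiny made-up schedule, just to show the calculation running.
    games = [("Stanford", "UCLA"), ("UCLA", "Rutgers"), ("Stanford", "Rutgers")]
    print(round(pseudo_rpi("Stanford", games), 3))   # 0.625 on this toy schedule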

This week, though, I want to spend my time looking at some examples of the actual damage the RPI's do to the seeding process -- right after this tangent.

Missing the Point

There's an interesting quote from Wright in the article: "Every year we hear complaints, but nobody ever comes up with a better system." I'll let others judge the truth of that statement -- I think I've published two better systems so far, but I'm hardly impartial -- but I also think it's moot, since the whole premise is ludicrous. This is virtually identical to the old "You can only criticize a play if you can write a better one" or "You should only criticize a coach if you can hit a curve ball" arguments, and it's no more logical here than there.

They're the National Collegiate Athletic Association. They have easy access to most of the best computer science and math departments in the country. One of the best things you can do in life is to do what you're good at and know when to get help. Sports guys should, well, do sports. Administrators should run things. Sales people -- and that includes most of the people running college sports -- should sell and promote their product. But when they need a system to measure performance (and that's what the RPI or its replacement should be measuring, not potential or history or anything else), why would they not turn the job over to the math and computer guys?

If Jim Wright were to call, say, the Stanford computer science department and say, "Hey, I've got a good thesis project if you guys want one" and then send them enough data (this is a separate problem; the NCAA has apparently not done a perfect job of collecting and maintaining past data, but they should have enough), I'm willing to bet that he'd have at least one much better replacement within a year. Not doing that, or something else along those lines, to replace the RPI's is just irresponsible.

An honestly curious question for folks who follow other sports more closely: Does the RPI do as much damage in other sports as it does in baseball? Men's basketball seems to escape somewhat due to the relatively large amount of interregional play, but do other sports, especially smaller ones, suffer? Write me and let me know, if you will.

Actual Examples

Well, on to the specific examples. Since the RPI list is apparently redone after the CWS, which makes it a bit hard to judge seeding effects from it, I'll compare the RPI's to the final release of the ISR's.

Although none of them are off by a huge amount, all of the top three Pac-10 teams come in lower in the RPI's than they should, with Arizona State faring the worst at #11 compared to their #4 ISR. Although a difference of a few spots is seldom significant, the consistency here is the problem.

The big trouble for individual teams starts in the second ten of the RPI's. Miami comes in at #12. They were at #26 in the ISR's (and at #19 in the TWP's, which may be a fairer assessment), and even the most die-hard Miami fan will agree that their actual season was much closer to the lower ranking than to the higher RPI ranking. But the difference between #12 and #26 is the difference between hosting a regional and traveling to, say, Clemson, and that's a problem.

The next major problem (although Louisiana-Lafayette was a bit overranked, they weren't overseeded because of it, and they obviously pulled back together nicely in the postseason) is with Rutgers at #14. As I said last week in reference to Notre Dame, my beef is not with the Big East teams; it's with the NCAA's absurd treatment of them, and this is a great example of the source of some of the problems. For Rutgers to still be at #14 after the regional, they probably had to be up around #12 going into the postseason, and there's just nothing in their record to justify that ranking.

Finally, since I can't possibly make this a comprehensive list of the problem rankings, a quick look at some bad rankings for West Coast teams, coupled with their ISR's: Cal State Fullerton #31 (#19), UCLA #38 (#25), Loyola Marymount #40 (#22), San Jose State #52 (#27), and Long Beach State #57 (#33). UCLA's #1 seed shows that the selection committee understands that there is a problem with the RPI's; they just don't seem to have any idea what to do about it, so they overseed a team or two and either underseed or omit the rest.
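For anyone who wants to run the same kind of side-by-side check against another set of ratings, here's a rough sketch of the comparison; the dictionaries below just repeat the handful of rankings quoted above, not the full 2000 lists, and the sort is simply by the size of the gap.

    # Line up two ranking lists and sort by how far apart they are.
    # These few entries echo the rankings quoted in this column; they
    # are not the complete RPI or ISR lists.

    rpi = {"Miami": 12, "Cal State Fullerton": 31, "UCLA": 38,
           "Loyola Marymount": 40, "San Jose State": 52, "Long Beach State": 57}
    isr = {"Miami": 26, "Cal State Fullerton": 19, "UCLA": 25,
           "Loyola Marymount": 22, "San Jose State": 27, "Long Beach State": 33}

    for team in sorted(rpi, key=lambda t: abs(rpi[t] - isr[t]), reverse=True):
        print(f"{team}: RPI #{rpi[team]}, ISR #{isr[team]}")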
