Friday, January 6, 2012

Mo' Rankings, Mo' Problems

The Washington Post published a skeptical look at college rankings today. Colleges are, of course, eager to use the lists in their marketing materials when the results are flattering. But when one list claims that Johns Hopkins University is one of the 15 "least rigorous" schools in the country and that Georgetown is a hipster mecca, then (according to a Johns Hopkins University spokesman) “the reaction around here was a collective scratch of the head and a unanimous ‘Huh?’”

We all ought to be skeptical of this mania for lists. Many of them use questionable data (obtained from anonymous surveys or other sketchy sources) to pin numbers on aspects of college life that are very difficult to quantify. Even the lists that use easily quantifiable criteria, such as the one that everybody's heard of, skew the results according to their own methodology. In the U.S. News ranking formula, for example, "alumni giving rate" counts for 5% of a school's ranking, whereas "student/teacher ratio" counts for only 1%. It's nice to know that the alumni have deep pockets, but is that really five times as important as a critical aspect of the classroom experience?
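To make that concrete, here is a toy sketch -- the schools, scores, and two-factor formula are all made up for illustration, and the real formula uses many more inputs -- showing how the choice of weights, rather than the underlying data, can decide which school comes out on top:

def weighted_score(metrics, weights):
    """Weighted sum of metric scores (each scored 0-100)."""
    return sum(weights[name] * metrics[name] for name in weights)

# Two hypothetical schools with mirror-image strengths.
school_a = {"alumni_giving": 90, "student_teacher_ratio": 40}
school_b = {"alumni_giving": 40, "student_teacher_ratio": 90}

# One weighting in the spirit of the 5%-vs-1% emphasis above,
# and one that reverses it.
usnews_like     = {"alumni_giving": 0.05, "student_teacher_ratio": 0.01}
classroom_first = {"alumni_giving": 0.01, "student_teacher_ratio": 0.05}

for label, w in [("U.S. News-like", usnews_like), ("Classroom-first", classroom_first)]:
    a = weighted_score(school_a, w)
    b = weighted_score(school_b, w)
    winner = "School A" if a > b else "School B"
    print(f"{label}: A={a:.2f}, B={b:.2f} -> {winner} ranks higher")

Flip the emphasis and the "winner" flips with it, even though nothing about either school has changed.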

That's why, in my post on this subject a few months ago, I argued for a kind of modular ranking system -- one in which students and parents can choose which criteria are important to them. I also liked the idea of a ranking system based on a school's commitment to the core curriculum.
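For the curious, here is a rough sketch of what that modular idea might look like, again with hypothetical schools and scores; the reader, not the magazine, supplies the weights:

def personalized_ranking(schools, my_weights):
    """Rank schools by a weighted sum using the reader's own weights.

    schools: dict mapping school name -> dict of metric scores (0-100)
    my_weights: dict mapping metric name -> relative importance
    """
    total = sum(my_weights.values())
    weights = {k: v / total for k, v in my_weights.items()}  # normalize to sum to 1
    scores = {
        name: sum(weights[m] * metrics[m] for m in weights)
        for name, metrics in schools.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical data for illustration only.
schools = {
    "College X": {"class_size": 85, "core_curriculum": 60, "alumni_giving": 95},
    "College Y": {"class_size": 60, "core_curriculum": 90, "alumni_giving": 50},
}

# A reader who cares most about the classroom and the core curriculum:
my_weights = {"class_size": 5, "core_curriculum": 4, "alumni_giving": 1}

for rank, (name, score) in enumerate(personalized_ranking(schools, my_weights), 1):
    print(f"{rank}. {name} ({score:.1f})")

Someone who prizes small classes and a strong core curriculum would get a different list than someone who prizes alumni giving -- which is exactly the point.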

If you're interested in learning more about college rankings, I encourage you to check out my other posts on the subject, as well as this extremely interesting article by Malcolm Gladwell on the misleading objectivity of lists.
