February 7, 2012

SATs and College Rankings

This past week, Claremont McKenna has been in the news for sending inflated SAT scores to various ranking entities. The inflation was small, an average of 10 to 20 points per year, according to the Claremont Port Side, a local student newsmagazine. Statistically, one's score is likely to change each time one takes the test, which is why the College Board reports a score range along with the test score; the range, it believes, is a better representation of one's true ability. An inflation of 10 to 20 points would therefore say little about the student body's actual ability on a standardized test. So why would a school do such a thing?

As reported in the news, Claremont McKenna was concerned about its place in the popular U.S. News & World Report rankings. This year the school was ranked #9 among liberal arts colleges, and even a small difference could have dropped it out of the top 10. This concern isn't unique to Claremont McKenna; school administrators across the U.S. are under incredible pressure to "improve in the rankings" because so many students and parents turn to the rankings to select schools. In recent years, several schools have been caught gaming the system by selectively reporting data: leaving out the scores of their recruited athletes, for example, or not admitting lower-scoring students until January so that their numbers are not counted in the class admitted for September.

Given that one's score is likely to vary within a range of 30-40 points from one sitting of the exam to the next, what does this say about college rankings? After all, one might expect that falling from #9 to #10 or #11 in a ranking means a school's educational quality is declining...or does it?

Let's take a closer look at the U.S. News & World Report methodology:

  • 22.5% - peer institution survey
  • 20% - retention of students (including the 6-year graduation rate and the freshman retention rate)
  • 20% - faculty resources (including class sizes, faculty compensation, student-faculty ratio, proportion of full-time faculty, and proportion of faculty who have the highest degree in their fields)
  • 15% - student selectivity
  • 10% - per-student spending (excludes sports, dorms, and hospitals, but presumably not fancy student centers)
  • 7.5% - graduation rate performance
  • 5% - alumni giving
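
To see why a school near a cutoff might care about such small differences, here is a minimal sketch of how a weighted composite like this behaves. The weights are the ones listed above, but everything else is an assumption for illustration: the component scores, the 0-100 scaling, and the simple weighted sum are not the actual U.S. News normalization.

```python
# Illustrative sketch only: the weights come from the list above; the component
# scores, their 0-100 scaling, and the simple weighted-sum combination are
# simplifying assumptions, not the actual U.S. News methodology.

WEIGHTS = {
    "peer_survey": 0.225,
    "retention": 0.20,
    "faculty_resources": 0.20,
    "selectivity": 0.15,        # reported SAT scores feed into this component
    "spending": 0.10,
    "grad_rate_performance": 0.075,
    "alumni_giving": 0.05,
}

def composite(scores):
    """Weighted sum of component scores, each assumed to be on a 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Hypothetical school with every component scored at 85 out of 100.
school = {k: 85.0 for k in WEIGHTS}
baseline = composite(school)

# Suppose inflating the reported SAT average nudges the selectivity component
# up by one point on that 0-100 scale.
school["selectivity"] += 1.0
inflated = composite(school)

print(f"baseline composite: {baseline:.2f}")
print(f"inflated composite: {inflated:.2f}")
print(f"difference: {inflated - baseline:.3f}")  # 0.15 * 1.0 = 0.15 points
```

In a sketch like this, a one-point bump in the selectivity component moves the composite by only 0.15 points, a difference that matters only because schools near a ranking boundary are so tightly clustered. That is exactly the #9-versus-#10 situation described above.
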
Now, let's think about the ways in which one learns. In Bloom's Taxonomy, the lowest level of knowledge is memorization, followed by understanding and application, and then analysis, evaluation, and creation; the highest levels are most likely encouraged through activities such as in-class discussion, writing papers, giving presentations, and working on projects. Activities like these are a better measure of educational quality than a peer institution survey, which is essentially a popularity ranking. How likely is it that an administrator at one school really understands what students at another school are doing?

So what are students and parents to do if the U.S. News & World Report ranking isn't an accurate measure of educational quality? Is anyone trying to measure how, and how well, students are actually learning?

Actually, the National Survey of Student Engagement (NSSE) conducts an annual survey of students at institutions across North America on just such matters. While the survey results are not turned into a ranking, the NSSE has a very helpful FAQ on how students and parents can use its data in a college search. They also offer a downloadable pocket guide on the questions one should ask during a college visit.