Gaming the System: Orin Kerr is scandalized by a report that top law schools are "gaming" the U.S. News ranking system, rejecting highly-qualified applicants in the hopes of improving their yield numbers. Yield is measured by the proportion of admitted students who matriculate. So when a law school admits a student who's over-qualified, and thus likely to turn it down for something better, the yield numbers (and the all-important U.S. News rankings) will suffer.
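To make the incentive concrete, here is a minimal sketch of the arithmetic. The numbers are hypothetical, invented for illustration, not drawn from any school's actual data:

```python
# Illustrative only: yield is the share of admitted students who matriculate.
# All figures below are hypothetical.

def yield_rate(admitted: int, matriculated: int) -> float:
    """Return the matriculation yield as a fraction of admits who enroll."""
    return matriculated / admitted

# A school admits 1,000 students and 400 enroll:
print(yield_rate(1000, 400))  # 0.4

# Now suppose the school instead rejects 100 over-qualified applicants,
# all of whom would have been admitted but declined anyway. The same 400
# students enroll, but the smaller denominator flatters the yield:
print(yield_rate(900, 400))  # roughly 0.444
```

The school's entering class is identical in both scenarios; only the reported number improves.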
There doesn't seem to be much evidence of this yet on the law school front, but it's a well-documented phenomenon in undergraduate admissions. According to a fascinating NBER working paper my brother forwarded me, released by four scholars last October (including Caroline Hoxby, whose work I've always found worth reading), schools routinely engage in such manipulation to improve their rankings:
Another method by which a college can manipulate its matriculation rate is deliberately not admitting students who are likely to be admitted by close competitors or colleges that are often more highly preferred. A college administrator may say to himself, "My college will ultimately fail to attract good applicants unless I raise its matriculation rate. I can achieve this with a strategic policy that denies admission to students who seem likely to be accepted by colleges more desirable than mine. By systematically denying them admission, my college will of course lose some of its most desirable students (because some percentage of the highly desirable students would have matriculated). However, it is worthwhile to sacrifice the actual desirability of my college class in order to appear more desirable on a flawed indicator." . . .
. . .
In other words, the college will avoid admitting students in the range in which it is likely to lose in a matriculation tournament.
The authors back up their assertions with data on admissions rates for top students at Harvard, MIT, and Princeton, as indexed by combined SAT I percentile scores:
At Harvard and MIT, one's chances of admission generally increase with SAT score (although the Harvard probabilities are flat between the 93rd and 98th percentile). At Princeton, on the other hand, a candidate in the 98th percentile has a substantially worse chance of acceptance than a candidate in the 93rd percentile. This is unlikely to be the result of legitimate admissions preferences -- as if the 98'ers were all timid bookworms, while the 93'ers were happy well-rounded types. The point is especially clear because applicants at the very top of the range enjoy the most favorable chances of all. As the authors explain, "if the student's merit is high enough, a strategic college will probably admit the student even if the competition will be stiff. This is because the prospective gains from enrolling a 'star' will more than make up for the prospective losses from a higher admissions rate and lower matriculation rate. (Recall that the crude admissions rate and matriculation rate do not record who is admitted or matriculates.)"
In other words, it's quite clear that Princeton -- and presumably many other schools -- is departing from its standard admissions criteria, rejecting well-qualified candidates in order to increase its yield. (Rejecting good students also improves -- i.e., lowers -- a school's overall admissions rate, making the school appear harder to get into.)
How can this 'gaming the system' be prevented? So long as U.S. News pays attention to yield, and so long as schools pay attention to U.S. News, it's hard to imagine a solution. But the paper's authors propose an intriguing "revealed preference" method to measure student demand:
Our statistical model extends models used for ranking players in tournaments, such as chess or tennis. When a student decides to matriculate at one college, among those that have admitted him, he effectively decides which college "won" in head-to-head competition. The model efficiently combines the information contained in thousands of these wins and losses.
In other words, if we want an index of how eager students are to attend a given school -- the information yield is supposed to provide -- we should look to students' actual choices. Each school could be ranked by its success in head-to-head matchups against the rest. Manipulating these numbers is substantially harder than manipulating yield, since rather than rejecting students who are overqualified, schools would be forced to convince those overqualified students to attend.
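The tournament model the authors describe can be sketched with a standard pairwise-comparison technique. The code below is an illustration, not the authors' actual estimator: it fits a simple Bradley-Terry model (a common statistical model for chess-style head-to-head data) to a made-up set of matchups, where each record means a student admitted to both schools chose the first. The school names are placeholders:

```python
from collections import defaultdict

# Hypothetical matchup data: ("A", "B") means a student admitted to both
# schools A and B chose A. These records are invented for illustration.
matchups = [
    ("A", "B"), ("A", "B"), ("A", "C"),
    ("B", "C"), ("B", "C"), ("C", "B"),
    ("A", "B"), ("B", "A"),
]

def bradley_terry(matchups, iters=200):
    """Fit Bradley-Terry strengths by the standard MM iteration:
    a school's strength rises with its wins, weighted by the
    strength of the opponents it faced."""
    wins = defaultdict(int)    # total wins per school
    games = defaultdict(int)   # games played per unordered pair
    schools = set()
    for winner, loser in matchups:
        wins[winner] += 1
        games[frozenset((winner, loser))] += 1
        schools.update((winner, loser))
    strength = {s: 1.0 for s in schools}
    for _ in range(iters):
        new = {}
        for i in schools:
            denom = sum(
                games[frozenset((i, j))] / (strength[i] + strength[j])
                for j in schools
                if j != i and games[frozenset((i, j))] > 0
            )
            new[i] = wins[i] / denom if denom else strength[i]
        total = sum(new.values())
        strength = {s: v / total for s, v in new.items()}  # normalize
    return strength

strengths = bradley_terry(matchups)
ranking = sorted(strengths, key=strengths.get, reverse=True)
print(ranking)
```

With this toy data, A beats B in most matchups and B beats C in most, so the fitted ranking is A, B, C. The key property is the one the post argues for: a school can only climb by actually winning the head-to-head choices of cross-admitted students, not by declining to admit them.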
Of course, the new measure isn't perfect, and would require separate sub-rankings where student preferences aren't nationally shared. (Students in California may prefer an in-state school to a slightly better university on the East Coast; "niche" schools with an engineering focus, or a religious affiliation, may attract students with unusually strong preferences.) But even if its answers aren't foolproof, the authors' model would at least be asking the right question: not which school can reject the most students, but which school those students prefer. And their new measurement would be a vast improvement over the less accurate rankings -- and costly admissions manipulation -- the system produces today.