This has certainly been an interesting week for scandals in higher education. While not minimizing the horrific things happening in the one you’re probably already thinking of, I’d like to focus instead on two others that may not yet have crossed your radar.
Both seem eerily similar: one law school and one undergraduate college have admitted that for years they, um, well, lied about admissions data. They changed LSAT/ACT/SAT/GPA averages, inflated application and yield numbers, and depressed admit numbers, among other things. It’s one thing to do this for USNWR, I suppose, but quite another when you submit to state and federal reporting agencies, or worse yet, bond rating agencies. “Pecuniae obedient omnia” (“all things obey money”).
It’s not hard to hypothesize why this happened, of course; a lot of people are under a lot of pressure to make numbers look better. Strategic plans call for an increase in these measures as a testament to academic quality, and to appeal to more students with similar credentials.
It happens on occasion with teachers or whole school districts and state-mandated standardized testing. And it’s not hard to see parallels in things that happen regularly in our profession and are passed off without a second thought: using “click here to apply” emails that raise application numbers to increase the appearance of selectivity, and reporting “super scores” that don’t exist anywhere but in the imagination of some analyst. Sometimes, as in the case I’m purposely ignoring because it’s not about admissions, overlooking things like this can be a form of complicity.
But what’s most interesting to me about all this is a simple little fact: as far as I can tell, no one on the faculty came forward and said, “Hey, this entering class with an average X of Y really seems like its average should be Z. Something must be up.”
Instead, in one case it was an auditor, and in another, an inside snitch. What does that say about the way we use quantitative averages to measure and compare things?