Hot Takes from ACT on Test-optional

Absurdities seem to come in clumps.  While I was on the train reading Twitter earlier this week, I saw this. You may not see the final sentence, so I’ve cut it out for you here:

[Screenshot of the tweet]

As absurdities go, it’s really hard to top that, but later that day I stumbled upon this document from ACT moments after I got into the office.  It’s a summary of their top five reasons (full report) on Why Test-optional Policies Do NOT Benefit Institutions or Students. (Emphasis via capitalization is theirs. They don’t want you to think they’ve jumped on the bandwagon.)

There are several “WTF” moments in this document (my personal favorite being statisticians–who really should know better–making guesses about what some of Bill Hiss’s data might possibly say if only they could analyze it), but let’s start with the most beautiful.  See if you can spot it here, as ACT attempts to debunk the claim of many researchers who don’t have to sell test services to make a living:

[Screenshot from the ACT report]

In the callout, ACT is picking on kids who score 10 or less on the ACT, using them as an example of why test-optional is a bad policy.  In the world of multiple-choice tests, there is a threshold score for guessing: the score you’d expect to earn if you didn’t read any of the questions and simply filled in the bubbles.  Some test prep experts I know estimate this score to be about 12 on the ACT.
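If you’re curious where an estimate like that comes from, here’s a rough back-of-the-envelope sketch.  The section lengths and answer-choice counts below reflect the standard ACT format; converting raw scores to the 1–36 scale requires a form-specific table, so treat the low-teens composite as the estimate it is.

```python
# Rough sketch of the "guessing floor" on the ACT.
# Section lengths and answer-choice counts reflect the standard format;
# raw-to-scale conversion varies by test form, so this only ballparks
# the ~12 composite that test-prep folks cite.

sections = {
    "English": (75, 4),   # 75 questions, 4 answer choices each
    "Math":    (60, 5),   # 60 questions, 5 answer choices each
    "Reading": (40, 4),
    "Science": (40, 4),
}

for name, (questions, choices) in sections.items():
    expected_correct = questions / choices  # no penalty for wrong answers
    print(f"{name}: about {expected_correct:.0f} of {questions} correct by pure chance")

# On typical conversion tables, raw scores like these map to scaled scores
# somewhere in the low-to-mid teens, which is why test-prep folks peg the
# guessing floor at roughly 12 on the composite.
```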

But let’s get back to the students who perform worse than random guessing and score 10 or less on the test. In case you were wondering, here is a chart of all ACT tests from 2002 to 2013, and the percentage (in the blue bar) who scored a 10 or less:

[Chart: ACT testers by year, 2002–2013, with the share scoring 10 or below shown in blue]

 

Can’t see it?  Look harder.  

It’s that little sliver of blue on top of the orange: In 2013, it was about 0.4% of all testers, or roughly four of every 1,000 students, or fewer than 8,000 students total, and that number is about twice as high as it was in 2002, before many more students who probably weren’t thinking of college were forced to take the test anyway.  Over 5,000 of those 8,000 are under-represented students of color, and the largest single group is very-low-income students. Just over 10% of those who listed a class rank were in the top quarter of their class, so the number of 4.0 students is an extraordinarily small sample, unless these kids all went to the same five colleges. They didn’t.
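For perspective, here is the quick arithmetic behind that sliver, assuming roughly 1.8 million ACT takers in 2013 (the exact count varies slightly depending on which report you read):

```python
# Quick arithmetic behind the "sliver of blue," assuming roughly 1.8 million
# ACT takers in 2013 (the exact count varies a bit by source).
testers_2013 = 1_800_000
share_scoring_10_or_below = 0.004   # about 0.4% of all testers

low_scorers = testers_2013 * share_scoring_10_or_below
print(f"Roughly {low_scorers:,.0f} students scored 10 or below")   # ~7,200, i.e. fewer than 8,000
```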

However, even these students, who’ve probably done everything their high school has asked of them, but who are poor, statistically likely to be from an under-resourced high school, with parents who (almost certainly) did not go to college, and who score very low on this test, only (emphasis mine) have a 30% chance of a B or better, and presumably an even higher chance of a C or better. And remember, these kids are so far outside the range of “College-ready”–a term ACT loves to use with administrators in school districts as they try to make taxpayers happy and sell more tests–that ACT suggests they shouldn’t be in college at all.

Of course, there doesn’t seem to be any control for other important factors that contribute to student success, like, oh, income, or parental attainment, or taxpayer support for your school, or the need to work in college, or whether you commute (if I’m wrong, I’ll be the first to correct it publicly, but I’ve asked several people who know way more about statistics than I do, and they suggest I’m right after looking at this.)  If you don’t know that kids who score low tend to be poorer, tend to be from under-resourced high schools, tend to have parents who are not college educated; and further, if you fail to understand that each of those factors in and of itself predicts college enrollment and attainment, well, as the kids say, I can’t even.  You might want to read this.  And if you don’t know that colleges like test scores because they predict wealth, you should read this.

Additionally, take a deeper look at the first chart.  The real lesson, it would appear, is that if you’re a 2.0 student, you have a very low chance of getting a B-average in college, even if you score in the 99th percentile on the ACT.  College admissions officers know this already.  In fact, that’s the very basis for test-optional admissions: The realization that HS GPA is a way better predictor of success.

After we went test-optional, two representatives from ACT came to talk to us.  I asked a simple question: Do you acknowledge that four years of high school more closely resembles four years of college than a three-hour test resembles four years of college?  The answer they gave, of course, was yes. Testing agencies know this.

If you have that 35 on the ACT and a 2.5 GPA in high school, your chances of a B average are still only about 50% (about the same as a student with a 20 ACT and a 3.3 GPA). And regardless of your test score–EVEN A 10–your chances go way up with your grades. (Again, before you look at other factors).

No one–not the most ardent critic of tests–has ever suggested that tests don’t predict something by themselves, but as the ACT report acknowledges, about 75% of students have scores commensurate with their HS performance, so in the majority of cases, ACT adds virtually nothing to understanding; it simply echoes the high school record. And it’s clear–even from the ACT data–that students with lower scores and high grade point averages have a solid chance of doing well in college.  Well, maybe not for the 0.4%, but still.

If this doesn’t make sense, consider the words of one test-prep expert who called this report “cherry-picking B.S.” except he used a longer word for “B.S.”

Standardized multiple-choice tests (whose creator, Frederick Kelly, called them a measure of lower-order thinking skills) measure a specific type of skill, and all things being equal, choosing a “right answer” from four given is a skill you’d rather have than not have; every skill you bring to college (including many the ACT or SAT can’t measure) probably contributes in some way to success.  Bill Sedlacek and others have shown this: In short, people with more skills tend to be more successful.  Duh, as the kids say.

But we could devise lots of tests to measure creativity, leadership, drive, determination, the ability to overcome obstacles, a sense of humor, and even a realistic sense of self, all of which would likely add to our ability to predict college success.  The question is: Is it worth it? And how much time would be dedicated to teaching students how to do well on these tests?  It’s Campbell’s Law, all over again.

The same ACT that criticizes the Hiss and Frank study for looking at the whole sample presumes to speak for “Institutions” using national data. There is no recognition that every university has a different purpose and mission, or even that some universities might have different results: The title of the report leaves little wiggle room.

And, of course, we do our own research, and we look at those things that predict academic success: For us, a 2.5 GPA and 48 credits earned in the first year is the critical outcome: Hit that, and you’re pretty much guaranteed to graduate.  ACT and SAT scores uniquely explain about 2% of the variance in that type of freshman performance, far less than GPA, and only about as much as our attempt to measure non-cognitive variables (which we developed without spending millions of dollars on research, and which are–unlike tests–almost perfectly gender, race, and income neutral). Other institutions, like The University of Georgia and the Cal System, have uncovered similar patterns.
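For readers who want to see what “uniquely explain about 2% of the variance” means mechanically, here is a minimal sketch of the incremental-R² comparison.  The file and column names are hypothetical; this shows the shape of the analysis, not our actual model.

```python
# Minimal sketch of an incremental R-squared (hierarchical regression) comparison.
# The CSV file and column names are hypothetical; this shows the shape of the
# analysis, not the actual institutional model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("freshman_outcomes.csv")   # one row per student (hypothetical file)

# Step 1: predict the first-year outcome from high school GPA alone.
base = smf.ols("first_year_gpa ~ hs_gpa", data=df).fit()

# Step 2: add the test score and see how much R-squared improves.
full = smf.ols("first_year_gpa ~ hs_gpa + act_composite", data=df).fit()

unique_variance = full.rsquared - base.rsquared
print(f"Test score uniquely explains {unique_variance:.1%} of the variance")
```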

Weigh the benefits of this 2% bump against the cost of standardized testing, against the ways the tests are misused (comparing one school district to another, or basing teacher pay on scores), and against the time spent on prepping for the tests themselves that could be used for other things (like, teaching math, for instance).  Then consider the ways in which testing perpetuates class and racial and income divisions, especially when colleges use it to make decisions.  Then ask if it’s all worth it.

 

 


Our National Saturday Morning Combines

Next week, the NFL Draft will come to Chicago again.  For most of us, it will mean a headache of closed streets, crowded restaurants, bad traffic, and a lot of fuss about a system of connecting players and teams that seems archaic or even anti-American to many.

For some people who live and die with sports, though, it’s among the most important weeks in their year.  And the way it works got me thinking about the process leading up to it, and how it connects–in an admittedly strange way–to college admissions.  Thus, my second blog post connecting football to test-optional admissions. Here’s the first one, in case  you’re interested.

Teams have a lot riding on the outcome of the draft, and they spend millions of dollars scouting players.  This includes weeks of film study, personal interviews with players, and participation in the NFL Combine, where they run players through a series of drills like the 40-yard dash, the bench press, and the 3-cone drill.  These results are provided to all the teams who analyze the data and combine it with their own individual analysis.

Lots of people have pointed out that these workouts provide almost no additional value to teams: The 40-yard dash, something offensive linemen almost never do, for instance, is particularly suspect, yet it’s one of the most frequently cited statistics; even for positions where you think it might matter, it often doesn’t.  Jerry Rice, arguably the greatest receiver in NFL history, had an abysmal time of 4.71 seconds.

If you can’t get to that Wall Street Journal article, here is a chart from it that makes the point:

[Chart from the Wall Street Journal article]

And there are other articles you can easily find showing players who did really well on these tests and ended up being not-so-great in football (although, of course, just making it to the NFL is quite an accomplishment).

The reason is simple, I think: The NFL is looking for people who can play football really well.  And the best way to find football players with potential is to watch them actually play football.  The other stuff might (emphasis on might) contribute to football ability, but the NFL would never draft a guy who finished first in every test, but had never set foot on a football field.

So it is with college admissions at test-optional colleges.  We’re looking for students who can do academics really well.  That’s best measured by high school performance, not your results in a three-hour Saturday morning dash.

The four years a student spends in high school is far more similar to the four years in college than three or four hours in a standardized testing environment on a Saturday morning: On Saturday, you choose the “right” answer from four given.  All things being equal, this is not a bad skill to have.  And if you’re a super-selective institution, you have the ability to demand both exceptional academic performance and great standardized tests.  (If the NFL draft were not such a shining example of socialism, the Bears might be able to attract more talent–as measured on both football ability and combine scores–because of a financial base stronger than the Tennessee Titans, for instance.)

But the four years in college isn’t spent choosing an answer on multiple choice tests under a tight time constraint, just like time on a football field isn’t spent doing 40-yard dashes or bench-pressing lots of weight repeatedly or jumping as high as you can.

It’s spent listening, reading, absorbing, synthesizing, dissecting, drafting, writing, and re-writing over a period of ten or fifteen weeks.  And if you’ve done something similar in high school, and done it well, there’s a good chance you’re ready for the big leagues of college.  Again, if you have the academic equivalent of a great broad jump, that’s terrific.  But not having it doesn’t mean you won’t do well.  Similarly, of course, there are really good football players who don’t measure up in the vertical jump, and in fact, almost never have to do a vertical jump on the field.

The NFL Combine is likely to continue, probably because the league is filled with people who have always done it that way, and people who have come through the system.  In that sense, it’s like the people at the most prestigious universities, who are there in part precisely because they scored well on standardized tests throughout their whole lives, and who believe those tests are meaningful and important in selecting candidates.

They don’t seem to mind that they’re almost certainly missing a Tom Brady or a Jerry Rice; and they’re not interested in taking chances because a) they don’t have to, b) there is little reward for doing so, and c) they believe tests indicate something important.  Many other institutions don’t see life through the same lens.  And that’s the big difference.

Whence comes this new-found concern?

This has been an interesting couple of weeks for college admissions, following an interesting year.

The Harvard Graduate School of Education has issued a report entitled Turning the Tide that advocates a major overhaul in the way college admissions is done.  I spoke to the author of the document last year as he was pulling support together, and my first response was, frankly, not enthusiastic.  It seemed the things we talked about–highly stressed students focusing on developing the perfect resume solely for the purpose of getting into an elite institution–were not on my radar.  My university is one of the several hundred in the great middle of the distribution in higher education in the US: Moderately selective, reasonably well known, with a reputation and a student body unlike the Ivy League institutions; although my (gr)atitude has been labeled “sour grapes,” I can honestly say I wouldn’t want to work at those super-selective institutions, which are best known for making a silk purse out of silk.  (In the interest of fairness, I should point out that the president of DePaul, a colleague of the author at Harvard, signed in support of the paper.)

As I read through the document, though, I softened a bit.  Some of the recommendations are very consistent with my own philosophy: De-emphasizing standardized tests, promoting the idea that there are many, many more great colleges than the average American might think, and encouraging students to get off the hamster wheel.  It’s hard to argue with much of it, even if the problems it touches do seem to be experienced by a very small group of colleges at the top of the bell curve, and even if it’s always easier to find problems than to correct them.

Despite this, many school counselors in a Facebook group I belong to are still skeptical. Some reported calls from parents who asked whether junior could drop an AP course or two. Others want to know just how this could or should change the approach toward the application. Will Dix took a major swing at it here, and summed up a lot of thinking from a lot of people.   I don’t agree with all of what he writes, but much of it is spot on. Some of my colleagues are convinced that “kindness” consultants will crop up to help students appear more kind and caring.  This, of course, is simply Campbell’s Law, and no one should be surprised by it.

One poster in our group even suggested that changing admissions criteria was a way to legally discriminate against Asian students, who, when evaluated simply or solely on academic accomplishment and test scores, simply outshine others.  It’s an interesting theory, I think, and not unlike my recent post suggesting lawsuits by Asian students might be the first step in diminishing the importance of the SAT or ACT.  But I’m not sure that’s it.

Others have pointed out that this initiative may have the opposite effect: Students who lack superstar credentials might think “kindness” is sufficient for admission and might apply in droves, making these institutions even more selective, and thus even more prone to focus on those with extraordinary academic accomplishments.  Possibly.  Prestige–and thus selectivity–is the coin of the realm in much of higher education.

My concern is a conceptual one: Should kindness be a value measured and rewarded in the college admissions process?  I mean, the easy answer is yes, but shouldn’t that be a local decision each college makes on its own?  Shouldn’t there be room for colleges who actually strive to make students kinder, or more compassionate?  Or will the super-selective institutions simply take kind students, turn out kind students, and take credit for it, much like critics say they do in matters academic?  A bigger question for another day, I think.

The most common observation and objection, however, was that this was somehow tied to The Coalition for Access, Affordability, and Success, a controversial alignment of strange bedfellows in higher education who, apparently frustrated with some technical difficulties the Common Application encountered, decided to take their ball and go home.  (I cannot confirm that any of them did or did not suffer irreparable harm, but egos are large in higher ed, and those bruises aren’t so easily seen).  It’s possible, of course, that the creators of The Coalition were a part of this: That they suspected their own initiative, one I suggested was really not about college access at all, would somehow seem sweeter if dipped in sugar and drizzled with caramel.

I’m not so sure about that, either, but I suppose it’s possible, even though the timing doesn’t seem right.  I do, however, see another connection, and one that seems to be overlooked.  Turning the Tide seems to owe more to Excellent Sheep than it does to other sources, and interestingly enough, both are faculty initiatives.

Could it be that Turning the Tide is simply an expression of faculty who yearn to teach fewer grinds, fewer Wall Street focused students, fewer students who want to be told what to do in order to get their reward?  Could it be they’d just like to teach students who care about bigger societal issues rather than their own comfort, amusement, and power? Students who want to chart their own course, and define success in ways they think are more personal?

In a chapter I wrote on the role of college admissions in the academy for a university textbook, I included this:

Similarly, Karabel (2005) suggests that the policies of the admissions offices at Harvard, Yale, and Princeton in the late 1800’s and early part of the 20th century had as much, or even more, to do with shaping the 20th century as what actually went on inside the classrooms at those quintessential brand names in American higher education. This might seem like a formidable burden to hoist onto the shoulders of mere gatekeepers, but it exists as perhaps an excellent introduction to the widely divergent perspectives on college admissions offices today.

If you believe admissions offices can shape the world we live in, Turning the Tide might stand as a milestone of societal change.  And I hope that’s the case. But if the support from the super-selectives is disingenuous, we’ll find out soon enough.

What do you think?

 

High School Counselors, It’s Your Turn

You might be surprised to learn that Harvard doesn’t care what I think.  No one at Cal Tech consults me before making decisions.  And no one at the University of Chicago–our neighbor on the south side of Chicago–has ever called and asked me to lunch.

This is my influence on higher education in the US.

But as I thought about all the buzz surrounding The Coalition for Access, Affordability, and Success, the recently launched initiative by 83 colleges and universities, and the collective angst it’s generated, I recalled an oft-repeated discussion on several versions of the old NACAC e-list.  It would go something like this:

  • A high school counselor would send a message, asking why colleges send letters (yes, back when this started, it was letters) to students at the point of application, saying the application was incomplete, even before checking the files to see if it really was incomplete.
  • A discussion would go on for a few days
  • I’d finally jump in and say something like this: “If just ten of you from large high schools near these offending colleges would write a letter (yes, a letter) to the dean of admissions, and say this process annoyed you, and that you find it harder to be enthusiastic about her college with your students, it would stop.  And if it didn’t, you could write the president, and then it would stop.”
  • The discussion would grind to a halt
  • No one would write the letter
  • Next year, the same discussion would happen

There are lots of reasons people might think The Coalition is a bad idea: I have a few of my own thoughts I put into a piece for the Washington Post yesterday.  Mostly, the label of “Access” is just a ribbon on a lump of coal in a pretty box, I think; but beyond that, I cannot understand how a more fractured application process is good for low-income kids.  I’m willing to admit I’m wrong if that proves to be the case and someone can make sense of it for me.

Counselors who work with high school students have other reasons, not the least of which concerns the questions they’ll get about an untested product and process, and the burden this will put on the mechanical systems associated with college guidance.

My concerns don’t matter much to The Coalition, I think.  If you’re a high school counselor, yours might.  Especially if you’re a public high school counselor, the group whose students The Coalition claims to want to serve better.  So it’s time.

First, I’m sorry to say, we in colleges, and probably independent counselors (who I’m told are often treated like pariahs at the most selective colleges), can’t do much.  It’s all on you, high school people.  But there is a lot you can do:

  • Send an email to the chief admissions officer.  If you can’t find his or her real email, send it to the admissions account
  • Do the same to the president or provost
  • Share this post on your state or regional ACAC listserv (actually, anyone can do this)
  • Send it directly to high school counselors, being sure to include large, under-resourced public schools.  Ask them to write

Just a week after the launch, I sense this discussion is already grinding to a halt, which is what happens when no one on the other side is talking publicly about this.  It’s a brilliant move.  The question is whether you’ll let it succeed.

Colleague Rafael Figueroa at Albuquerque Academy reminded us of the old Turkish proverb: No matter how far you have gone on the wrong road, turn back. I think this is advice The Coalition needs to hear.  From you.

It’s your turn.  Princeton is not returning my calls.

At NACAC, some thoughts about The Coalition

A few days ago–probably not coincidentally just before the annual NACAC conference–we got a first look at the long-rumored Coalition for Access, Affordability, and Success.  Presumably, this group of about 80 high-profile private and large public institutions was founded to improve access, affordability, and success for populations traditionally underserved by our current admissions process.  Or, perhaps it would be better to say, “traditionally underserved by the institutions in ‘The Coalition for Access, Affordability, and Success’.”

According to this story in Inside Higher Education, the requirements for membership are a 70% graduation rate for all institutions; for public members, “affordable tuition along with need-based financial aid for in-state residents,” and for private institutions, a commitment to “provide sufficient financial aid to meet the full, demonstrated financial need of every domestic student they admit.”

So already, we’re in a sort of Alice in Wonderland mode.  Where to begin?

  • Several of these institutions are need-aware in admissions (and despite the rhetoric, I believe none of them are need-blind, a very nice sounding term that belies reality).  So if you’re poor, you don’t have an equal shot in the first place
  • Second, many of these institutions show graduation rates for Pell Grant students as much as 15 points lower than for non-Pell students, and sometimes as low as 58% for those students.  Shouldn’t the Pell Graduation rate be the measuring stick?  I’d think so.
  • Third, these places are among the very worst offenders when it comes to enrolling low-income students, according to their own data.  It’s not surprising that they are also the most selective (at least the privates.) But if they’re among the most selective, don’t you think they could find some low-income kids or first generation students among those they’re already rejecting? I would suspect so.  However, one of the admissions deans at one of these institutions implied–in public–that there just were not enough poor kids who were smart enough to do the work at her institution.  So maybe not. (Note: this visualization shows 2012 data instead of the most-recently available 2013 data because I was in a hurry and didn’t have time to start from scratch, so I reused an older visualization. I will update this when the 2014 data comes out this month.)
  • About a quarter of the public institutions have net prices of over $12,000 for students with family incomes under $30,000.  That’s a pretty big chunk of family income.  It seems inconsistent with “affordable.”
  • Finally, “meeting 100% of demonstrated need” might be a quibbling point for many.  Most of the private institutions in this group use Profile to collect additional financial information over and above what the federal form, the FAFSA, collects, and while some students may get more aid after filing Profile, most get less.  Some colleges require a contribution of $5,000 from summer work for all students (even those who don’t make $5,000 in the summer), and others use large loans “to meet need.”  Need, of course, is an entirely silly construct, as I’ve written before.

To be fair, many of these institutions don’t have to share any of their wealth with poor students if they don’t think it’s in their mission to do so, so even 6% of freshmen with Pell might be viewed as altruistic beyond what is necessary.

So there’s that.

But these colleges also want to keep applications up and admit rates down, and offering poor kids the lottery ticket seems like a good way to do so.  If colleges told everyone they had to pay the $60,000 out of their own pockets, apps would probably plummet, or maybe increase dramatically from wealthier, but less qualified, students.  Thus, the expenditure on even a little bit of aid might seem like a good investment, or the cost of doing business in the higher education industry.

But there are a lot of other questions that might be asked, too.

For instance, one of the things The Coalition will be offering is an online suite of college planning tools, including a portfolio service to allow students to begin assembling application support materials as early as the 9th grade.  This is not unlike what I recommended a while ago when I suggested Google might be a good way to manage college apps.  The difference is I was recommending it for everyone, not just applicants to these institutions.  You have to wonder, though: Who is most likely to jump on this service: Poorer kids with non-college-educated parents who attend under-resourced schools, or wealthier kids with college-focused parents who are already driving their college planning with counselors, test-prep, essay editing, and opportunities in “better” schools?  I’ll give you a moment to ponder that.  Along this line, one counselor said The Coalition should rename itself the “Independent College Counselor Full Employment Act of 2015.”

Do The Coalition members plan to review all the portfolios of all the applicants, or just those of the students this initiative is intended to help?  I think this should be clear before many kids start this process.

What’s most puzzling in all this, at least to me, is the creation of a new Coalition Application students will be able to use to apply to the Coalition Colleges.  How will this application be different?  And, as it’s being built by College Net, a company that sued Common Application, and since the discussion of The Coalition appeared to surface after huge Common App problems in Fall of 2013 associated with the rollout of a new platform, is this just a big “screw you” to Common App?  Or is it a reaction to the Common App becoming, well, more common (in the pejorative sense of the word one might associate with wealth, prestige, and status)?

In that light, what about data sharing?  Will this be an opportunity for member institutions to share data on applicants, or will applicant privacy be respected as it is on the Common App?  Will the content in portfolios be reserved for member institutions, or will it be shared with other, non-Coalition schools?

Inquiring minds want to know.  And I’ve spoken to a lot of them today.  In fact, when there is an issue like this, and I’m the one serving as gadfly, I often do so alone.  This is unlike anything I’ve seen; I could not find a single person who thought it was a good idea, or that it made sense.  That doesn’t mean there aren’t any, of course.  This appeared today.

Ultimately, I have a couple of big concerns:

  • First, the use of the name “Access” when there are many, many colleges who provide way more access to underserved kids.  True, we’re not the extraordinarily selective institutions, but still, thinking that you must go to a selective institution is part of the big college admissions problem this country already faces.
  • Second, the big question: How does a more fragmented application process help poor kids who are already intimidated by the complexity of the admission process?  I’m scratching my head on that one; I just don’t see it, and neither does a single high school counselor I’ve spoken to today.
  • Third, the concern–like the one I have with the 568 Group–that this is a price-fixing scheme.  Will there be common needs analysis, or will competition (which only helps the student, despite what many tell you) reign supreme?
  • Finally, my feeling that this is mostly a public relations ploy by institutions who have come under some heat lately for not enrolling low-income students in great numbers.  In fact, the initial press release, coming from a public relations consultant rather than one of the members, only adds to my suspicion.

In my presentation yesterday, during which I touched briefly on a few of these points, I mentioned this group was acting more like a cartel (designed to limit competition and fix price) than a coalition.

I might be 100% wrong about all of this, of course.  But I’d like to see something other than aphorisms before admitting it.

When Harvard becomes a purple giraffe

One of the very first posts I put on my other blog–the one focused on higher education data–was about the Claremont McKenna test score reporting scandal.  You can take a look at it here if you’d like a summary of the data.  At the time, I thought the difference between the actual scores (which many colleges would love to be able to report) and the reported scores (which even more would want to report) was pretty tiny.  Hardly worth it.

But I think one of the reasons people obsess over things like average test scores and admission rates is precisely because they have something Robert Sternberg has called “the illusion of precision.”  This gets exacerbated by the perception that, as in the case of CMC, tiny changes in the numbers can cause you to fall out of the top 10 into the god-forsaken land of 11 or 12.

It’s just one of the things that adds to confusion and, probably, stress, among everyone associated with college admissions.  That includes parents, students, admissions officers, and high school and independent counselors.

What’s really interesting, though, is that we’re doing it all wrong, at least in the case of test scores.  This is not a post about the value of standardized testing in admissions; I’ve already expressed my opinion about that.  Instead, this is a little bit about numbers, and the types of numbers used in research as variables. You may remember these as nominal, ordinal, interval, and ratio.

A nominal variable is not really a number: It just looks like one.  Mickey Mantle’s 7, or ZIP Code 90210.  You can’t really do anything with these “numbers.”  For instance, if the Cubs infielders wore 10, 11, 18, and 14 (Santo, Kessinger, Beckert, and Banks), you can’t really say the average infielder wore number 13.25. And if you had a million records in a census file, trying to average ZIP codes might give you a number, but it wouldn’t mean anything: ZIP codes are just labels that look like numbers.

Then, there are ordinal numbers, used to rank things.  “The Cubs finished 4th in the Division.”  “Mary was the (1st) tallest girl in the class.”

 

People get into trouble all the time using ordinal numbers, because there is some sense to them.  A team that finishes first is better than one that finishes third.  In a room of 19 men, the tallest man in the room is taller than the fourth-tallest man in the room.  But if you try to average 1st, 2nd, 3rd…19th, you’ll always get 10.  And it doesn’t matter if you have the Chicago Bulls in one room and the Wizard of Oz Munchkins in another.  The average of ranks will always be 10 in a room of 19.
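A two-line sketch makes the point; the heights themselves never enter into it.

```python
# Ranking 19 people by height and averaging the ranks always gives 10,
# whether the room holds the Bulls or the Munchkins.
ranks = list(range(1, 20))        # 1st tallest through 19th tallest
print(sum(ranks) / len(ranks))    # 10.0
```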

This also happens with survey data.  Suppose you ask two questions, and ask people to respond on a scale of 1 to 5, where 1 means “not at all” and 5 means “a whole lot”:

  • How much do you like cupcakes?
  • How much do you like sardines?

You might find that the average response is 4.8 for cupcakes and 2.4 for sardines.  But despite those results, that does not mean people like cupcakes twice as much as sardines. The numbers are just ordinal, essentially meaningless for precise comparisons (it is safe to say, however, that people would, in this example, like cupcakes more than sardines.  I know I do.)

Interval variables and ratio variables are more like the numbers you think of all the time.  Getting four hits in a baseball game means you had four times the number of hits of the guy who got one; it also means you have three more hits than he got.  Buying six bananas means you bought twice as many as the person who bought three.

 

Here’s the shocker, though: People average test scores all the time, even though they’re ordinal values.

Go to this link and look at the table.  These are percentile scores for each possible composite score.  You’ll see that a 30 represents a score in the 95th percentile, which means 95% of all test takers scored at or below 30.  And that a 20 represents a score in the 49th percentile, and so on.  If you average the 30 and the 20, you get 25, which is the 79th percentile, not the average of the 95th and the 49th percentiles (which would be 72).

The point, of course, is that a 30 is a higher score than a 20, but that it’s meaningless to say it’s “10 better.”  And moving your score from a 19 to a 29 is a much bigger percentile gain than moving from a 25 to a 35.  They’re numbers we expect to make sense as numbers, but they don’t.
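Here’s a tiny illustration of that gap, using composite-to-percentile pairs that approximate the table linked above (the exact percentiles shift a little from year to year):

```python
# Averaging ordinal scores vs. averaging what they stand for.
# The composite-to-percentile pairs approximate the table linked above;
# exact values shift a little from year to year.
percentile = {20: 49, 25: 79, 30: 95}

avg_score = (30 + 20) // 2                       # 25
print(percentile[avg_score])                     # 79 -- the percentile of the averaged score
print((percentile[30] + percentile[20]) / 2)     # 72.0 -- the average of the percentiles; not the same thing
```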

What does all this mean? In short, colleges and students are focused on numbers that are not nearly as meaningful as we might think they are.  The same can be said of admission rates, which can be manipulated in a variety of ways, so much so that people obsess over differences that are essentially meaningless.

Could we have a simple solution? Maybe (assuming you can’t wave a magic wand to make the crazy go away.)

What if test score ranges and admit rates were renamed and grouped into categories?  We could name the categories by letter, animal, color, or anything, even a number if we wanted to.  So Harvard is now a Green on Test Scores and an A on selectivity; DePaul is now a Blue on test scores and a G on selectivity; or Lafayette is a Purple D; Columbia College (there are a lot of Columbia Colleges, so no one’s going to get mad at me here) is a Yellow M.
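Here’s a sketch of what that relabeling might look like; the cut points and band names below are invented purely for illustration, not a proposal for the actual boundaries.

```python
# Sketch of the relabeling idea: collapse precise-looking numbers into broad bands.
# The cut points and band names are invented purely for illustration.
def selectivity_band(admit_rate):
    for ceiling, label in [(0.10, "A"), (0.25, "B"), (0.50, "C"), (0.75, "D")]:
        if admit_rate <= ceiling:
            return label
    return "E"

def score_band(median_act):
    for floor, label in [(33, "Green"), (29, "Blue"), (24, "Purple"), (19, "Yellow")]:
        if median_act >= floor:
            return label
    return "Orange"

print(score_band(34), selectivity_band(0.06))   # Green A
print(score_band(26), selectivity_band(0.70))   # Purple D
```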

We’d still have a hierarchy, of course, because that isn’t going away any time soon.  And colleges who are right on the cusp of moving up or down are probably still going to focus on attempting to move up or avoiding moving down.  Of course, the obsessive will always be with us, and will want to know whether your admission rate was 9.8% or 10.1%.  But possibly, some of the bad stuff will go away, or at least begin to be less important in the larger educational context.

But if we change some of our language, if we admit that these numbers are not as precise as they seem, we might make a small step toward a more rational, reasonable discussion of ranking colleges and universities.

What do you think?