Ok, I suspect you’re not surprised.
Over the past few years, as more colleges have researched the value of standardized tests in admissions, using their own data and weighing their own missions against the question of student access, more of them have eliminated the standardized-testing requirement. FairTest lists almost 1,000 of them, though many, to be fair, are non-selective institutions that never required tests in the first place. Others, however, are among the US News and World Report Top 100, and include names like Middlebury, Hamilton, Wake Forest, Agnes Scott, and Lake Forest. And of course, many large public institutions don’t use tests for the large percentage of their incoming class who automatically qualify for admission based on class rank.
There are many reasons why colleges go test-optional, and, to be fair, not all of them are completely benevolent. I had the provost of one mid-sized East Coast public university call me and ask how much I thought they could raise the SAT mean of the freshman class by going test-optional. I said I didn’t know, because we reported scores for all students who had tests (even those who were admitted without them), and I also told him that raising the mean wasn’t a good reason to pursue the policy anyway. That university still requires tests.
Almost every institution that goes test-optional has a faculty full of researchers who need to be convinced before changing an admissions policy, and most places I know that have done this (going back to the California study in 2002) find that tests uniquely explain about 2–4% of the variance in freshman grades, and not much of anything beyond that. In short, we don’t need tests to make good admissions decisions. Period. A lot of colleges and universities believe the same thing. If students have scores, and they want to send them, we’ll put them in the file along with anything else they think is important.
(Uniquely is an important word. Tests, by themselves, explain much more of the variance than that. But tests and high school GPA–the best predictor–are strongly correlated. Once you account for that correlation, you discover the tests add very little value. This makes sense to most researchers, of course, and the people at the testing agencies are aware of it (Wayne Camara of ACT even agreed with me on the 2% point when we talked face-to-face), so they present the data differently, by talking about things like “chances for a grade of B or better.” Everything, it seems, is linear and continuous until it isn’t.)
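If you want to see the “uniquely” point in arithmetic, here’s a minimal sketch using the standard two-predictor R² formula for standardized variables. The correlations below are hypothetical placeholders, chosen to be roughly in the range validity studies report; they are not numbers from any actual dataset:

```python
# Hypothetical correlations (illustrative only, not from any real study):
r_gy = 0.50   # high school GPA vs. freshman grades
r_ty = 0.45   # test score vs. freshman grades
r_gt = 0.65   # test score vs. high school GPA (the strong correlation at issue)

# Variance in freshman grades explained by each predictor alone
r2_gpa_alone = r_gy ** 2
r2_test_alone = r_ty ** 2

# Standard two-predictor R^2 formula for standardized variables
r2_both = (r_gy**2 + r_ty**2 - 2 * r_gy * r_ty * r_gt) / (1 - r_gt**2)

# Unique contribution of the test: what it adds beyond GPA
r2_test_unique = r2_both - r2_gpa_alone

print(f"Test alone:  {r2_test_alone:.1%}")   # ~20% of variance
print(f"Test unique: {r2_test_unique:.1%}")  # only ~3% beyond GPA
```

With these made-up but plausible inputs, a test that looks impressive on its own (explaining around 20% of variance) adds only a few percentage points once GPA is already in the model, which is the whole argument in miniature.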
“But,” people say, “why would you throw away information, even if it is of little value? It’s extra information!! And extra information is better!!”
This is like one of those arguments your meshuganah uncle Sherman brings up at Thanksgiving, after he has a few drinks and starts bitching about foreign aid, or food stamps, or the fakakta tax policy. If you only listen for a second, it seems to make sense. But when you think about it, you realize it’s mostly uninformed BS.
The reason you don’t want this extra information is the cost of acquiring it. Suppose, for instance, we didn’t have standardized tests, but someone created one, and told you this:
- The test will eventually be used by journalists, school districts, and politicians in ways the test makers never wanted it to be used, like comparing how well school districts use resources, or how good a college is.
- The kids who are already going to go to college are going to score higher on this test, because it measures not just academic preparation (kind of) but also social capital, like parental attainment, income, and ethnicity. In short, it measures opportunity and gives opportunity to those with the most opportunity.
- The test will cost hundreds of millions of dollars, and will also take away from instructional time, as teachers (see above) are pressured to teach to the test. English classes will give multiple choice tests in the interest of preparing students for high-stakes testing.
- After all that, it really won’t help you predict very much of anything academic. Non-standardized grades from thousands of teachers at 40,000 high schools are still way better.
Would you be enthusiastic about it? I suspect I know your answer.
For a while, test-optional was seen as a fringe movement. I don’t think the big testing agencies, The College Board and ACT, thought too much about it. It was that little pimple that no one saw because it was covered up by your underwear.
Things have changed, however. The first volley was this ACT report suggesting test-optional policies were not only not good: They were bad. I followed up with this reply. ACT wasn’t finished, of course, and has since published papers suggesting tests were a vital part of admissions, and that my response was unfounded. (You’ll have to go here and find these articles, as the fakakta ACT website doesn’t allow direct article links). The author of the article called me “One Individual” and suggested I didn’t know how to read a data chart. People who frequent my other blog might find this claim amusing. I did. And don’t tell Wayne Camara this, but ACT actually invited me to spend time on their campus in Iowa City to talk to their data people about Tableau, which I did. I can read a chart.
As an aside, part of my criticism of the ACT report was the way they called out students with a 10 Composite Score. Here was the chart in the original report, which I screenshotted and which appears to have been removed. More on why this is important in a minute.
This has stepped up a bit more with a session at NACAC (slides here) promoting a new book sponsored by, well, the College Board, ACT, or both (no one knows, and the people soliciting interviews wouldn’t be upfront about it). The session description is below in blue and I’ve added italics, which I respond to below:
Despite widespread media coverage, underlying claims about the benefits of “going test-optional” have largely escaped empirical scrutiny, and support for these claims tends to be of limited generalizability and/or fail to adequately control for student selectivity and other factors. Hear a rigorous and balanced approach to the contemporary debate on standardized testing and the test-optional movement. Explore how test-optional practices emerged and expanded, how widespread grade inflation has made it increasingly difficult for institutions to solely rely on students’ prior achievement, and how standardized admission tests can be used in predicting student retention, achievement, and graduation.
The second point, about rigorous and balanced: It’s as rigorous and balanced (and impartial) as this, where seven tobacco company executives testify before Congress that nicotine is not addictive. As Upton Sinclair once said, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”
The third point, about grade inflation: It may be caused by the College Board itself.
The fourth point, how standardized tests can be used to predict retention, performance, and graduation, basically boils down to this: Rich kids have better test scores and graduate at higher rates, so it must be the tests that predict things (more on this later).
I probably wouldn’t have cared too much about the research or opinions presented, as I understand how business works. They’re going to tell you what they want you to hear. We don’t know Sal Tessio’s last words, but his almost-last words were, “Tell Michael it was only business.” Michael, we’re assured, understood this. And we all do. It’s only business.
But I got angry when a former EM practitioner, who now takes sponsorship from ACT and the College Board at the Symposium he runs each January, suggested test-optional policies were just a publicity stunt, designed solely to increase selectivity or test scores, and the panel nodded in approval as though they were Tito, Marlon, Jackie and Jermaine backing up Michael. It was a shameful attempt to superimpose the endorsement of the profession on an opinion not shared by a substantial percentage of professionals (I’m tempted to say majority, but I don’t have any hard statistics to back up that claim).
So a few points: First, this chart. It’s just one of the many fakakta charts used in the presentation. Go ahead, tell me what it says. I’ll wait.
The worst part of this is not just how confusing it is: Look at the dual y-axes. Note how truncated they are. This is what a bad chart designer does when she is told to prove a point even though a point might not be there. If the deceptiveness of the truncated y-axis is lost on you, you might want to read How to Lie with Data Visualizations.
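If you want the arithmetic of why a truncated y-axis deceives, here’s a toy sketch. The two values are made up purely for illustration; the point is the ratio of drawn bar heights, not any real data:

```python
# Hypothetical example: two groups whose outcomes differ by 3 points.
group_a, group_b = 92.0, 95.0

# Honest axis: bars start at zero, so drawn heights match the data.
full_ratio = group_b / group_a                 # bars look nearly equal

# Truncated axis starting at 90: only the sliver above 90 gets drawn.
baseline = 90.0
trunc_ratio = (group_b - baseline) / (group_a - baseline)

print(f"full axis: B looks {full_ratio:.2f}x as tall as A")
print(f"axis cut at {baseline:g}: B looks {trunc_ratio:.2f}x as tall as A")
```

A real 3% difference gets drawn as a bar two and a half times taller, which is exactly the visual trick a truncated axis buys you.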
Second, this heatmap, which is in the Chapter about “When grades and test scores disagree.”
It shows three apparent areas of focus: The oval covering the red section shows when test scores and grades largely agree (the red indicates high concentrations of numbers). And of course, this is prima facie evidence that tests are simply, for large numbers of students, redundant. The second oval at top left shows students with high test scores relative to GPA. These students are not really the focus of test-optional policies; if there were a policy of grade-optional, well, maybe we’d want to look at them. But the third oval at the bottom is the kicker, in the eyes of test fans. Those are students with ACT scores of 11 and below. It purports to show–via its evenly spread colors–that their grades are all over the place! Some of them even have a 4.0!
Well, yeah, about that. As I pointed out above, ACT just loves to scare people into thinking test optional admissions policies are all about admitting kids with an ACT Composite Score of 10 or 11. (Test prep experts tell me if you simply sat down and bubbled in random answers without reading the test booklet, you’d get about an 11 Composite score on the ACT; so these students score at or below the guessing threshold).
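Here’s a back-of-the-envelope sketch of that guessing claim. It assumes four answer choices on every item (the real ACT mixes four- and five-choice items) and simulates the raw number correct only; the mapping from raw score to the composite scale isn’t modeled here, so the “about an 11” figure remains the test-prep experts’ claim, not this code’s:

```python
import random

random.seed(0)

N_QUESTIONS = 215   # the ACT's total item count across its four sections
CHOICES = 4         # simplifying assumption: four options per item

def guess_score():
    """Raw number of items answered correctly by bubbling at random."""
    return sum(1 for _ in range(N_QUESTIONS) if random.randrange(CHOICES) == 0)

trials = [guess_score() for _ in range(10_000)]
mean_correct = sum(trials) / len(trials)
print(f"mean raw score from pure guessing: {mean_correct:.1f} of {N_QUESTIONS}"
      f" ({mean_correct / N_QUESTIONS:.0%})")
```

Pure guessing lands around a quarter of items correct on average, which is the “guessing threshold” the scores in that bottom oval sit at or below.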
Before looking at the chart below, take a guess at what percentage of testers score at 11 or below. Got a number? It’s that little green slice of these 100% stacked bars, or about 1% in an average year. It’s gone up as mandatory state testing has increased; most of these students would probably not take the test if left to their own devices.
In raw numbers, that’s about 124,000 over 14 years, most of them poor, and most of them African-American and Hispanic. In 2015, it was about 13,000 out of 1.3 million test takers. These are not kids applying to moderately, let alone highly, selective institutions. Don’t worry, Yale. You’re safe. If each of those boxes in the bottom oval were in fact equal, it would be about 1,000 students per year per box. Statistical noise.
In case you’re interested in the role income plays in testing, here’s the same data, broken out on the x-axis by income instead of over time. I claim no responsibility for you breaking your eyes by trying to find the little green slivers among the wealthy populations.
If you’re still reading, take a gander at this one (keep the chart right above in mind). It shows that students with lower test scores are less likely to persist. Given what you now know about tests and income, are you surprised that students with low scores have lower persistence? Our testing agency researchers are. It’s almost like money has something to do with college enrollment. (On this chart, H, M, and L refer to high, medium, and low ACT and GPA.)
And finally, the continued pronouncement that test prep doesn’t work. Really? This contention is something people at the conference literally laugh at: the absurdity of the notion that the only test prep that works is our test prep. It’s the hill some people want to die on, I guess, and they have that right.
What You Are Dying To Know
Here’s the big thing: I do not give a damn if a college wants to use tests or not; if they want to do video interviews in place of high school transcripts; if they want to require students to do backflips and handstands for the admissions committee; if they want to measure shoe size and research how it affects academic performance, or if they want to admit every applicant with ability to benefit, the way community colleges do. It’s their decision.
I do care, and I get annoyed, when testing agencies lie and distort in order to save their bacon. And when they do, I’ll write about it.
PS: Thanks to Brad Weiner for the challenge: