This post is longer than most; if you know the history and the background and just want the numbers, skip ahead to the results near the end.
In February 2011, DePaul announced that it would become the largest private, not-for-profit university in the nation to adopt a test-optional policy. There was ample precedent for this: Many other colleges had offered test-optional policies for a long time, and the results had been positive at all of them, as far as we could tell. In fact, Bates had published a 20-year analysis that effectively demonstrated the tests did not help much in predicting academic success for their students.
There were two motives behind our move to test-optional. The first was our own statistical research, which suggested that the standardized admissions tests widely used today explain almost no variance in freshman performance (a combination of GPA and credits earned) once you eliminate the effect of covariance with high school performance. (In other words, test scores and grades tend to move in the same direction, so when you’re looking at a student with high test scores, you’re usually looking at a student with high grades. Usually. And grades explain freshman performance better, though they still leave a lot of room for other factors to help make sense of it all.)
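The covariance point can be sketched in code. The following is an illustrative sketch with synthetic data only (not DePaul’s dataset; the variable names and effect sizes are invented for illustration): when a test score and high school GPA share a common factor, adding the test score to a model that already includes high school GPA barely raises the variance explained.

```python
import numpy as np

# Illustrative only: synthetic data, not DePaul's actual dataset.
# The idea: once high school GPA is in the model, adding a test score
# that co-varies with it raises R^2 only slightly.
rng = np.random.default_rng(0)
n = 2000

ability = rng.normal(size=n)                          # unobserved common factor
hs_gpa = ability + rng.normal(scale=0.6, size=n)      # grades track ability, with noise
test = ability + rng.normal(scale=0.9, size=n)        # test score co-varies with grades
college_gpa = hs_gpa + rng.normal(scale=1.0, size=n)  # freshman performance

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_gpa = r_squared(hs_gpa[:, None], college_gpa)
r2_both = r_squared(np.column_stack([hs_gpa, test]), college_gpa)
print(f"R^2 (HS GPA alone):      {r2_gpa:.3f}")
print(f"R^2 (HS GPA + test):     {r2_both:.3f}")
print(f"Incremental R^2 of test: {r2_both - r2_gpa:.3f}")
```

In this toy setup the incremental R² from the test score is close to zero: the test predicts freshman GPA on its own, but almost everything it knows is already captured by high school grades.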
Second, we knew that students from certain groups (women, low-income students, and students of color, especially) tend to score lower on standardized tests, and that when used alone, scores can under-predict their first-year performance. Our own anecdotal evidence and our discussions with high school counselors told us that students often ruled themselves out of applying to certain colleges based solely on a test score. And research at the University of Chicago on Chicago Public Schools (CPS) students suggested the same thing; it also pointed out that CPS students who took a strong high school program graduated from DePaul at the same rate as other students, despite test score profiles that suggested they were “at risk.”
So, we took the plunge.
Robert Sternberg, who was then the Provost at Oklahoma State University, offered several compelling reasons why we as a nation are so wedded to test scores. He starts his essay with, “Many educators believe that standardized tests, such as the SAT and the ACT, do not fully measure the spectrum of skills that are relevant for college and life success,” and goes on to outline factors such as the illusion of precision (a number sounds precise, so it must be); familiarity (lots of smart people in academia are good testers and that’s how they got to where they are, so they’re not inclined to challenge one thing that confirms their intelligence); and the fact that tests are free to the college.
I would offer another: Standardized tests do, in fact, measure a certain type of intelligence, and super-selective institutions have the luxury of selecting on both academic performance and this more limited skill. All things being equal, if you can require high test scores for admission, why wouldn’t you?
But only a handful of the nation’s 4,000 institutions hold that market position and can demand that their students present very high scores. I often wonder what type of research has been done at the other 3,900: Perhaps there are many colleges and universities that have shown a strong value to standardized admissions tests. However, I suspect it’s just as likely that tests–and average test scores–serve as benchmarks, or a type of shorthand, for an industry that has been unable to measure what it does or how it affects students.
In some sense, the admissions office really does not know, and can’t define, what it’s looking for in candidates for admission precisely because we’ve never been able to predict with much precision where students will end up. It could be intelligence, if we could define that. Or maybe insight. Or wisdom, or drive, or motivation. More likely, some combination of them that recombines into the elusive “it.” Oh, we know a successful college student when we see one, but it’s harder than you think to pick those who will succeed ahead of time based on their high school records and tests: Apparent shoo-ins flunk out despite sterling records; marginal admits make the dean’s list; and middle-of-the-road students can go either way.
We still believe there is that “it”: the one thing that will tell us all we need to know. If only we could measure it, or even define it.
We get lazy. If enough students with high scores have “it,” our confirmation bias goes into high gear, despite evidence to the contrary. We suppose it’s a kind of intelligence, and we embrace logical fallacies as we celebrate our ersatz discovery. It’s what I call the Poodle Fallacy: If you have a poodle, you know you have a dog; but not having a poodle doesn’t prove you don’t have a dog. We believe that high testers have “it,” and we forget that low testers might have it too. Frequently, of course, they do.
Never mind that the inventor of the multiple-choice test, Frederick Kelly, called them tests to measure lower-order thinking skills. Never mind that in real life–in business, medicine, law, education, engineering–the right answer is never presented for you to choose. And never mind that it’s often difficult to ascertain the right question to ask, let alone have the luxury of a 25% chance of guessing the right answer (or of improving your chances by eliminating obviously wrong options). If school is not like life, it may be doubly true of tests.
University of Maryland professor William Sedlacek, who has done a considerable amount of research on the role of non-cognitive variables in college success, recognizes that tests seek to measure cognitive and verbal ability (both important, of course), but that doing well in college also depends on students’ adjustment, motivation, and perceptions of themselves. And Sternberg also touts skills related to creativity, wisdom, and practicality–not the analytic and memory skills measured by tests–as necessary for leadership and citizenship.
Despite all this, we knew how wedded people were to the idea and the practice of test scores. Having two children in high school in a test-obsessed school district, I’ve seen it firsthand: Even English literature tests are multiple choice, and parents receive notices about how we need to encourage our children to do well on tests because a lot–an awful lot, like taxpayer satisfaction and state and federal funding–might be riding on them.
These tests, created by someone who’s never met our children, never taught them a class, and who can’t be sure they’ve even covered the material on those tests, carry a lot of weight. And lots of people are heavily invested in them, some for reasons we don’t know and can’t figure out.
The headline still stung: “DePaul dumbs down by dropping exams.” A writer who had never spoken to me or anyone at DePaul, who had never gathered any feedback, and who had apparently read only an article in the paper she was writing for (an article done as filler on a deadline), took some uninformed and irresponsible swings at us. As you can imagine, it sent ripple effects through our offices, and sent me on a sort of tour of key constituencies: high school counselors, alumni, the President’s Cabinet, the deans, the associate deans, college advisory groups, our own student government, and even our own division.
I spent a lot of time that spring explaining that we didn’t adopt a test-optional policy to a) get more selective, b) raise the median test scores we report, c) garner publicity, d) increase diversity, or e) ruin the university. And I demonstrated why none of those was plausible anyway. People inside and outside DePaul were very receptive to, and supportive of, our initiative.
My one regret is that we didn’t anticipate or prepare for the backlash, especially the hardest type to respond to: The opinions of the uninformed. To anyone who is thinking of doing this, I would only advise that you get ready for a lot of weak opinion masquerading as knowledge.
I’d be remiss if I didn’t mention two important things here: One is that I am not opposed to standardized tests, but rather to the weight they carry in many important discussions and analyses of our educational systems. For the very many students who don’t test as well as their native intelligence suggests they might, the tests can be the thing that kills dreams, even when those students have worked hard, taken everything their school offers, and excelled. And even, many times, when they have “it.” (And, I have to admit, on occasion a test also serves as a “ticket out” for some kids.)
The second is that I know many good people at agencies that conduct standardized testing. Unlike some, I don’t think they’re evil or wrong-headed or driven by impure motives. I believe most of them are working hard, and trying to do a really hard job: Measuring the capacity to handle college work, and to thrive in it. It ain’t easy; I just find it hard to believe that we can sum up anything in a single number. To a person, everyone at the agencies I’ve talked to has understood why we did this–why DePaul’s mission makes us a good candidate for it–and they’ve been nothing but professional and collegial. I continued to serve on the College Board Regional Council and DePaul staff have been asked to speak at the ACT Enrollment Planners Conference.
The results of our first class, after one year at DePaul, are in. And while the students who completed their freshman year have a long way to go before we pronounce test-optional an unqualified success, the results are encouraging. As a reminder, we collect the scores for every student post-admission, as part of the research studies we’ll be conducting, but we didn’t know those scores at the point of admission.
And now that we’ve presented the results to our Faculty Council, I can share them more widely.
After one year, the entering class of 2012 at DePaul shows the following:
- Freshman-to-sophomore retention was virtually identical, at 84% for test-optional students and 85% for testers.
- GPA for testers was .07 of a grade point higher (not statistically significant) despite a median ACT score that was 5.5 points lower for test-optional students.
- Believing that income has a big effect on academic performance, we split the class into Pell Grant recipients and non-recipients. Not surprisingly, Pell status means a lot more than people think: Within each group, testers and test-optional students were identical to each other. The effects of poverty are meaningful.
- In two of our colleges, test-optional students had higher GPAs than testers.
- None of the test-optional students started the second year on academic probation, compared to 1.7% of testers.
We noticed a few things we’ll research further: The first-year GPA discrepancy was higher in the College of Science and Health than in any other college, at .25 of a point. Testers earned slightly more credits than test-optional students, but again, this difference disappeared when we split by Pell Grant status.
We have a long way to go to put any research questions to rest; a thorough analysis is an integral part of our agreement with Faculty Council at DePaul as we move through the four-year pilot program. But for now, we’re moving ahead, buoyed by the results so far.
Test-optional applications dropped in our second year, much to our chagrin. Colleagues at other institutions predicted this would happen, as students learn that “test-optional” does not mean “ability optional.” And while we have officially been agnostic about whether students apply with or without tests, we do hope that well-prepared students from rigorous high school programs will continue to consider DePaul, regardless of their standardized test scores.
What do you think?
P.S. A special thanks to my DePaul colleague Carla Cortes for helping with editing and checking my facts.