A Look at Test Optional Results: Year 1


This post is longer than most; if you know the history and the background and just want the results, skip to the ~4~ mark.

~Some History~

In February 2011, DePaul announced that it would become the largest private, not-for-profit university in the nation to adopt a test optional policy. There was ample precedent for this: Many other colleges had offered test-optional policies for a long time, and the results had been positive at all of them, as far as we could tell. In fact, Bates had published a 20-year analysis that effectively demonstrated the tests did not help much in predicting academic success for their students.

There were two motives behind our move to test optional: The first was our own statistical research, which suggested that the standardized admissions tests in wide use today explained almost no additional variance in freshman performance (a combination of GPA and credits earned), once you eliminated the effect of their covariance with high school performance. (In other words, test scores and grades tend to move in the same direction, so when you’re looking at a student with high test scores, you’re usually looking at a student with high grades. Usually. And grades explain freshman performance better, albeit with still a lot of need for other factors to help make sense of it all.)
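The “almost no additional variance” idea above is what statisticians call incremental R². A minimal sketch with synthetic data (a hypothetical illustration of the technique only, not DePaul’s actual model or data) shows how it works: fit freshman GPA on high school GPA alone, then on high school GPA plus a test score, and compare how much variance each model explains.

```python
import numpy as np

# Hypothetical synthetic data: test scores co-vary with HS GPA, and
# first-year GPA is driven mostly by HS GPA in this toy model.
rng = np.random.default_rng(0)
n = 5000

hs_gpa = rng.normal(3.2, 0.4, n)
test = 0.8 * (hs_gpa - 3.2) / 0.4 + rng.normal(0, 0.6, n)
fy_gpa = 2.0 + 0.5 * hs_gpa + 0.02 * test + rng.normal(0, 0.3, n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_hs = r_squared(hs_gpa[:, None], fy_gpa)
r2_both = r_squared(np.column_stack([hs_gpa, test]), fy_gpa)

print(f"R^2, HS GPA alone:      {r2_hs:.3f}")
print(f"R^2, HS GPA + test:     {r2_both:.3f}")
print(f"Incremental R^2 (test): {r2_both - r2_hs:.3f}")
```

Because the test score in this toy setup is mostly a noisy copy of high school GPA, the second model barely improves on the first: the incremental R² is close to zero even though the test score, looked at on its own, correlates with freshman GPA.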

Second, we knew that students from certain groups (women, low-income students, and students of color, especially) scored lower on standardized tests, and that when used alone, scores can under-predict first-year performance. Our own anecdotal evidence and our discussions with high school counselors told us that students often ruled themselves out of applying to certain colleges based solely on a test score. And research at the University of Chicago on Chicago Public Schools (CPS) students suggested the same thing; it also pointed out that CPS students who took a strong high school program graduated from DePaul at the same rate as other students, despite test score profiles that suggested they were “at risk.”

So, we took the plunge.

~2~

Robert Sternberg, who was then the Provost at Oklahoma State University, offered several compelling reasons why we as a nation are so wedded to test scores. He starts his essay with, “Many educators believe that standardized tests, such as the SAT and the ACT, do not fully measure the spectrum of skills that are relevant for college and life success,” and goes on to outline factors such as the illusion of precision (a number sounds precise, so it must be); familiarity (lots of smart people in academia are good testers, and that’s how they got to be where they are, so they’re not inclined to challenge the one thing that confirms their intelligence); and the fact that tests are free to the college.

I would offer another: That standardized tests do, in fact, measure a certain type of intelligence, and super-selective institutions have the luxury of selecting on both academic performance and this more limited skill. All things being equal, if you can require high test scores for admission, why wouldn’t you?

But only a handful of the nation’s 4,000 institutions have that market position and can demand that their students present very high scores. I often wonder what type of research has been done at the other 3,900: Perhaps there are many colleges and universities that have found strong value in standardized admissions tests. However, I suspect it’s just as likely that tests–and average test scores–serve as benchmarks, or a type of shorthand, for an industry that has been unable to measure what it does or how it affects students.

In some sense, the admissions office really does not know, and can’t define, what it’s looking for in candidates for admission precisely because we’ve never been able to predict with much precision where students will end up.  It could be intelligence, if we could define that.  Or maybe insight.  Or wisdom, or drive, or motivation.  More likely, some combination of them that recombines into the elusive “it.”  Oh, we know a successful college student when we see one, but it’s harder than you think to pick those who will succeed ahead of time based on their high school records and tests: Apparent shoo-ins flunk out despite sterling records; marginal admits make the dean’s list; and middle-of-the-road students can go either way.

We still believe there is that “it”: the one thing that will tell us all we need to know. If only we could measure–or even define–it.

We get lazy. If enough students with high scores have “it,” our confirmation bias goes into high gear, despite evidence to the contrary. We suppose it’s a kind of intelligence, and we embrace logical fallacies as we celebrate our ersatz discovery. It’s what I call the Poodle Fallacy: If you have a poodle, you know you have a dog; but not having a poodle doesn’t mean you don’t have a dog. We believe that high testers have “it,” and we forget that low testers might have it too. Frequently, of course, they do.

Never mind that the inventor of the multiple-choice test, Frederick Kelly, called them tests to measure lower-order thinking skills. Never mind that in real life–in business, medicine, law, education, engineering–the right answer is never presented for you to choose. And never mind that it’s often difficult to ascertain the right question to ask, let alone having the luxury of a 25% chance of guessing the right answer (or of improving your chances by eliminating obviously wrong options). If school is not like life, tests are even further removed from it.

University of Maryland professor William Sedlacek, who has done a considerable amount of research on the role of non-cognitive variables in college success, recognizes that tests seek to measure cognitive and verbal ability (both important, of course), but that doing well in college also depends on students’ adjustments, motivations, and perceptions of themselves.  And Sternberg also touts skills related to creativity, wisdom, and practicality–not analytic and memory skills measured by tests–as necessary for leadership and citizenship.

Despite this, we knew how wedded people were to the idea and the practice of test scores. With two children in high school in a test-obsessed school district, I’ve seen it firsthand: Even English literature tests are multiple choice, and parents receive notices about how we need to encourage our children to do well on tests because a lot–an awful lot, like taxpayer satisfaction and state and federal funding–might be riding on them.

These tests, created by someone who’s never met our children, never taught them a class, and who can’t be sure they’ve even covered the material on those tests, carry a lot of weight.  And lots of people are heavily invested in them, some for reasons we don’t know and can’t figure out.

~3~

The headline still stung: DePaul dumbs down by dropping exams. A writer who had never spoken to me or anyone at DePaul, who had gathered no feedback, and who had apparently read only an article in the paper she was writing for (an article done as filler on a deadline) took some uninformed and irresponsible swings at us. As you can imagine, it sent ripple effects through our offices, and sent me on a sort of tour of key constituencies: high school counselors, alumni, the President’s Cabinet, the deans, the associate deans, college advisory groups, our own student government, and even our own division.

I spent a lot of time that spring explaining that we didn’t adopt a test-optional policy to a) get more selective, b) raise median test scores we report, c) garner publicity, d) increase diversity or e) ruin the university.  And I demonstrated why none of those was plausible, anyway.  People inside and outside DePaul were very receptive, and supportive, of our initiative.

My one regret is that we didn’t anticipate or prepare for the backlash, especially the hardest type to respond to: The opinions of the uninformed. To anyone who is thinking of doing this, I would only advise that you get ready for a lot of weak opinion masquerading as knowledge.

I’d be remiss if I didn’t mention two important things here: One is that I am not opposed to standardized tests, but rather to the weight they carry in many important discussions and analyses of our educational systems. For the many students who don’t test as well as their native intelligence suggests they might, the tests can be the thing that kills dreams, even when those students have worked hard, taken everything their school offers, and excelled. And even, many times, when they have “it.” (And, I have to admit, on occasion a test also serves as a “ticket out” for some kids.)

The second is that I know many good people at agencies that conduct standardized testing.  Unlike some, I don’t think they’re evil or wrong-headed or driven by impure motives.  I believe most of them are working hard, and trying to do a really hard job: Measuring the capacity to handle college work, and to thrive in it.  It ain’t easy; I just find it hard to believe that we can sum up anything in a single number.  To a person, everyone at the agencies I’ve talked to has understood why we did this–why DePaul’s mission makes us a good candidate for it–and they’ve been nothing but professional and collegial.   I continued to serve on the College Board Regional Council and DePaul staff have been asked to speak at the ACT Enrollment Planners Conference.

~4~

The results for our first class, after one year at DePaul, are in. And while the students who completed their freshman year have a long way to go before we pronounce test-optional an unqualified success, the results are encouraging. As a reminder, we collect the scores for every student post-admission, as part of the research studies we’ll be conducting, but we didn’t know those scores at the point of admission.

And now that we’ve presented the results to our Faculty Council, I can share them more widely.

After one year, the entering class of 2012 at DePaul shows the following:

  • Freshman-to-sophomore retention was virtually identical, at 84% for Test-optional and 85% for testers.  
  • GPA for testers was .07 of a grade point higher (not statistically significant) despite a median ACT score that was 5.5 points lower for test-optional students.  
  • Believing that income has a big effect on academic performance, we split the class into Pell grant recipients and those who did not receive Pell. Not surprisingly, Pell status means a lot more than people think: within the Pell group, testers and test-optional students performed identically, as did testers and test-optional students within the non-Pell group. The effects of poverty are meaningful.
  • In two of our colleges, test-optional students had higher GPAs than testers.
  • None of the test-optional students started the second year on academic probation, compared to 1.7% of testers.

We noticed a few things we’ll research further: The first-year GPA discrepancy was higher in the College of Science and Health than in any other college, at .25 of a point. Testers earned slightly more credits than test-optional students, but again this difference disappeared when we split by Pell Grant status.

We have a long way to go to put any research questions to rest; a thorough analysis is an integral part of our agreement with Faculty Council at DePaul as we move through the four-year pilot program. But for now, we’re moving ahead, buoyed by the results so far.

Test-optional applications dropped in our second year, much to our chagrin.  Colleagues at other institutions predicted this would happen, as students learn that “test-optional” does not mean “ability optional.”  And while we have officially been agnostic about whether students apply with or without tests, we do hope that well prepared students from rigorous high school programs will continue to consider DePaul, regardless of their standardized test scores.

What do you think?

 

P.S. A special thanks to my DePaul colleague Carla Cortes for helping with editing and checking my facts.

11 thoughts on “A Look at Test Optional Results: Year 1”

  1. I think it’s important to underline our dependence on test scores and how limiting they can be in the college admissions process. At my institution, we use an entrance exam and the same company that created this exam provides other testing materials that we use during the program. Essentially, we are saying to our students- “pass this exam and you will be successful in this program, and will successfully pass the professional credentialing exam that you sit for at the end of the program”. It says very little about how successful we are as an institution, even though over 90 percent of our graduates do pass the exam on the first try. I see so many applicants struggle with an exam that was created for native English speakers/readers- (it is timed), and gives too much attention to general knowledge questions rather than critical thinking skills. I applaud this fresh approach from DePaul, and wish you much success!

  2. What a great summary of your first round of assessments! It is heartening to know that your data – and student experiences — support DePaul’s front end assumptions. But, you expected they would. Congratulations to all who were involved in this decision and process at DePaul. Your careful, data-driven process was just as important as the outcome – perhaps more so, in fact. Thanks for sharing this good information, Jon.

  3. Jon — Thanks for sharing this interesting background and discussion. I admire your and DePaul’s initiative and courage in going test-optional and sharing your experience for the benefit of the wider community. I am happily finished, in general, with educational work and counseling (other than occasional, pro bono help for interesting students who are not getting the assistance and support they need to figure out what is important to them and where to find the institutional/community matches that might serve them well). The “it” question has always fascinated me, and I am encouraged that admission committees appear to continue devoting lots of time and effort to identifying its nature and to looking for signs of it in each packet of application and supporting materials. Bates College and DePaul University and their peers who have embraced being test-optional have helped us all by risking derision from journalists and commentators and the resultant negative feedback that can come as a result of bad press. Your first-year results, though, will encourage other institutions to look more open-mindedly at their applicants and to move away from over-reliance on test scores that enable US News and others to sell a simplistic, wrong-headed concept that higher scores mean better institutions and better education.

    Dick Steele once told me about coming out of his office at the end of a day, chatting with a colleague who taught chemistry about the quality of their institution’s (Carleton College’s) program, and hearing in response that it was good, but neighboring St. Olaf College’s Chemistry Department was even better. I was intrigued by the story’s implication that even for an experienced insider, ranking one’s own institution comprehensively was extraordinarily difficult — so the prospect of someone’s accurately ranking many evolving institutions would be laughably out-of-reach. I wish you well in your valuable work at DePaul University that benefits us all.

  4. Congratulations on doing an excellent job with test-optional. Wake Forest University is proud to stand beside you in the growing ranks of test-optional universities. We are approximately one-third of all four-year degree granting institutions in the US, and each semester our numbers increase.
    Professor Joseph A Soares

  5. Pingback: An Early Look at DePaul U.’s Test-Optional Policy – Head Count - Blogs - The Chronicle of Higher Education

  6. I was enchanted by the style of the blog post, its openness and depth. I think DePaul did the right thing, given its mission. It would seem fair to conclude that other than for the hyper-elite schools the search for the ‘it’ is not one in which test scores are particularly helpful. I suppose this makes admissions life difficult but interesting.

  7. Jon, Great article with excellent insight based on your new data. You have always impressed me with your data based decision making and this seems like another winner based on accurate reporting and analysis. I am going to share this with my Faculty and Staff as I know it will create quality discussions on campus about “why we do what we do” in the enrollment process.
    Dave Armstrong, President, Thomas More College, Crestview Hills, KY

