The College Board and The Catholic Church

I had another title all picked out for this post: It was going to be “An Interesting Week for the College Board” or something like that.  But after I typed the title, I realized that the College Board and the Catholic Church have a lot in common, at least from where I sit. (And this is a good time to reiterate that the opinions on this blog are mine, and may be influenced by my work, but are in no way an official or unofficial representation of the views of my employer.)

Both the College Board and the Catholic Church:

  • Claim me as a member, even though I’ve never really bought into most of the dogma
  • Are fairly inflexible–some might say intransigent–about what they believe to be the one right way to do things
  • Have new leaders who have brought considerable change
  • Get criticized for being big, cash-laden organizations that spend lavishly despite their purported concern for the poor
  • Have fervent, dedicated believers as well as detractors who despise the very thing they stand for
  • Provoke wrath when they presumptuously try to impose their beliefs on others
  • Have mostly good people in their employ who are, I believe, trying to do good, despite the opinion some people hold of the institution as a whole
  • Seem to be having some trouble making both their core product and their core message resonate with people these days

But this week, The College Board got most of the attention, at least if you work in education.  President David Coleman announced sweeping changes in the way the SAT is designed and administered.  You can read about them all over the web, but start here if you haven’t heard.

Reaction was swift, and came from all corners: Some thought the changes were a response to the ever-growing dominance of the ACT; there were the inevitable articles about “dumbing down” the test or American education in general; The New York Times appeared to trade private access for a flattering story; some praised the changes and expressed great faith that the new SAT will be all it claims to be; and many simply said, “meh.”  The most strongly worded response came from Leon Botstein, the President of Bard College, who wrote in Time Magazine calling the SAT “Part Hoax; Part Fraud.”

My own feelings are probably well known to anyone who reads this blog, and are admittedly a combination of hard research and personal observation:

  • Any test created by someone who never taught the material to the students tested is inherently lacking
  • The SAT and ACT do explain freshman performance, but because the tests and high school GPA covary so strongly, they mostly duplicate the effect of GPA without doing it better.  As an incremental measure over and above high school GPA, their benefit is negligible at best.
  • GPA–even compressed GPAs from 35,000 different high schools–explains more about freshman performance than the SAT or ACT (no one from either organization disputes this, by the way).
  • Both tests do, in fact, measure a certain type of intelligence: Picking the “right” answer from four given. And the fact that the tests might get it right 40% of the time seems good enough for many. However, this is not necessarily the way students “do” college.  In life, as in many classes, you aren’t always handed the question; even when you are, the answer often can’t be reduced to a few words.
  • The tests have a very high “false negative” rate and a very low “false positive” rate for whatever it is (and we can’t always even define what it is) they purport to measure (see the sketch after this list)
  • Insecure people who have high standardized test scores are often the ones touting the value of standardized tests
  • Super-selective institutions like the tests, even though they know the scores don’t predict much of anything academic, because a) high numbers equate with “smart” and with “high quality,” b) they don’t need to, nor do they want to, take any risk on students, and c) the one thing the tests do measure really well–wealth–is important to many of those colleges.  The tests also give them a convenient excuse to enroll fewer poor students.
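
To make the “false negative”/“false positive” language concrete, here is a minimal sketch with entirely made-up numbers; the cohort sizes, the score cutoff, and the definition of “success” are hypothetical, chosen only to illustrate the terms, not drawn from any real data:

```python
# Hypothetical cohort, cross-tabulated by test result and college outcome.
# All counts are invented for illustration only.
#                         succeeded    did not succeed
# scored above cutoff        300              20
# scored below cutoff        500             180

above_success, above_fail = 300, 20
below_success, below_fail = 500, 180

# False negative: the test flags a student as "at risk" (low score),
# but the student succeeds anyway.
false_negative_rate = below_success / (below_success + above_success)

# False positive: the test flags a student as a likely success (high score),
# but the student does not succeed.
false_positive_rate = above_fail / (above_fail + below_fail)

print(f"false negative rate: {false_negative_rate:.1%}")  # high: most of the students who succeeded had low scores
print(f"false positive rate: {false_positive_rate:.1%}")  # low: few of the students who did not succeed had high scores
```

In this made-up cohort, the test misses most of the students who go on to succeed while rarely vouching for a student who doesn’t; that is the shape of the problem described above.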

But as I sat at my desk watching the webinar preview for representatives of college and university admissions offices, several things struck me as interesting:

  • The implicit acknowledgement that test-prep works, and that it favors the wealthy
  • The tacit admission that the SAT is not the great predictor of value colleges have been led to believe it is
  • The opening remarks about transparency and the immediate request that I not tweet the presentation details seemed to be at odds with one another.  (It’s customary to tell people that information is embargoed before they accept an invitation to hear or read it.)

And especially this tidbit, from a slide in the presentation, which I’m claiming is fair use, in case any lawyers are reading this:

[Slide image from the presentation: SAT06]

This may or may not strike you as new, but at the very least it should raise some questions: Who invited the College Board into 6th grade?  And if they begin developing curricula for 6th grade, how will we know whether they’re effective? A test, you say? Developed by whom?  Don’t hurt your brain trying to figure out this puzzle; the answer is obvious.

Thus, there is one more way the College Board and the Catholic Church seem to be alike: Even at those times when you’re tempted to believe they’re doing something for the right reason, you always have to wonder what motive lies underneath it all.

Meet The New Boss, Same as the Old Boss


Bloody Monday: Not just for the NFL

It started sort of innocently enough: A post on a Facebook page for college admissions officers.  It was one of those questions high school or independent counselors post, asking about or complaining about (often with just cause) some college practice.

It was there, buried in a longer response: “I wish I understood why yield was still a concern now that US News has downplayed that rubric in their rankings.”  I pointed out–more politely than usual, I might add–that yield (the percentage of admitted students who enroll) has a huge effect on enrollment.  Suppose you admit 12,000 students, and you project a 20% yield rate to enroll 2,400.  If yield goes up to 21%, you have 120 extra students in your class, or 5% too many.  If it goes down to 19%, you come in 120 students short, at 2,280.  In other words, if one out of every 80 students behaves unlike you’d expect, you can be in big–or really big–trouble.
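
For anyone who wants to play with the arithmetic, here is a minimal sketch using the same numbers as the example above (12,000 admits, a 20% projected yield); the figures are purely illustrative:

```python
# Back-of-the-envelope yield math: 12,000 admits at a projected 20% yield
# means a target class of 2,400. Small yield swings move the class a lot.

admits = 12_000
projected_yield = 0.20
target_class = admits * projected_yield  # 2,400

for actual_yield in (0.19, 0.20, 0.21):
    enrolled = admits * actual_yield
    miss = enrolled - target_class
    print(f"yield {actual_yield:.0%}: {enrolled:,.0f} enrolled "
          f"({miss:+,.0f} vs. target, {miss / target_class:+.1%} of the class)")

# yield 19%: 2,280 enrolled (-120 vs. target, -5.0% of the class)
# yield 20%: 2,400 enrolled (+0 vs. target, +0.0% of the class)
# yield 21%: 2,520 enrolled (+120 vs. target, +5.0% of the class)
```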

And that was that.

Until today. I heard about another colleague who has lost an admissions or enrollment job, for reasons that are all too obvious.  That’s a half-dozen this year already, and we’re still a long way from spring; these are just people I know. And from what I hear, it’s just beginning.  This may be the bloodiest year in a long time: Maybe the bloodiest I’ve ever seen.

Are admissions people getting less and less competent? I suppose you might think so; maybe you’d be right. But maybe this will change your mind.  It’s a summary of some data presented on my other blog, Higher Ed Data Stories.

And back at the ranch, the expectation is pretty straightforward: More. Better. With less need for aid.

I’ve been in this business for 30 years, and I recall few days when “The Number” hasn’t crossed my mind.  Once this year’s class is in, you start worrying about the next one. Even so, we know it’s part of the job, and the rewards of what we do, as evidenced by the lives of the students we affect, are high.  But over time, we’ve seen “The Number” turn into “The Numbers,” with the complexity of the expectations increasing even as reality pulls strongly in the wrong direction on all counts.

It’s true, of course, that someone on campus knows about faculty productivity; someone knows how much financial aid we spent last year; someone knows how much money the Advancement Office has raised; and lots of people know how many games the basketball team has won.

But everyone knows the freshman class size (and the average test scores). Or so it seems.

In some sense, my colleagues are like NFL coaches: Success, a finite commodity by the nature of the game, is parceled out by the whims of the gods, and hard work and good fortune bless you with it on occasion.  But the organizational appetite never goes away, and when it’s not fed sufficiently, good people are shown the door, often replaced with someone who–in many ways–is just like the person leaving.  Only different.  The NFL has its Bloody Monday, the day after the season ends and coaches get fired.  In enrollment, we have bloody springs.

Having done this for so long, I’m grateful that I’ve been able to stay in one place as long as I wanted, but I’m also surprised when the pressures and the issues and the expectations we deal with are not obvious to those who don’t do it every day.  Maybe the same could be said of most professions. But for as much fun as this profession is, and for all the rewards it brings, I do wish we could bring a little more sanity to the continual upward spiral of expectations.

Don’t Write About Teachers

It was a Sunday, a day like any other Sunday.  I went to look at the NCES Digest of Education Statistics to see if any more tables from the 2013 version had been released.  To my delight, I found some interesting stuff; but most of the NCES Tables are designed to be printed as reports, and are in no shape to be pulled into the software I typically use, Tableau.

But this one on teacher salaries was in pretty good shape, even though I almost always focus on higher education data.  A couple of clicks, and I was ready to visualize.  I did so and put it up on my other blog, Higher Ed Data Stories, here. One of the meta-reasons for doing so is to show how much more understanding of an issue you can impart with a picture than with a table of data. I hope you agree.

I sent it off to some groups, and posted it to the NACAC e-list, an email group of college admissions professionals and independent and high school counselors.  It’s an open list, and Valerie Strauss from the Washington Post asked if she could share it.  It’s a blog and it’s public, so I happily agreed. It was up that afternoon, and you can read it here.

In addition to the hundreds of comments this has drawn on the WaPo site (which could fill a post of their own), I’ve received lots of emails and posts about the visualization.  They fall into several groups:

  • I’m trying to hurt teachers by showing how high salaries are
  • I’m trying to help teachers by showing how low salaries are
  • The data can’t be trusted because it’s from the Feds
  • The data doesn’t account for costs of living
  • The data doesn’t account for average years of service
  • The data isn’t split by union/non-union states
  • The data can’t be right because someone’s cousin makes way less than this
  • The data can’t be right because someone’s cousin makes way more than this
  • I shouldn’t have used red-green scales (and this person was right; I should know better).

Lessons learned, but good to repeat:

  • You can only viz the data you have
  • The limits of means as a measure of central tendency are not widely understood
  • Everyone’s an expert
  • I’m an idiot for stepping into this without understanding what a political landmine teacher pay is.

Lessons learned, internalized, and acted upon.  Stick to higher education.

And for those of you still reading, I had no political agenda at all; I simply thought the data was interesting, and that it would make a good visualization.

A Look at Test Optional Results: Year 1

This post is longer than most; if you know the history and the background and just want the results, skip to the ~4~ mark.

~Some History~

In February, 2011, DePaul announced that it would become the largest private, not-for-profit university in the nation to adopt a test optional policy. There was ample precedent for this: Many other colleges had offered test-optional policies for a long time, and the results had been positive at all of them, as far as we could tell. In fact, Bates had published a 20-year analysis that effectively demonstrated the tests did not help much in predicting academic success for their students.

There were two motives behind our move to test optional: The first was our own statistical research, which suggested that the standardized admissions tests widely used today explained almost no variance in freshman performance (a combination of GPA and credits earned) once you eliminated the effect of covariance with high school performance. (In other words, test scores and grades tend to move in the same direction, so when you’re looking at a student with high test scores, you’re usually looking at a student with high grades. Usually. And grades explain freshman performance better, albeit with still a lot of need for other factors to help make sense of it all.)
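
To illustrate what “almost no variance once you eliminate the covariance” means in practice, here is a minimal sketch with synthetic data; the correlations and coefficients below are invented for illustration and are not a reproduction of our actual analysis. The idea is simply to fit freshman performance on high school GPA alone, then on GPA plus a test score, and compare the two R² values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Synthetic data, invented only to illustrate the idea of incremental R^2;
# none of these numbers come from the study described above.
hs_gpa = rng.normal(3.3, 0.4, n)                                  # high school GPA
test = 0.9 * (hs_gpa - 3.3) / 0.4 + rng.normal(0, 0.45, n)        # test score, covaries strongly with GPA
college = 0.6 * hs_gpa + 0.05 * test + rng.normal(0, 0.4, n)      # freshman performance

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit of y on the predictors (plus an intercept)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_gpa = r_squared([hs_gpa], college)
r2_both = r_squared([hs_gpa, test], college)

print(f"R^2, GPA alone:  {r2_gpa:.3f}")
print(f"R^2, GPA + test: {r2_both:.3f}")
print(f"Incremental R^2: {r2_both - r2_gpa:.3f}")  # tiny once GPA is already in the model
```

Because the synthetic test score mostly tracks GPA, adding it to the model barely moves R²; that is the incremental-validity question in a nutshell.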

Second, we knew that students from certain groups (women, low-income students, and students of color, especially) scored lower on standardized tests, and that, used alone, scores can under-predict first-year performance. Our own anecdotal evidence and our discussions with high school counselors told us that students often ruled themselves out of applying to certain colleges based solely on a test score.  And research at the University of Chicago on Chicago Public Schools (CPS) students suggested the same thing; it also pointed out that CPS students who took a strong high school program graduated from DePaul at the same rate as other students, despite test score profiles that suggested they were “at risk.”

So, we took the plunge.

~2~

Robert Sternberg, who was then the Provost at Oklahoma State University, offered several compelling reasons why we as a nation are so wedded to test scores. He starts his essay with, “Many educators believe that standardized tests, such as the SAT and the ACT, do not fully measure the spectrum of skills that are relevant for college and life success,” and goes on to outline factors such as the illusion of precision (a number sounds precise, so it must be); familiarity (lots of smart people in academia are good testers and that’s how they got to be where they are, so they’re not inclined to challenge the one thing that confirms their intelligence); and the fact that tests are free to the college.

I would offer another: Standardized tests do, in fact, measure a certain type of intelligence, and super-selective institutions have the luxury of selecting on both academic performance and this more limited skill.  All things being equal, if you can require high test scores for admission, why wouldn’t you?

But only a handful of the 4,000 institutions have that market position and can command that their students present very high scores.  I often wonder what type of research has been done at the other 3,900: Perhaps there are many colleges and universities that have shown a strong value to standardized admissions tests. However, I suspect it’s just as likely that tests–and average test scores–serve as benchmarks, or a type of shorthand, for an industry that has been unable to measure what it does or how it affects students.

In some sense, the admissions office really does not know, and can’t define, what it’s looking for in candidates for admission precisely because we’ve never been able to predict with much precision where students will end up.  It could be intelligence, if we could define that.  Or maybe insight.  Or wisdom, or drive, or motivation.  More likely, some combination of them that recombines into the elusive “it.”  Oh, we know a successful college student when we see one, but it’s harder than you think to pick those who will succeed ahead of time based on their high school records and tests: Apparent shoo-ins flunk out despite sterling records; marginal admits make the dean’s list; and middle-of-the-road students can go either way.

We still believe there is that “it:” That one thing that will tell us all we need to know. If only we could measure–or even define–it.

We get lazy. If enough students with high scores have “it,” our confirmation bias goes into high gear, despite evidence to the contrary.  We suppose it’s a kind of intelligence, and we embrace logical fallacies as we celebrate our ersatz discovery.  It’s what I call the Poodle Fallacy: If you have a poodle, you know you have a dog; but not having a poodle doesn’t prove you don’t have a dog. We believe that high testers have “it,” and we forget that low testers might have it too.  Frequently, of course, they do.

Never mind that the inventor of the multiple-choice test, Frederick Kelly, called them tests to measure lower-order thinking skills.  Never mind that in real life–in business, medicine, law, education, engineering–the right answer is never presented for you to choose.  And never mind that it’s often difficult to ascertain the right question to ask, let alone have the luxury of a 25% chance of guessing the right answer (or improving your chances by eliminating obviously wrong options). If school is not like life, it may be doubly true that tests are not either.

University of Maryland professor William Sedlacek, who has done a considerable amount of research on the role of non-cognitive variables in college success, recognizes that tests seek to measure cognitive and verbal ability (both important, of course), but that doing well in college also depends on students’ adjustments, motivations, and perceptions of themselves.  And Sternberg also touts skills related to creativity, wisdom, and practicality–not analytic and memory skills measured by tests–as necessary for leadership and citizenship.

Despite this, we knew how wedded people were to the idea and the practice of test scores. Having two children in high school in a test-obsessed school district, I’ve seen it firsthand: Even English literature tests are multiple choice, and parents receive notices about how we need to encourage our children to do well on tests because a lot–an awful lot, like taxpayer satisfaction and state and federal funding–might be riding on them.

These tests, created by someone who’s never met our children, never taught them a class, and who can’t be sure they’ve even covered the material on those tests, carry a lot of weight.  And lots of people are heavily invested in them, some for reasons we don’t know and can’t figure out.

~3~

The headline still stung: DePaul dumbs down by dropping exams. A writer who had never spoken to me or anyone at DePaul, who never gathered any feedback, and who had apparently only read an article in the paper she was writing for (an article done as filler on a deadline), took some uninformed and irresponsible swings at us.  As you can imagine, it sent ripple effects through our offices, and sent me on a sort of tour of key constituencies: High School Counselors, Alumni, the President’s Cabinet, the Deans, the Associate Deans, College Advisory Groups, our own Student Government, and even our own division.

I spent a lot of time that spring explaining that we didn’t adopt a test-optional policy to a) get more selective, b) raise the median test scores we report, c) garner publicity, d) increase diversity, or e) ruin the university.  And I demonstrated why none of those was plausible anyway.  People inside and outside DePaul were very receptive to, and supportive of, our initiative.

My one regret is that we didn’t anticipate or prepare for the backlash, especially the hardest type to respond to: The opinions of the uninformed. To anyone who is thinking of doing this, I would only advise that you get ready for a lot of weak opinion masquerading as knowledge.

I’d be remiss if I didn’t mention two important things here: One is that I am not opposed to standardized tests, but rather to the weight they carry in many important discussions and analyses of our educational systems.  For the very many students who don’t test as well as their native intelligence suggests they might, the tests can be the thing that kills dreams, even when those students have worked hard, taken everything their school offers, and excelled. And even, many times, when they have “it.”  (And, I have to admit, on occasion a test also serves as a “ticket out” for some kids.)

The second is that I know many good people at agencies that conduct standardized testing.  Unlike some, I don’t think they’re evil or wrong-headed or driven by impure motives.  I believe most of them are working hard, trying to do a really hard job: Measuring the capacity to handle college work, and to thrive in it.  It ain’t easy; I just find it hard to believe that we can sum up anything in a single number.  To a person, everyone at the agencies I’ve talked to has understood why we did this–why DePaul’s mission makes us a good candidate for it–and they’ve been nothing but professional and collegial.  I continue to serve on the College Board Regional Council, and DePaul staff have been asked to speak at the ACT Enrollment Planners Conference.

~4~

The results for our first class, after one year at DePaul, are in.  And while the students who completed their freshman year have a long way to go before we pronounce test-optional an unqualified success, the results are encouraging.  As a reminder, we collect the scores for every student post-admission, as part of the research studies we’ll be conducting, but we didn’t know those scores at the point of admission.

And now that we’ve presented the results to our Faculty Council, I can share them more widely.

After one year, the entering class of 2012 at DePaul shows the following:

  • Freshman-to-sophomore retention was virtually identical, at 84% for test-optional students and 85% for testers.
  • GPA for testers was .07 of a grade point higher (not statistically significant) despite a median ACT score that was 5.5 points lower for test-optional students.  
  • Because we believe income has a big effect on academic performance, we split the class into Pell Grant recipients and those who did not receive Pell.  Not surprisingly, Pell status means a lot more than people think: Pell testers and Pell test-optional students were identical to each other, as were non-Pell testers and non-Pell test-optional students. In other words, the effects of poverty are meaningful. (A sketch of this kind of split, with made-up numbers, follows this list.)
  • In two of our colleges, test-optional students had higher GPAs than testers.
  • None of the test-optional students started the second year on academic probation, compared to 1.7% of testers.
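
For readers curious what “splitting by Pell status” looks like mechanically, here is a minimal sketch with a made-up dataset; none of the GPAs or group sizes below are DePaul’s actual results. They are invented only to show why a stratified comparison can tell a different story than an overall one:

```python
import pandas as pd

# Invented records, purely for illustration: in this toy data, Pell students
# have lower GPAs on average, and test-optional applicants are
# disproportionately Pell recipients.
students = pd.DataFrame({
    "pell":      ["yes", "yes", "yes", "no",  "no",  "no",  "no",  "no"],
    "submitted": ["no",  "no",  "yes", "yes", "yes", "yes", "yes", "no"],
    "gpa":       [2.9,   2.9,   2.9,   3.3,   3.3,   3.3,   3.3,   3.3],
})

# Naive comparison, ignoring income: testers appear to outperform.
print(students.groupby("submitted")["gpa"].mean())

# Stratified comparison: within each Pell group, the gap disappears.
print(students.groupby(["pell", "submitted"])["gpa"].mean().unstack())
```

In this toy example, the overall gap between testers and test-optional students comes entirely from the different mix of Pell recipients in each group, which is the pattern described in the bullet above.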

We noticed a few things we’ll research further: The first-year GPA discrepancy was higher in the College of Science and Health than in any other college, at .25 of a point. Testers earned slightly more credits than test-optional students, but again this difference disappeared when we split by Pell Grant status.

We have a long way to go to put any research questions to rest; a thorough analysis is an integral part of our agreement with Faculty Council at DePaul as we move through the four-year pilot program. But for now, we’re moving ahead with the pilot, buoyed by the results so far.

Test-optional applications dropped in our second year, much to our chagrin.  Colleagues at other institutions predicted this would happen, as students learn that “test-optional” does not mean “ability optional.”  And while we have officially been agnostic about whether students apply with or without tests, we do hope that well-prepared students from rigorous high school programs will continue to consider DePaul, regardless of their standardized test scores.

What do you think?

 

P.S. A special thanks to my DePaul colleague Carla Cortes for helping with editing and checking my facts.

When you question everything you think you know

Once in a while, something comes along that can make you think deeply about everything you have held to be true.  This is not one of those times, but it does support a lot of things I’ve believed to be true, even though I was in the minority in thinking so. So there is a certain elation in feeling vindicated.

Warning: You’re going to have to watch an 18-minute video to put this in context.

You may already know Malcolm Gladwell.  He’s an interesting guy, and he makes his points by telling good narratives, which makes him a great teacher.

In this one video, he addresses several things that people in higher education, and, to a lesser extent, society in general, tend to debate:

  • The choice of “big fish, small pond” vs. “small fish, big pond.”
  • The wisdom of admissions officers trying to figure out how much of a break to give to a kid from a “good school.”
  • The self-image people develop based on their surroundings, in the context of “Relative Deprivation Theory.”  (This may also explain why students write about community service as such a profound experience.)
  • Why firms that hire only from the “best schools” are probably making a huge mistake.

Watch, and prepare to be astonished.

 

On watching my daughter head off to take the PSAT

It’s Saturday, October 19, exactly 167 years to the day after the first of my ancestors to come to America arrived at the port of New Orleans, on a ship from Bremen, Germany, called the Manco.  In fact, it was so long ago that Germany wasn’t even a country then.  If you’re interested, here’s a copy of the form that was filled out as they entered the United States.  It’s unlikely the clerk at the point of entry could understand them well enough to spell their names correctly; and given that the name had been spelled so many ways in the records before 1846, it’s likely they didn’t think it mattered too much anyway.

We don’t know exactly why they left Germany to come here, but we think it was to escape military drafts; or perhaps they were being persecuted for their good looks:

[Photo: Elizabeth and Franz]

I thought of my ancestors as we got up early today to see my daughter, aged 15 for a few more days, off to take the PSAT, and wondered what they’d think if we could explain to them the way things operate these days.  And the more things unfolded, the more I thought about it.

Emily couldn’t find her calculator this morning, which is of course an important thing to have for this test. Normally, I’d stress out about things like this, but I instantly realized that it’s not really a big deal, so I shrugged my shoulders and let it slide.

For her the stakes are very low; she’s always done very well on standardized tests, but is probably five percentile points outside the range of National Merit cutoffs.  In other words, this is about as low-stakes as this high-stakes test gets.  Right now, the extraordinarily selective schools where scores really matter aren’t on her radar, and I doubt they will be when all is said and done.  Her outstanding academic performance should speak for itself when she applies to college, and will be supported by scores commensurate with her achievement in the classroom, for the schools who really care about such things.

But I can say that because I know what I know.  I thought about all the kids who don’t have parents who have worked in admissions and enrollment for 30 years.  For them, all testing is seen as very high-stakes.

Emily, like her brother before her, is a very bright kid, and yet by any measurement–standardized or not–she and her brother are very different people. Both are quiet and thoughtful, with varying degrees of genetic cynicism like their father, and healthy doses of warmth and affection like their mother. One is punctual and the other chronically late; one takes things as they come, while the other is focused and organized and always looking ahead; both have messy rooms and an interest in music; one is a whiz at math, while the other is facile with language and has been bitten by the acting bug. Multiply this by millions of kids, and you have a portrait of high school students in America and around the world.

And yet, we send all of them–future artists and chemists and actors and financial analysts and doctors and engineers–off to take a single test written by someone who has never met them; someone who has never taught them a class; someone who has never had the opportunity to see how they excel.  It is the most sterile, and in some ways, the most inappropriate use of the word assessment one might encounter.

We send them to take a test that purports to measure something important about them and all the other kids who are in, or who have gone through, high school.  Perhaps it does measure something important, if you consider the ability to pick the right (or most correct) answer from four given.  We all know that’s not at all like real life (where you’re not even always sure of the question, let alone given a range of discrete answers), but if given the choice, we’d probably all want to have that skill just in case; if nothing else, being able to eliminate wrong answers might come in handy if you’re ever on Let’s Make a Deal.

It’s important, however, to remember that the creator of the multiple-choice assessment considered the tests to be measures of lower-order thinking skills, despite their current reputation to the contrary. Now these tests are used to evaluate not just students but entire school districts, as this email from my kids’ school points out.  And tests created by someone who’s never taught my kids drive how my kids are taught:

[Screenshot of the email from my kids’ school district]

When the scores come in, we’ll look at them, of course.  And when the mail arrives in bushel baskets, we’ll sort through it all, lingering fondly over some, and sending some unopened to the recycle bin.  But we’ll never define a complex human being by scores on a three-hour test on a Saturday morning; my earnest hope is that our kids don’t allow that to happen to themselves.