How Much Is a Rejected Application Worth?

I used to have lunch almost every day with a group of faculty who were from the sciences, economics, and finance.  Occasionally someone from English or Political Science would join the table, but the types of discussions that went on scared some people and seemed to repulse others.  Although I’m a long way from Libertarian, I found the talks amusing and intellectually stimulating, and appreciated seeing how others view the world.  It was Freakonomics before anyone had heard of Freakonomics, and I will never think of brown snakes in New Guinea the same way again.

One of the finance professors loved telling the story of his introductory finance course for beginning MBA students. He liked to walk through the valuation of projects when one alternative carried a potential cost in human life. Someone would invariably pop up with the observation, "But you can't put a value on a human life."

The answer was swift, in the form of an example: "Suppose we could develop a drug to save a thousand people, but it would cost a million dollars. Should we do it?" The answer would come back immediately: "Of course we should." Then the professor would ask, "What if we could develop a drug to save just one person, but it cost a trillion dollars? Should we do that?" To which the student would reply, "Probably not." "So," the professor would gloat, "maybe we can't pin down the price of a human life, but we've just narrowed the possibilities." The lesson: everything has a monetary value in the world of finance. This is why, for instance, railroads won't protect every rural crossing with gates; it's actually cheaper to pay off the lawsuits when deaths happen than to construct crossing gates everywhere a track and a rural road intersect.
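If the gloat sounds too clever, the bounding argument is just arithmetic. A quick sketch, using only the numbers from the story above:

```python
# The professor's bounding argument, in two divisions.
lives_saved, cost = 1_000, 1_000_000        # the drug everyone funds
print(cost / lives_saved)                   # $1,000 per life: acceptable

lives_saved, cost = 1, 1_000_000_000_000    # the drug no one funds
print(cost / lives_saved)                   # $1,000,000,000,000 per life: not acceptable

# Whatever a life is "worth" in this framework, the class has just agreed
# it lies somewhere between those two numbers.
```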

The faculty were naturally interested in my work, and one day I mused about calculating the value of a rejected application for admission. It's clear that it's worth something: the more you reject, the more the public seems to value what you offer; the more demand for what you offer, the higher the price you can charge. And this is before we count the other benefits of selectivity. Of course, there is frequently a cost to generating more applications so you can reject more students, and that cost has to be a factor in any honest discussion.

I still wonder.

Part of what I do is to collect data from a group of about 30 other colleges and universities each month. After you've done this a while, you notice that applications at one institution can swing wildly from one year to the next. Take a look, for instance, at the most recent month's report, made anonymous and sorted randomly. It shows the one-year change in freshman applications for the upcoming fall term, measured at January 1:

[Chart: one-year changes in freshman applications at about 30 institutions, anonymized]

Pretty wild, huh? We all read these reports with a grain of salt. Some colleges had a weak year last year, so an increase can mean things are just returning to normal. Some have special missions that mean their markets fluctuate quickly. Others had great years last year, and are just coming back down to earth. But more often than not, colleges are using artificial means to generate large numbers of phantom or "soft" applications. If you're a high school counselor, you may know some of these tactics as Fast Apps, Snap Apps, or VIP Apps, or you may know them by some other name. Essentially, they make it easier and faster to apply. More important, they make it easy for colleges to appear more selective than they really are. (For the record, we at DePaul don't use any of those techniques, although we are a member of The Common Application, and have seen increases in applications since we joined.)

As has been pointed out, sometimes students apply just because they can, not because they want to.  And that’s the rub for me: While I believe we should make it easier to apply to college (think back to typing apps for each college on an IBM Selectric, or filling them out by hand), I don’t believe applying to college should be like an impulse purchase of chewing gum in the grocery store checkout aisle.

So, I read with interest this article in the Chronicle of Higher Education about Boston College’s dramatic drop in applications this year.  The reason cited? They made it harder to apply, by adding a specific essay prompt to the application.  Applications fell by 26%, from about 34,000 to 25,000.

In many ways, this is good: Those who do apply will be far more serious about BC, I’m sure, and yield will likely go up.  But in the end, so will admit rates.  And we all know that the whole industry of evaluating colleges and universities is based mostly on inputs, including selectivity.

You see the problem? BC will admit a lower number, but probably a higher percentage of applications.  They’ll spend less time evaluating applications from students with lower affinity in the first place, and more time on students with more interest.  This is good, right?  Right?
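To put rough numbers on those mechanics, here's a minimal sketch. Only the application totals come from the Chronicle article; the admit count is a placeholder I've invented to show the arithmetic:

```python
# Hypothetical: assume BC targets the same number of admits both years.
admits = 9_000                               # invented placeholder

apps_before, apps_after = 34_000, 25_000     # from the article
print(f"admit rate before: {admits / apps_before:.0%}")   # about 26%
print(f"admit rate after:  {admits / apps_after:.0%}")    # 36%
```

Same class, same students, roughly ten points "less selective" on paper.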

Will the fact that BC's admit rate is going up cause students not to apply next year, because they tie the perception of quality to the oftentimes-manipulated statistic of admission rates? Or will the perception (and I stress "perception") that BC is now slightly easier to get into actually drive more students to apply next year: students who in previous years might have thought they didn't have a chance?

We'll see. Whether a prominent, non-HYP university can make a move like this and pull it off remains an open question. I wish BC a great deal of luck. Ultimately, I hope to find out the value of a rejected application, or more precisely, the value of not rejecting applications.

More on Non-Cognitive Variables

I’m back from the USC CERPP Conference on non-cognitive variables in the college admissions process, where I gave a presentation on the results we’ve seen at DePaul, and offered some commentary on the whats and whys of what we as a profession are doing.

In a nutshell, here’s what I said in my presentation:

  • Most admissions people know that the things we collect at the point of admission do only a so-so job of predicting how well students will do in college.
  • We also find lots of kids who have “something” that we think will help them despite academic records that might suggest otherwise.
  • Many of the things we do look at tend to correlate with income.  Correlation does not imply causation, of course, but regardless, if we consider them important, some students benefit while others don’t.
  • The world’s great thinkers have struggled with cause and effect, and especially with prediction inside a complex system.  This is especially true when we suspect there are variables yet undiscovered that might lead to insight.
  • We may have the “poodle problem.”  That is, if we have a poodle, we know it’s a dog; if we have a dog, we can’t be sure it’s a poodle.  In the same way, if we say that students who succeed have certain traits, that does not mean that all students with those traits will succeed.
  • Non-cognitive variables, as we currently understand them, make our jobs a little, but not a lot, easier.  But a) that doesn't mean they won't eventually be more helpful, especially if we find better ways to measure them, and b) it certainly doesn't mean we should stop trying.

Some other things that stick in my mind as I get over the jet lag:

  • We all owe a great deal to Bill Sedlacek, for his amazing pioneering work in this field.
  • The Morehead-Cain Scholarship Program seems to be way ahead of the rest of us in understanding this stuff.
  • We saw research that suggested that the LSAT may predict who will be a good law student, but actually is a negative predictor of who will be a good lawyer.  Really amazing stuff.
  • Challenging the ridiculous premise that "GPA + test scores = merit" is perhaps our greatest task as a profession.  We have a long way to go in changing the collective belief.
  • As is almost always the case, the people I meet at conferences like this are interesting, smart, and really dedicated to their profession and to making the world a better place.
  • It was good to hear David Coleman speak about change at the College Board, and I'm glad they'll be releasing data to colleges to support our work, but I didn't hear enough over a dinner talk to convince me.
  • In my opinion, the premise upon which Richard Sander based his research for "Mismatch" could have and should have been vetted by talking to someone who could help him understand things more clearly.  (He's both an economist and a lawyer, two professions, I'm afraid, that tend to believe they can look at anything and figure it out with just a rigorous intellect.)

On that last point, let me give you an example: Sander posted a chart (a really horrible chart, by the way) showing widely varying completion rates for students who selected STEM (Science, Technology, Engineering, Math) majors.  But he fails to see his own fallacy of equivocation.  If we lump all "students who select a STEM major as freshmen" into one group, we fail to recognize an important point: wealthier students, from more affluent schools, with college-educated parents and access to better college guidance, have selected STEM majors after a fairly rigorous sorting process.  The advantaged students who probably should not be in STEM fields have weeded themselves out; for poorer kids with fewer advantages, the freshman courses serve the purpose that good counseling could have served.  But I don't see any control for that fact, a fact that would have been obvious to anyone who's done this job for a while.  (Note: I've not read his book, and it's possible that somewhere in a footnote this is addressed, but he made no reference to it in the presentation.)

Overall, this was one of the best conferences I’ve been to.  It’s great to talk about and hear about something other than the same issues we hash and re-hash on an annual basis.  I hope next year’s is equally good.

Explaining Test-optional with (almost) no statistics

I’ve been enjoying the holiday break, one of the nicest parts about working at a university.  It’s normally a time to sit back and take stock of the year while looking forward to getting back to work soon after the first of January.

As I checked my Twitter feed this morning, I noticed NACAC had posted a link to an opinion article in US News and World Report, written by Kathryn Juric of the College Board.  I know that the author of an article seldom writes the headline or creates the link to the article on the homepage, but this one grabbed me: Colleges Must Keep the SAT Requirement.

OK, I think, I'm uniquely capable of responding, for two reasons: First, I'm a member of the Midwest Regional Council of the College Board and help plan the Midwest Regional Forum, and I like and respect the people I come in contact with there.  Second, I work at a test-optional university.

The article sounds a bit defensive, at least to this English major who was fairly good at reading subtext amid context.  I can handle that, and I understand it.  As Upton Sinclair famously wrote,  “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”  If you feel your livelihood is under attack, you might take offense.  I would.  I am.

But the attempt to lump the whole test-optional approach into one collective movement that "scapegoats" all standardized exams is at best immature reasoning, and at worst sophistry, resting on the logical fallacies of begging the question and composition.  There certainly are people who are militantly anti-test.  And there are people who point out issues of unequal performance based on gender, income, and ethnicity, and who have concerns about access to college that hinges on such an exam.  Those people hardly account for the whole test-optional movement, although their points are good ones that should not be dismissed.

For me, the move to test-optional was really twofold.  First, there was and is the research conducted by people outside the testing industry.  That research is, for the most part, pretty conclusive and unanimous: standardized tests don't really tell us a lot we don't already know, at least not at the 85% of universities in this country brave enough to acknowledge they'll never run admissions the way Princeton does.

Sidebar: a little bit of statistics without formal statistics talk.  In our internal studies, standardized test scores uniquely explain very little of a student's performance at DePaul.  (This is important: other colleges may have different results, something the pro-testing people never seem to want to admit, which, I think, makes their argument much weaker.)  Standardized tests may appear to explain performance because they tend to co-vary with grades in college-prep classes, which are the most important predictor in every study I've ever seen, by a substantial margin.  (In other words, most students with high grades in a given school have higher test scores, and vice versa.  So test scores simply repeat and amplify the high school grade signals.)  What was most surprising to me was that this held across all schools, with the possible exception of the schools lowest on the socio-economic and college-bound scales.  We know poor kids and/or kids from families without college-educated parents go to college at far lower rates.  I suspect the most overlooked factor in this is the very set of things colleges think those students need to be successful in the first place.  As I said in my post "The Myth of Need-blind Admissions," here:

It's true that these institutions do a great job of funding the poor students they admit.  The problem appears to be that they don't admit many of them in the first place.  This is the myth of need-blind admissions: all these institutions (I think) claim to be need-blind, but when they make admissions decisions, they pay attention only to the income part of low-income, not the residual effects.  If you use the SAT or ACT; if you favor students who have lots of AP courses; if you effectively reward expensive test-prep programs; or even if you prize activities that can only be mastered if you have lots of time because you don't have to work, you're overlooking a lot of things that come with being poor, or even middle-class.  Need-blind admissions is a nice, noble-sounding term.  It's not so pretty in reality.
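Back to the statistics sidebar for a moment. Here's a toy, synthetic illustration of the "unique explanation" point; the data below is invented to mimic the pattern our internal studies describe (test scores mostly tracking high school grades), and it is emphatically not DePaul's data:

```python
# Synthetic data only: built so test scores mostly co-vary with HS GPA.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5_000

hs_gpa = rng.normal(3.2, 0.5, n)                 # grades in college-prep classes
test = 0.9 * hs_gpa + rng.normal(0, 0.3, n)      # test score largely tracks GPA
college_gpa = 0.8 * hs_gpa + 0.05 * test + rng.normal(0, 0.4, n)

X1 = hs_gpa.reshape(-1, 1)
r2_gpa = LinearRegression().fit(X1, college_gpa).score(X1, college_gpa)

X2 = np.column_stack([hs_gpa, test])
r2_both = LinearRegression().fit(X2, college_gpa).score(X2, college_gpa)

print(f"R^2, HS GPA alone:      {r2_gpa:.3f}")
print(f"R^2, GPA + test scores: {r2_both:.3f}")
print(f"Unique lift from tests: {r2_both - r2_gpa:.3f}")   # tiny
```

Once the model already knows the high school grades, adding the test score barely moves the R-squared. That is all "uniquely explain very little" means.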

The research and statistics part is important, of course, as we don’t do much in higher education without them.

But for many, the test-optional movement is based on a different approach and philosophy.  So, without statistics, a few observations:

  • There is no doubt that standardized tests measure some type of intelligence.  The ability to quickly choose the correct answer from the four given is a skill, and a fairly important one: separating the wheat from the chaff matters in logic, for instance, and even in mathematics.  (Frederick Kelly, the creator of the "bubble test," however, called this "lower-order thinking.")
  • And as I've written before, selective colleges really like this: when you have thousands of well-qualified applicants, you can select from those who have both proven academic success and that special skill that comes with doing well on standardized tests.  It adds a small additional measure of precision, they think, to an inherently imprecise decision.  It's unlikely any university in the Ivy League is going test-optional.  Despite their lofty reputations, they have too much riding on the SAT arms race.  It's good for them to cite standardized test scores that are off the charts.  If you want to look up your favorite, you can do so here, with 2011 IPEDS data.
  • However, college work is really not much like a standardized test.  And neither is life.  It's just not that often that someone comes to you with a problem, tells you the answer along with three wrong ones, and requires you to pick the correct one out of the bunch.  Usually, you find, the possible answers are numerous, no single answer is perfect, and often the problem can't even be put in the form of a proper short question.
  • Nor does a standardized test tell us whether you're going to be capable of sticking with a subject for a semester, adept at taking an assignment and spending hours researching it and writing a paper, or inclined to contribute to class discussions.  But you know what does?  Your record of doing just that for four years in high school.
  • As a parent, I'm concerned by the extent to which standardized testing has taken over much of what is done in schools today.  Make no mistake: people in charge of schools are held accountable for the outcomes on these tests, and the result is multiple-choice tests in almost every class, including English literature and history.  Taking a standardized or multiple-choice test is a skill that can be honed over time, and if performance on the test is the measure, kids are going to be tested this way.
  • I'm also concerned by the way standardized testing has come to be the focus of the junior year.  I attended a college program with my son, and three-quarters of it was about testing: where, when, how often, which test prep, etc.  I've known parents who made kids give up activities they love to prepare for college entrance exams; exams that, in the end, may not tell us much about anything important.  And I think what is lost in the race is time to develop thinking, writing, problem solving, the aimless exploration that leads to discovery, creativity, and other important skills teenagers should be developing.

Tests may help us distinguish between dogs and poodles.  We can posit, of course (and I'll make it concrete in a moment), that

  • All poodles are dogs (or) All good testers are smart
  • Not all dogs are poodles (or) Not all smart people are good testers
  • You can be a dog without being a poodle (or) You can be smart without being a good tester
  • And we must admit that non-dogs cannot be poodles (or) Let's admit that if you're not very smart, you won't score well on standardized tests.  That's not the point.
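To make those one-way implications executable, a toy version, with invented dog names standing in for test-takers:

```python
# A toy rendering of the poodle logic; all names are invented.
poodles = {"fifi", "bijou"}
dogs = poodles | {"lassie", "rex"}      # every poodle is a dog...
non_dogs = {"tom_the_cat"}

assert poodles <= dogs                  # all poodles are dogs
assert not (dogs <= poodles)            # not all dogs are poodles
assert "lassie" in dogs - poodles       # a dog that isn't a poodle
assert not (non_dogs & poodles)         # no non-dog can be a poodle
```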

Robert Sternberg, the Provost of Oklahoma State University, wrote a most eloquent piece about some of the conceptual problems with standardized testing.  I hope people at the College Board and other proponents of standardized testing read it and consider that maybe everything they've come to believe about the value of tests might be wrong–not for everyone, not for every college, not in every situation–but for a substantial percentage of our students.  These students are bright, capable, talented, motivated, eager to learn, accomplished as students... but maybe not especially proficient at picking out the right answer.  And they shouldn't be measured by a test created by someone who's never taught them.

These students and the colleges who want them to become productive, educated people should really pose no threat to the College Board.  If you want to lead educational reform, start by acting educated.

Do Notre Dame Football Graduation Rates Prove the Value of Non-cognitive Variables in College Admission?

A recent article in USA Today lauded the way in which Notre Dame football is Number 1 in graduation rates for its players.  And of course, they're now Number 1 in the AP Poll and BCS rankings for College Football, too, a rare accomplishment that seems to make the always-proud alumni base even more sanctimonious than usual.  (Note: this link will become obsolete as games are played.  Here's a screenshot of it as of November 28, 2012.)

But this is not about those people who allow cult-like pride and slavish devotion to a non-existent ideal to interfere with reality. And it’s not really about Notre Dame, either. It’s more about selective college admissions, graduation rates, and what “admissions standards” really mean.

It's widely acknowledged–even by people within an institution–that the academic profile of athletes, as we traditionally measure such things, is lower than that of the non-athletes in the freshman class.  It's true everywhere.  No news here.

But does it seem at all odd to you?  If a highly selective institution (and I include the likes of Northwestern, Stanford, Duke, and other places that combine big-time athletics with high levels of selectivity and the accompanying graduation rates) publicly opines (either overtly in its words or covertly in its actions) that only the best students as measured by SAT and GPA can succeed, how is it possible that so many students who are at least one–or maybe two–standard deviations below the mean manage to do so well?  And, you might ask, how can they manage to do so well while committing to what must be the equivalent of a 40-hour work week?  Conversely, if many of the low-income students who don't measure up on traditional measures promised to spend an extra 40 hours per week studying, could they graduate too?  (Many of the most selective places in the country don't admit poor students, largely, I believe, because poor students usually score lower on the SAT or ACT.)

It's true, of course, that we don't see final GPAs of the athletic students (I've never liked the term "student-athletes"), so maybe this is where the disparity comes into play.  But assuming that graduation is the real threshold, one of several possibilities might occur to more cynical readers:

  • Support services for athletes are extraordinary
  • Someone else is doing the work
  • Athletes take easy classes sanctioned by the university
  • The university is really not as rigorous as it claims, and anyone could graduate

But being the cheerful optimist that I am, something else has occurred to me:

What if the thing that really gets you through college and through life is not just intelligence in the way we traditionally measure it?  What if it has to do with things like leadership, drive, determination, motivation, goal setting, moral support, and dozens of other non-cognitive things we can't even describe?  What if that intangible "it" that admissions officers see, the one that makes them take a risk on a candidate, means a lot more than we give it credit for?  It's almost like Bill Sedlacek was right.

That's what I've come to believe.  Academic intelligence and cognitive ability are important, of course.  But if you believe Al McGuire's "The world is run by C students," or Woody Allen's "Eighty percent of life is just showing up," you begin to wonder whether any long-held belief about the way we do college admissions is meaningful.

I’ve been called an instigator. It’s also been said I love to stir the pot. Tell me what you think.

What if You Threw a Scandal, and No One Cared?

I'm not even going to link to the stories, as they're so abundant: over the past several years, many colleges have been caught (or sometimes admitted without getting caught) doing things to inflate the profile of the incoming class.  Usually it's the freshman class and SAT manipulation (Emory, Baylor, Claremont McKenna); sometimes it's class rank or academic accomplishments of freshmen (George Washington University and Iona); and occasionally it's been law schools and their first-year class (Villanova and the University of Illinois).  These things are pretty clearly unethical, and almost always frowned upon, unless you're Stephen Joel Trachtenberg, the former president of GWU, and you don't understand what all the fuss is about.  Because, apparently, reporting actual historical data is just like forecasting whether it will rain on Alanis Morissette's wedding day.  Neither of which, by the way, is ironic.

This doesn't even begin to take into account the generally accepted things colleges do to inflate scores: superscoring ACTs to create composite scores that don't exist; requiring all instances of the SAT a student has taken, but reporting only the highest score on each subsection; deferring wealthy but weaker students to spring admission, where no one ever looks at the freshman profile; or giving preferences in admission to applicants from schools that don't rank, because, hey, if a student doesn't even have a rank, how can I report it?  And does anyone even know what an applicant really is these days?  Some places count as an applicant anyone who clicks on a special "Priority" application link in an email; most of those who don't get admitted (thus lowering the admission rate) are in this group.
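For readers who haven't seen superscoring up close, a minimal sketch with made-up sittings shows how a composite the student never earned gets manufactured:

```python
# Two hypothetical ACT sittings and their real composites.
sittings = [
    {"english": 24, "math": 28, "reading": 25, "science": 23},  # composite 25
    {"english": 27, "math": 26, "reading": 24, "science": 26},  # composite 26
]

# Take the best score per section across all sittings...
best = {sec: max(s[sec] for s in sittings) for sec in sittings[0]}

# ...then average, rounding halves up the way ACT composites are rounded.
composite = int(sum(best.values()) / len(best) + 0.5)

print(best)       # {'english': 27, 'math': 28, 'reading': 25, 'science': 26}
print(composite)  # 27: a composite from no sitting the student ever had
```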

Even though we don't do any of those things, I'm not being judgmental.  I happen to work for a place that takes internal measures of academic quality very seriously, but doesn't worry too much about inputs.  In other words, I'm very lucky.  But if my president or trustees told me to do those things, and I wanted to keep paying my mortgage, I'd have to think long and hard.

US News and World Report, the oft-cited villain in the admissions arms race, just took some unprecedented action, removing GWU from its online rankings.  But what if, a year from now, none of this mattered?  What if students and parents still considered GWU and Emory and CMC and all those places for what they are, not for what a magazine says they are?  What if people cared more about what happens in the classroom than about what happened in the four years before a student was admitted?  What if we went back to 1977 (the year I graduated from high school), to a time when no one really knew or cared about all those numbers?  Or when no one cared what you wore for your senior portrait, either.

The proverbial genie is out of the bottle, of course, so none of this will happen.  But would the whole admissions profession be better if it did?  Would students behave differently?  Would parents behave at all?

What do you think?

Idea Block, Twitter, Traffic Lights, and Other Signals

I had a boss who once talked about "sending ideas out to the universe."  It's poppycock, of course, if for no other reason than the universe doesn't care.  But that doesn't mean it won't work sometimes, I suppose.  Yesterday, I tweeted this:

[Embedded tweet]

And this morning, the Twitter-verse unknowingly responded with this:

[Embedded tweet]

Which gave me–as only Twitter can–the ability to legally eavesdrop on three colleagues: John Lawlor, Deb Maue, and Chris Lydon.  I know exactly what the conversation means: Measuring the quality of the educational products of a college by measuring the freshman class is like measuring how good a basketball team is by evaluating the average height of its players.  It might work, on occasion, but that doesn’t mean it’s right.  However, I responded to all three, suggesting that in fact graduation rates were inputs, in a weird sort of way.

That probably would have been the end of it, but it got connected to something I frequently see on my walk between Union Station and my office, about a mile each way.  And it is especially common as it gets colder and people are eager to get indoors.

Here's what happens: you're standing at a corner with twenty other people waiting for the light to change, and someone notices no traffic coming, so they start to cross against the light.  Others follow, even though the sign clearly says "Do Not Walk."  People come from the other direction and, seeing the throngs crossing, assume the light says "Walk" and proceed to do so.  But by this time a car or truck or two has come barreling down the street, and I nearly witness a pedestrian being hit.  What happened?  The pedestrians have equated a large group of people crossing the street with a "Walk" sign.  Usually that signal is right; occasionally it's not, and trouble ensues.

Thus, a blog post. And the end of Idea Block. And the head knocking resumes.

If we attempt to measure the value of an institution by its outputs (graduation rates), we are really just confusing signals, like pedestrians on Jackson Boulevard in Chicago on a cold morning.  We've become so used to the mix-up of inputs and outputs that we forget to look at the real signals, because in a way, inputs and outputs are the same things.  Don't believe me?

Take a look at this screen shot of IPEDS Data visualized.  It shows test scores on the x-axis, and graduation rates on the y-axis. Note how they line up?

That alone would be enough to make the point, I think.  But play with the visualization by clicking here.  See if you can find any type of institution where the pattern doesn't hold: urban or rural; any region; any religious affiliation; public or private.  And notice two more things: the color of each dot (which represents a single institution) shows the percentage of freshmen with a Pell Grant, and the size of the dot shows the rejection rate: more selective institutions have bigger dots.  Then, just for fun, filter to institutions that don't rely on student tuition to manage the budget.  Just pull the top slider down to 50%.  See who's left.
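If you'd rather build your own version than play with mine, the whole picture takes about a dozen lines. A sketch, with a hypothetical file and invented column names standing in for the real IPEDS extract:

```python
# Hypothetical extract; the CSV and column names are stand-ins.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("ipeds_extract.csv")

plt.scatter(
    df["median_test_score"],         # x: test scores
    df["grad_rate_6yr"],             # y: six-year graduation rate
    c=df["pct_pell"],                # color: % of freshmen with a Pell Grant
    s=df["rejection_rate"] * 200,    # size: rejection rate, scaled to be visible
    alpha=0.6,
)
plt.colorbar(label="% of freshmen with a Pell Grant")
plt.xlabel("Median test score")
plt.ylabel("Six-year graduation rate")
plt.show()
```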

Then ask yourself: Is a high graduation rate a function of what goes on inside the institution? Or could it be a function of selectivity, test scores and family income (which are pretty much the same thing) and resources?

Would love to hear what you think.

Life, and EM: A Series of Trade-offs.

There are very few people who understand that Enrollment Management is, at some level, an exercise in managing trade-offs.  Even though the old Michelob Light commercial suggests that you can have it all, in fact, you can’t.  And in reality, if you could, you wouldn’t be working in higher education. (Those of you who know me also know I like good German and American beer, but I’ll keep my comments about Michelob Light to myself for now.)

So, helping people understand trade-offs is a critical component of working in Enrollment Management:  If you want to push up or down on quality, quantity, diversity, or net revenue, the market is going to be more than happy to push back on you, often harder and more dramatically.  But very few people outside of Enrollment Management understand this; those who do grasp the concept of trade-offs probably don’t have the time or the inclination to dive into the details to see the nuances.

My job is essentially trying to hit a sweet spot: generating enough net revenue to pay the professors, heat the buildings, buy computers, and keep the library stocked with academic journals and important books; keeping quality as high as we can in light of the need to pay the bills; and never giving up on a critical component of our mission, educating those whose economic circumstances might not normally point them toward a private university education, because offering low-income students anything less than a top-quality education only adds insult to injury.  Keeping these things in balance is vital to accomplishing what we set out to do.  And we re-invent the way we do it every year, because the number of students is fixed, and competition is pretty fierce.  On top of it all, every university has a different recipe for success.

Historically, we've managed this delicate and ever-shifting balance by using SPSS and Excel to examine the relationships between and among the variables we are interested in; typically, we spend several days a year doing nothing else, and it often involves PowerPoint decks of literally hundreds of slides.  When your attention span is as short as mine, I guarantee you lose something important while daydreaming.

So, for internal use and to illuminate the balancing act, this year I took four years of data and rolled it into my favorite visualization tool, Tableau Software.  In the interest of full disclosure, I've served on their Customer Advocacy Board because I'm a fan, not because of the free T-shirts or the beer my former account manager has been promising me for five years now!  I have no financial interest in the company.

The data is confidential, of course, so I can only show you a screen shot, sanitized by removing the values and the axis labels.  But look at this: with just a click or two, you can take a shallow or a deep dive into the give-and-take between and among a handful of variables.  Which students have the highest GPA?  How much do test scores vary by financial need?  Are men or women better students (as if we don't already know the answer to that one)?  What percentage of our first-generation college students are from Illinois?  Who helps us accomplish our mission?  Who helps us pay the bills?  Which college has the most attractive students?  OK, perhaps that last one is not in the data set.  But you get the idea.
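The same kind of slicing works outside Tableau, too. A toy pandas version, with an invented frame and columns, just to show the shape of the questions:

```python
# Invented data; each row is a (hypothetical) enrolled student.
import pandas as pd

students = pd.DataFrame({
    "need_quintile": [1, 1, 2, 3, 3, 4, 5, 5],
    "test_score":    [21, 24, 25, 26, 27, 28, 30, 32],
    "first_gen":     [True, True, False, True, False, False, False, False],
    "gpa":           [3.1, 3.4, 3.2, 3.5, 3.6, 3.3, 3.8, 3.7],
})

# How much do test scores vary by financial need?
print(students.groupby("need_quintile")["test_score"].mean())

# What share of each need quintile is first-generation?
print(students.groupby("need_quintile")["first_gen"].mean())
```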

If you work in EM, you owe it to yourself to explain to your campus community the ins and outs of your profession; if you work in higher ed but not in EM, you owe it to yourself to learn how these important variables relate to each other.  How you do that is up to you, but I strongly recommend against a 247-slide presentation.  You can do better.