Demographics: Numbers Behind the Numbers Matter

We hear it all the time: Demographics are changing.  And of course, they are.  But turmoil in higher education goes way beyond just the numbers of people who might be graduating from high school.

I’ve been doing a presentation on our campus to talk to people about how things are changing: Important economic and societal shifts that we in higher education cannot control.  These changes over time have buoyed, and now threaten to weaken or sink, many institutions of higher education in the US.

Let’s look at a few of them.  If you want these slides, they’re available for download via a link at the bottom. If you want to see the whole presentation, you’ll have to come work at DePaul.  Most of the charts come from data visualizations on my other blog, where you can see the sources of the data and other notes.

First, High School Graduates:

The light blue bars show WICHE data with the number of high school graduates in the US over time, from 1996 to 2027.  Years after 2012 are projected.  The lines show the composition of that number, broken out by ethnicity: purple for Caucasian, red for Hispanic, orange for African-American, green for Asian, and blue for Native American. As you can see, numbers are at a low point, which is bad.  They get better, which is good.  But the composition changes, which, from a purely statistical standpoint, is bad.


[Chart: US high school graduates over time, by ethnicity]


Why does the composition matter? Because race and ethnicity still matter in the US, for a lot of reasons. But statistically, different ethnic groups go to college at very different rates.

[Chart: college-going rates by ethnicity]

In case you were not aware, income also matters in the US.  To no one’s surprise, students from wealthier families also attend college at greater-than-average rates.

[Chart: college-going rates by family income]

Do we have data on income? Yes, of course, if you look at PUMS data from the American Community Survey (a 1% sample of the population, conducted annually).  This is children by age group for families in Chicago with incomes of $150,000 or more.  Note that these families have fewer younger children (gold bars), and that parental educational attainment may be the single strongest predictor of a student’s propensity to go to and graduate from college:

[Chart: children by age group, Chicago families with incomes of $150,000 or more]


Now look at families in Chicago who will need aid–a lot of aid–to graduate: Based on parental attainment, income, and ethnicity, these students are far less likely to pursue any post-secondary education. And their populations are generally getting bigger.  This is driving the “rebound” in high school graduates in coming years.  Of course, this is just Chicago, but my visualization has all metro areas in the US (it’s too large to upload to the Tableau Public Server), and most areas look something like this, with local variance, of course.

[Chart: children by age group, lower-income Chicago families]

So, if you thought things were tough now, they’re not going to get any better in the foreseeable future, especially if universities continue to do things the same way they always have.

These slides are available for download here.


Sorry, everyone: Students are not customers

Most dumb ideas are hatched by men, so I’m going to assume that whoever it was that first uttered “Students are our customers” was probably a guy.  I’d like to kick him in the butt, although since I’m generally a pacifist, I probably wouldn’t, even given the chance.  Still…

The idea has been floating around as long as I can remember, probably from the very beginning of my career.  When the dumb guy first spoke the words that shall not be named, he probably meant something like, “We owe the people who pay for their education some respect; let’s not serve crappy food in the cafeteria, or make them wait needlessly in line to register (this was a long time ago, remember), and let’s make sure we keep class content up-to-date with current research and thinking.  They deserve that from us.” And, of course, it’s hard to argue with that.  If that’s what you meant, dumb guy, I apologize for calling you a knucklehead all these years.  But I do wish you would have used a different word.

The debate, cicada-like, comes back to life every so often, usually revived by someone who thinks they’ve discovered a new concept; it gets kicked around, only to die off quickly and leave a bunch of bug carcasses around your back yard.  Here are the remnants of the most recent iteration.

The most obvious problem, of course, is that students don’t really know or understand what they’re “buying” in the first place.  They come to us precisely because they’re generally ignorant of what they need.  If you’re feeling bad and you go to the doctor and say, “I have a virus; give me an antibiotic,” the doctor has an obligation to first try to figure out what’s wrong, and then, should your self-diagnosis be correct, inform you that antibiotics don’t work on a virus.  In a similar way, we say, “If you want to be educated, this is what we say you need to do.”  And of course, no university and no doctor is ever completely right about that 100% of the time. But neither treats the people who come to them as customers.

If students were customers, and they said, “We want a keg of beer on the floor at all times,” we’d oblige.  If they wanted to get an “A” by paying extra, we’d offer that for sale.  We do neither.

Mostly, though, my criticism stems from one undeniable fact: The transactional model of business/customer just does not hold up under closer scrutiny. Consider these three scenarios:

  • You walk onto the lot of the Toyota dealer, cash in hand, and point to the model you want.  The Toyota dealer says, “Hey, not so fast.  I need to make sure you’re capable of driving first, and that you’d benefit from owning this car.”
  • You do actually buy the car, but after two months, the dealer calls you to take it back, because you’re not keeping it up to their standards.
  • Or, ten years after you buy the Toyota, someone from the “Office of Proud Toyota Owners” calls you, asks you to remember the good times you had driving your Toyota, and asks you to write a check so that others can similarly benefit from owning their own Toyota.

Is there a better model, or maybe just a better word?  I don’t think we really need a new word; we have one: It’s student.  But if you somehow feel you can’t inspire people by saying, “We should treat our students like students,” how about member?

If we treat students like members, we retain the right to refuse their money if we don’t think they can benefit from what we offer.  We can insist they live up to certain requirements to keep their membership current and in good standing. We can strive to make their experiences in the cafeteria, or the registration (electronic) lines, or the classroom as good as they can be.  And, years after they’ve “purchased” their final goods and services from us, we can ask them to renew their membership at an affiliate level.

And I’ll never have to disparage the dumb guy with my silly rants again.


The College Board and The Catholic Church

I had another title all picked out for this post: It was going to be “An Interesting Week for the College Board” or something like that.  But after I typed the title, I realized that the College Board and the Catholic Church share a lot in common, at least from where I sit. (And this is a good time to reiterate that the opinions on this blog are mine, and may be influenced by my work, but are in no way an official or unofficial representation of the views of my employer.)

Both the College Board and the Catholic Church:

  • Claim me as a member, even though I’ve never really bought into most of the dogma
  • Are fairly inflexible–some might say intransigent–about what they believe to be the one right way to do things
  • Have new leaders who have brought considerable change
  • Get criticized for being big, cash-laden organizations that spend lavishly despite their purported concern for the poor
  • Have supporters who are fervent, dedicated believers; and those who despise the very thing they stand for
  • Invoke wrath when they presumptuously try to impose their beliefs on others
  • Have mostly good people in their employ who are, I believe, trying to do good, despite the opinion some people hold of the institution as a whole
  • Seem to be having some trouble making both their core product and their core message resonate with people these days

But this week, The College Board got most of the attention, at least if you work in education.  President David Coleman announced sweeping changes in the way the SAT is designed and administered.  You can read about them all over the web, but start here if you haven’t heard.

Reaction was swift, and came from all corners: Some thought the changes were a response to the ever-growing dominance of the ACT; there were the inevitable articles about “dumbing down” the test or American education in general; The New York Times appeared to trade private access for a flattering story; some expressed great faith that the new SAT will be all it claims to be and praised the changes; and many simply said, “meh.”  The most strongly worded response came from Leon Botstein, the President of Bard College, who wrote in Time Magazine, calling the SAT “Part Hoax; Part Fraud.”

My own feelings are probably well known to anyone who reads this blog, and are admittedly a combination of hard research and personal observation:

  • Any test created by someone who never taught the material to the students tested is inherently lacking
  • SAT and ACT scores do explain freshman performance, but since the tests and high school GPA covary so strongly, they simply duplicate the effect of GPA without doing it better.  As an incremental measure over and above high school GPA, the benefit is negligible at best.
  • GPA–even compressed GPAs from 35,000 different high schools–explains more about freshman performance than the SAT or ACT (no one from either organization disputes this, by the way).
  • Both tests do, in fact, measure a certain type of intelligence: Picking the “right” answer from four given. And the fact that the tests might get it right 40% of the time seems good enough for many. However, this is not necessarily the way students “do” college.  In life, as well as in many classes, sometimes you aren’t even given the question; and when you are, oftentimes the answer can’t be captured in a few words.
  • The tests have a very high “false negative” and a very low “false positive” for whatever it is (and we can’t always even define what it is) they purport to measure.
  • Insecure people who have high standardized test scores are often the ones touting the value of standardized tests
  • Super-selective institutions like the tests, even though they know the tests don’t predict much of anything academic, because: a) high numbers equate with “smart” and with “high quality”; b) they don’t need to, nor do they want to, take any risk on students; and c) the one thing the tests do measure really well–wealth–is important to many of those colleges.  It also gives them a convenient excuse to enroll fewer poor students.

But as I sat at my desk watching the webinar preview for representatives of college and university admissions offices, several things struck me as interesting:

  • The implicit acknowledgement that test-prep works, and that it favors the wealthy
  • The tacit admission that the SAT is not the great predictor of value colleges have been led to believe it is
  • The opening remarks about transparency, coupled with the immediate request that I not Tweet about the presentation details, seemed to be at odds with one another.  (It’s normal to tell people information will be embargoed before they accept an invitation to hear or read it.)

And especially, this tidbit, from a slide in the presentation, which I’m saying is Fair Use, in case any lawyers are reading this:

[Slide from the College Board presentation]

This may or may not strike you as new, but at the very least it should raise some questions: Who invited the College Board into 6th Grade?  And if they begin developing curricula for 6th grade, how will we know if it’s effective? A test, you say? Developed by whom?  Don’t hurt your brain trying to figure out this puzzle; the answer is obvious.

Thus, there is one more way the College Board and the Catholic Church seem to be alike: Even on those times you’re tempted to believe they’re doing something for the right reason, you always have to wonder what motive lies underneath it all.

Meet The New Boss, Same as the Old Boss

Bloody Monday: Not just for the NFL

It started sort of innocently enough: A post on a Facebook page for college admissions officers.  It was one of those questions that high school or independent counselors post asking/complaining (often, with just cause) about some college practice.

It was there, buried in a longer response: “I wish I understood why yield was still a concern now that US News has downplayed that rubric in their rankings.”  I pointed out–more politely than usual, I might add–that yield (the percentage of admitted students who enroll) has a huge effect on enrollment.  Suppose you admit 12,000 students, and you project a 20% yield rate to enroll 2,400.  If yield goes up to 21%, you have 120 extra students in your class, or 5% too many.  If it goes down to 19%, you come in 120 students short, at 2,280.  In other words, if one out of every 100 admitted students behaves differently than you’d expect, you can be in big–or really big–trouble.
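The arithmetic is simple enough to sketch in a few lines of code, using the same hypothetical numbers as the example above:

```python
# Yield sensitivity: a one-point swing in the yield rate moves the
# class size by more than a hundred students in this scenario.
def projected_class(admits: int, yield_rate: float) -> int:
    """Enrolled students, given admits and an expected yield rate."""
    return round(admits * yield_rate)

ADMITS = 12_000
TARGET = projected_class(ADMITS, 0.20)  # the 2,400-student target

for rate in (0.19, 0.20, 0.21):
    enrolled = projected_class(ADMITS, rate)
    print(f"yield {rate:.0%}: {enrolled:,} enrolled "
          f"({enrolled - TARGET:+,} vs. target)")
```

That plus-or-minus 120 is why yield still keeps enrollment managers up at night, rankings rubrics aside.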

And that was that.

Until today. I heard about another colleague who has lost an admissions or enrollment job, for reasons that are all too obvious.  That’s a half-dozen this year already, and we’re still a long way from spring; these are just people I know. And from what I hear, it’s just beginning.  This may be the bloodiest year in a long time: Maybe the bloodiest I’ve ever seen.

Are admissions people getting less and less competent? I suppose you might think so; maybe you’d be right. But maybe this will change your mind.  It’s a summary of some data presented on my other blog, Higher Ed Data Stories.

And back at the ranch, the expectation is pretty straightforward: More. Better. With less need for aid.

I’ve been in this business for 30 years, and I can recall few days when “The Number” didn’t cross my mind.  Once this year’s class is in, you start worrying about the next one. Even so, we know it’s part of the job, and the rewards of what we do, as evidenced by the lives of the students we affect, are high.  But over time, we’ve seen “The Number” turn into “The Numbers,” with the complexity of the expectations increasing even as reality pulls strongly in the wrong direction on all counts.

It’s true of course, that someone on campus knows about faculty productivity; someone knows about how much financial aid we spent last year; someone knows how much money the Advancement Office has raised; and lots of people know how many games the basketball team has won.

But everyone knows the freshman class size (and the average test scores.) Or so it seems.

In some sense, my colleagues are like NFL Coaches: Success, a finite commodity based on the nature of the game, is parceled out by the whims of the gods, and your hard work and good fortune bless you with it on occasion.  But the organizational appetite never goes away, and when it’s not fed sufficiently, good people are shown the door, and often replaced with someone who–in many ways–is just like the person leaving.  Only different.  The NFL has its Bloody Monday, the day after the season ends and coaches get fired.  In enrollment, we have bloody springs.

Having done this for so long, I’m grateful that I’ve been able to stay in one place as long as I wanted, but I’m also surprised when the pressures and the issues and the expectations we deal with are not obvious to those who don’t do it every day.  Maybe the same could be said of most professions. But for as much fun as this profession is, and for all the rewards it brings, I do wish we could bring a little more sanity to the continual upward spiral of expectations.

Don’t Write About Teachers

It was a Sunday, a day like any other Sunday.  I went to look at the NCES Digest of Education Statistics to see if any more tables from the 2013 version had been released.  To my delight, I found some interesting stuff; but most of the NCES Tables are designed to be printed as reports, and are in no shape to be pulled into the software I typically use, Tableau.

But this one on teacher salaries was in pretty good shape, even though I almost always focus on higher education data.  A couple of clicks, and I was ready to visualize.  I did so and put it up on my other blog, Higher Ed Data Stories, here. One of the meta-reasons for doing so is to show how much more understanding of an issue you can impart with a picture as opposed to a table of data. I hope you agree.

I sent it off to some groups, and posted it to the NACAC e-list, an email group of college admissions professionals and independent and high school counselors.  It’s an open list, and Valerie Strauss from the Washington Post asked if she could share it.  It’s a blog and it’s public, so I happily agreed. It was up that afternoon, and you can read it here.

In addition to the hundreds of comments this has drawn on the WaPo site (which could be a post in themselves), I’ve received lots of emails and posts about the visualization.  They fall into several groups:

  • I’m trying to hurt teachers by showing how high salaries are
  • I’m trying to help teachers by showing how low salaries are
  • The data can’t be trusted because it’s from the Feds
  • The data doesn’t account for costs of living
  • The data doesn’t account for average service
  • The data isn’t split by union/non-union states
  • The data can’t be right because someone’s cousin makes way less than this
  • The data can’t be right because someone’s cousin makes way more than this
  • I shouldn’t have used red-green scales (and this person was right; I should know better).

Lessons learned, but good to repeat:

  • You can only viz the data you have
  • The limits of means as a measure of central tendency are not widely understood
  • Everyone’s an expert
  • I’m an idiot for stepping into this without understanding what a political landmine teacher pay is.

Lessons learned, internalized, and acted upon.  Stick to higher education.

And for those of you still reading, I had no political agenda at all; I simply thought the data was interesting, and that it would make a good visualization.

A Look at Test Optional Results: Year 1

This post is longer than most; if you know the history and the background and just want the results, skip to the results near the end.

~Some History~

In February 2011, DePaul announced that it would become the largest private, not-for-profit university in the nation to adopt a test-optional policy. There was ample precedent for this: Many other colleges had offered test-optional policies for a long time, and the results had been positive at all of them, as far as we could tell. In fact, Bates had published a 20-year analysis that effectively demonstrated the tests did not help much in predicting academic success for their students.

There were two motives behind our move to test optional: The first was the statistical research we did that suggested standardized admissions tests widely used today explained almost no variance in freshman performance (a combination of GPA and credits earned), once you eliminated the effect of co-variance with high school performance. (In other words, test scores and grades tend to move in the same direction, so when you’re looking at a student with high test scores, you’re usually looking at a student with high grades. Usually. And grades explain freshman performance better, albeit with still a lot of need for other factors to help make sense of it all.)
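That covariance point is easy to illustrate with synthetic numbers (a toy sketch, not our actual model or DePaul data): when test scores largely track GPA, adding them to a model that already includes GPA barely raises the variance explained.

```python
import numpy as np

# Synthetic illustration of incremental validity: test scores that
# covary strongly with GPA add little explained variance beyond GPA.
rng = np.random.default_rng(0)
n = 5_000
gpa = rng.normal(3.0, 0.5, n)
# Test scores mostly echo GPA, plus noise (the covariance in question).
test = 0.9 * (gpa - 3.0) / 0.5 + rng.normal(0, 0.45, n)
# "Freshman performance" driven mainly by GPA in this toy world.
outcome = 0.6 * gpa + 0.1 * test + rng.normal(0, 0.5, n)

def r_squared(X, y):
    """R-squared from an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_gpa = r_squared(gpa.reshape(-1, 1), outcome)
r2_both = r_squared(np.column_stack([gpa, test]), outcome)
print(f"R^2, GPA alone:  {r2_gpa:.3f}")
print(f"R^2, GPA + test: {r2_both:.3f} (gain: {r2_both - r2_gpa:.3f})")
```

The gain from adding the test is tiny, which is the shape of the result our own research suggested.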

Second, we knew that students from certain groups–women, low-income students, and students of color, especially–scored lower on standardized tests, and that when used alone, scores can under-predict first-year performance. Our own anecdotal evidence and our discussions with high school counselors told us that students often ruled themselves out from applying to certain colleges based solely on a test score.  And research at the University of Chicago on Chicago Public Schools (CPS) students suggested the same thing; it also pointed out that CPS students who took a strong high school program graduated from DePaul at the same rate as other students, despite test score profiles that suggested they were “at risk.”

So, we took the plunge.


Robert Sternberg, who was then the Provost at Oklahoma State University, offered several compelling reasons why we as a nation are so wedded to test scores. He starts his essay with, “Many educators believe that standardized tests, such as the SAT and the ACT, do not fully measure the spectrum of skills that are relevant for college and life success,” and goes on to outline factors such as the illusion of precision (a number sounds precise, so it must be); familiarity (lots of smart people in academia are good testers and that’s how they got to be where they are, so they’re not inclined to challenge one thing that confirms their intelligence); and the fact that tests are free to the college.

I would offer another: Standardized tests do, in fact, measure a certain type of intelligence, and super-selective institutions have the luxury of selecting on both academic performance and this more limited skill.  All things being equal, if you can require high test scores for admission, why wouldn’t you?

But only a handful of the nation’s 4,000 institutions have that market position and can command that their students present very high scores.  I often wonder what type of research has been done at the other 3,900: Perhaps there are many colleges and universities that have shown a strong value to standardized admissions tests. However, I suspect it’s just as likely that tests–and average test scores–serve as benchmarks, or a type of shorthand, for an industry that has been unable to measure what it does or how it affects students.

In some sense, the admissions office really does not know, and can’t define, what it’s looking for in candidates for admission precisely because we’ve never been able to predict with much precision where students will end up.  It could be intelligence, if we could define that.  Or maybe insight.  Or wisdom, or drive, or motivation.  More likely, some combination of them that recombines into the elusive “it.”  Oh, we know a successful college student when we see one, but it’s harder than you think to pick those who will succeed ahead of time based on their high school records and tests: Apparent shoo-ins flunk out despite sterling records; marginal admits make the dean’s list; and middle-of-the-road students can go either way.

We still believe there is that “it:” That one thing that will tell us all we need to know. If only we could measure–or even define–it.

We get lazy. If enough students with high scores have “it,” our confirmation bias goes into high gear, despite evidence to the contrary.  We suppose it’s a kind of intelligence, and we embrace logical fallacies as we celebrate our ersatz discovery.  It’s what I call the Poodle Fallacy: If you have a poodle, you know you have a dog; but not having a poodle doesn’t mean you don’t have a dog.  We believe that high testers have “it,” and we forget that low testers might have it too.  Frequently, of course, they do.

Never mind that the inventor of the multiple-choice test, Frederick Kelly, called them tests to measure lower-order thinking skills.  Never mind that in real life–in business, medicine, law, education, engineering–the right answer is never presented for you to choose.  And never mind that it’s often difficult to ascertain the right question to ask, let alone enjoy the luxury of a 25% chance of guessing the right answer (or better, after eliminating obviously wrong options).  If school is not like life, it may be doubly true that tests are not like life either.

University of Maryland professor William Sedlacek, who has done a considerable amount of research on the role of non-cognitive variables in college success, recognizes that tests seek to measure cognitive and verbal ability (both important, of course), but that doing well in college also depends on students’ adjustments, motivations, and perceptions of themselves.  And Sternberg also touts skills related to creativity, wisdom, and practicality–not analytic and memory skills measured by tests–as necessary for leadership and citizenship.

Despite this, we knew a lot about how people were wedded to the idea and the practice of test scores. Having two children in high school in a test-obsessed school district, I’ve seen it first hand: Even English literature tests are multiple choice, and parents receive notices about how we need to encourage our children to do well on tests because a lot–an awful lot, like taxpayer satisfaction and state and federal funding–might be riding on them.

These tests, created by someone who’s never met our children, never taught them a class, and who can’t be sure they’ve even covered the material on those tests, carry a lot of weight.  And lots of people are heavily invested in them, some for reasons we don’t know and can’t figure out.


The headline still stung: DePaul dumbs down by dropping exams. A writer who had never spoken to me or anyone at DePaul, who never gathered any feedback, and who had apparently only read an article in the paper she was writing for (an article done as filler on a deadline), took some uninformed and irresponsible swings at us.  As you can imagine, it sent ripple effects through our offices, and sent me on a sort of tour of key constituencies: High School Counselors, Alumni, the President’s Cabinet, the Deans, the Associate Deans, College Advisory Groups, our own Student Government, and even our own division.

I spent a lot of time that spring explaining that we didn’t adopt a test-optional policy to a) get more selective, b) raise median test scores we report, c) garner publicity, d) increase diversity or e) ruin the university.  And I demonstrated why none of those was plausible, anyway.  People inside and outside DePaul were very receptive, and supportive, of our initiative.

My one regret is that we didn’t anticipate or prepare for the backlash, especially the hardest type to respond to: The opinions of the uninformed. To anyone who is thinking of doing this, I would only advise that you get ready for a lot of weak opinion masquerading as knowledge.

I’d be remiss if I didn’t mention two important things here: One is that I am not opposed to standardized tests, but rather to the weight they carry in many important discussions and analyses of our educational systems.  For the very many students who don’t test as well as their native intelligence suggests they might, the tests can be the thing that kills dreams, even when those students have worked hard, taken everything their school offers, and excelled. And even, many times, when they have “it.”  (And, I have to admit, on occasion a test also serves as a “ticket out” for some kids.)

The second is that I know many good people at agencies that conduct standardized testing.  Unlike some, I don’t think they’re evil or wrong-headed or driven by impure motives.  I believe most of them are working hard, and trying to do a really hard job: Measuring the capacity to handle college work, and to thrive in it.  It ain’t easy; I just find it hard to believe that we can sum up anything in a single number.  To a person, everyone at the agencies I’ve talked to has understood why we did this–why DePaul’s mission makes us a good candidate for it–and they’ve been nothing but professional and collegial.   I continued to serve on the College Board Regional Council and DePaul staff have been asked to speak at the ACT Enrollment Planners Conference.


The results of our first class, after one year at DePaul, are in.  And while the students who completed their freshman year have a long way to go before we pronounce test-optional an unqualified success, the results are encouraging.  As a reminder, we collect the scores for every student post-admission, as part of the research studies we’ll be conducting, but we didn’t know those scores at the point of admission.

And now that we’ve presented the results to our Faculty Council, I can share them more widely.

After one year, the entering class of 2012 at DePaul shows the following:

  • Freshman-to-sophomore retention was virtually identical, at 84% for Test-optional and 85% for testers.  
  • GPA for testers was .07 of a grade point higher (not statistically significant) despite a median ACT score that was 5.5 points lower for test-optional students.  
  • Because we believed that income has a big effect on academic performance, we split the class into Pell Grant recipients and those who did not receive Pell.  Not surprisingly, Pell status means a lot more than people think; Pell testers and non-testers were identical to each other, as were non-Pell testers and non-testers. The resultant effects of poverty are meaningful.
  • In two of our colleges, test-optional students had higher GPAs than testers.
  • None of the test-optional students started the second year on academic probation, compared to 1.7% of testers.
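The shape of that Pell-split comparison can be sketched as follows–with entirely made-up, synthetic data, not our actual cohort–just to show the four-way grouping:

```python
import numpy as np
import pandas as pd

# Toy sketch of a subgroup comparison: mean first-year GPA for the
# four Pell-status x test-submission groups.  All numbers are synthetic.
rng = np.random.default_rng(1)
n = 2_000
df = pd.DataFrame({
    "pell": rng.choice([True, False], size=n, p=[0.35, 0.65]),
    "submitted_test": rng.choice([True, False], size=n, p=[0.9, 0.1]),
})
# In this synthetic world, GPA depends on Pell status (a proxy for
# income), not on whether the student submitted a test score.
df["gpa"] = np.clip(rng.normal(2.9, 0.5, n) - 0.2 * df["pell"], 0.0, 4.0)

summary = df.groupby(["pell", "submitted_test"])["gpa"].agg(["mean", "count"])
print(summary.round(2))
```

When the data look like this, testers and non-testers within each Pell group come out nearly identical, while the Pell/non-Pell gap persists–which is the pattern we observed.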

We noticed a few things we’ll research further: The first-year GPA discrepancy was higher in the College of Science and Health than in any other college, at .25 of a point. Testers earned slightly more credits than test-optional students, but again this difference disappeared when we split by Pell Grant status.

We have a long way to go to put any research questions to rest; a thorough analysis is an integral part of our agreement with Faculty Council at DePaul as we move through the four-year pilot program. But for now, we’re moving ahead with our Pilot Program, buoyed by the results so far.

Test-optional applications dropped in our second year, much to our chagrin.  Colleagues at other institutions predicted this would happen, as students learn that “test-optional” does not mean “ability optional.”  And while we have officially been agnostic about whether students apply with or without tests, we do hope that well prepared students from rigorous high school programs will continue to consider DePaul, regardless of their standardized test scores.

What do you think?


P.S. A special thanks to my DePaul colleague Carla Cortes for helping with editing and checking my facts.