Don’t Write About Teachers

It was a Sunday, a day like any other Sunday.  I went to look at the NCES Digest of Education Statistics to see if any more tables from the 2013 version had been released.  To my delight, I found some interesting stuff; but most of the NCES Tables are designed to be printed as reports, and are in no shape to be pulled into the software I typically use, Tableau.

But this one on teacher salaries was in pretty good shape, even though I almost always focus on higher education data.  A couple of clicks, and I was ready to visualize.  I did so and put it up on my other blog, Higher Education Data Stories, here. One of the meta-reasons for doing so was to show how much more understanding of an issue you can impart with a picture as opposed to a table of data. I hope you agree.

I sent it off to some groups, and posted it to the NACAC e-list, an email group of college admissions professionals and independent and high school counselors.  It’s an open list, and Valerie Strauss from the Washington Post asked if she could share it.  It’s a blog and it’s public, so I happily agreed. It was up that afternoon, and you can read it here.

In addition to the hundreds of comments this has drawn on the WaPo site (which could be a post in themselves), I’ve received lots of emails and posts about the visualization.  They fall into several groups:

  • I’m trying to hurt teachers by showing how high salaries are
  • I’m trying to help teachers by showing how low salaries are
  • The data can’t be trusted because it’s from the Feds
  • The data doesn’t account for costs of living
  • The data doesn’t account for average service
  • The data isn’t split by union/non-union states
  • The data can’t be right because someone’s cousin makes way less than this
  • The data can’t be right because someone’s cousin makes way more than this
  • I shouldn’t have used red-green scales (and this person was right; I should know better).

Lessons learned, but good to repeat:

  • You can only viz the data you have
  • The limits of means as a measure of central tendency are not widely understood
  • Everyone’s an expert
  • I’m an idiot for stepping into this without understanding what a political landmine teacher pay is.

Lessons learned, internalized, and acted upon.  Stick to higher education.
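The point about means deserves a concrete illustration, because it trips up readers of any salary table: a single high earner drags the mean upward while barely moving the median. A minimal sketch, using made-up salary figures rather than anything from the NCES tables:

```python
import statistics

# Made-up salaries (not NCES data): one high earner skews the mean
salaries = [42_000, 45_000, 47_000, 48_000, 50_000, 52_000, 120_000]

mean = statistics.mean(salaries)      # pulled upward by the outlier
median = statistics.median(salaries)  # resistant to it

print(f"mean:   ${mean:,.0f}")    # mean:   $57,714
print(f"median: ${median:,.0f}")  # median: $48,000
```

On real salary data, reporting the median (or both) alongside the mean usually gives a fairer picture of a skewed distribution.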

And for those of you still reading, I had no political agenda at all; I simply thought the data was interesting, and that it would make a good visualization.

A Look at Test Optional Results: Year 1

This post is longer than most; if you know the history and the background and just want the results, skip ahead to the results section.

Some History

In February 2011, DePaul announced that it would become the largest private, not-for-profit university in the nation to adopt a test-optional policy. There was ample precedent for this: Many other colleges had offered test-optional policies for a long time, and the results had been positive at all of them, as far as we could tell. In fact, Bates had published a 20-year analysis that effectively demonstrated the tests did not help much in predicting academic success for their students.

There were two motives behind our move to test optional: The first was the statistical research we did that suggested standardized admissions tests widely used today explained almost no variance in freshman performance (a combination of GPA and credits earned), once you eliminated the effect of co-variance with high school performance. (In other words, test scores and grades tend to move in the same direction, so when you’re looking at a student with high test scores, you’re usually looking at a student with high grades. Usually. And grades explain freshman performance better, albeit with still a lot of need for other factors to help make sense of it all.)

Second, we knew that students from certain groups, especially women, low-income students, and students of color, scored lower on standardized tests, and that when used alone, scores can under-predict first-year performance. Our own anecdotal evidence and our discussions with high school counselors told us that students often ruled themselves out from applying to certain colleges based solely on a test score.  And research at the University of Chicago on Chicago Public Schools (CPS) students suggested the same thing; it also pointed out that CPS students who took a strong high school program graduated from DePaul at the same rate as other students, despite test score profiles that suggested they were “at risk.”

So, we took the plunge.


Robert Sternberg, who was then the Provost at Oklahoma State University, offered several compelling reasons why we as a nation are so wedded to test scores. He starts his essay with, “Many educators believe that standardized tests, such as the SAT and the ACT, do not fully measure the spectrum of skills that are relevant for college and life success,” and goes on to outline factors such as the illusion of precision (a number sounds precise, so it must be); familiarity (lots of smart people in academia are good testers and that’s how they got to be where they are, so they’re not wont to challenge one thing that confirms their intelligence); and the fact that tests are free to the college.

I would offer another: That standardized tests do, in fact, measure a certain type of intelligence, and super-selective institutions have the luxury of selecting on both academic performance and this more limited skill.  All things being equal, if you can require high test scores for admission, why wouldn’t you?

But only a handful of the 4,000 institutions have that market position and can demand that their students present very high scores.  I often wonder what type of research has been done at the other 3,900: Perhaps there are many colleges and universities that have shown a strong value to standardized admissions tests. However, I suspect it’s just as likely that tests–and average test scores–serve as benchmarks, or a type of shorthand, for an industry that has been unable to measure what it does or how it affects students.

In some sense, the admissions office really does not know, and can’t define, what it’s looking for in candidates for admission precisely because we’ve never been able to predict with much precision where students will end up.  It could be intelligence, if we could define that.  Or maybe insight.  Or wisdom, or drive, or motivation.  More likely, some combination of them that recombines into the elusive “it.”  Oh, we know a successful college student when we see one, but it’s harder than you think to pick those who will succeed ahead of time based on their high school records and tests: Apparent shoo-ins flunk out despite sterling records; marginal admits make the dean’s list; and middle-of-the-road students can go either way.

We still believe there is that “it”: that one thing that will tell us all we need to know. If only we could measure–or even define–it.

We get lazy. If enough students with high scores have “it,” our confirmation bias goes into high gear, despite evidence to the contrary.  We suppose it’s a kind of intelligence, and we embrace logical fallacies as we celebrate our ersatz discovery.  It’s what I call the Poodle Fallacy: If you have a poodle, you know you have a dog; but not having a poodle doesn’t prove you don’t have a dog. We believe that high testers have “it,” and we forget that low testers might have it too.  Frequently, of course, they do.

Never mind that the inventor of the multiple-choice test, Frederick Kelly, called them tests to measure lower-order thinking skills.  Never mind that in real life–in business, medicine, law, education, engineering–the right answer is never presented for you to choose.  And never mind that often it’s difficult to ascertain the right question to ask, let alone having the luxury of a 25% chance of guessing the right answer (or improving your chances by eliminating obviously wrong options). If school is not like life, it may be doubly true that neither are tests.
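The guessing arithmetic above can be made concrete: with four options, a blind guess succeeds a quarter of the time, and eliminating one obviously wrong option raises the odds to a third. A quick sketch (the 20-question section is hypothetical, not any real test’s format):

```python
from fractions import Fraction

options = 4
p_blind = Fraction(1, options)          # 1/4: pure guess among four options
p_after_cut = Fraction(1, options - 1)  # 1/3: one wrong option eliminated

# Expected number right on a hypothetical 20-question section
questions = 20
print(float(p_blind * questions))      # 5.0
print(float(p_after_cut * questions))  # ≈ 6.67
```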

University of Maryland professor William Sedlacek, who has done a considerable amount of research on the role of non-cognitive variables in college success, recognizes that tests seek to measure cognitive and verbal ability (both important, of course), but that doing well in college also depends on students’ adjustments, motivations, and perceptions of themselves.  And Sternberg also touts skills related to creativity, wisdom, and practicality–not analytic and memory skills measured by tests–as necessary for leadership and citizenship.

Despite this, we knew a lot about how people were wedded to the idea and the practice of test scores. Having two children in high school in a test-obsessed school district, I’ve seen it first hand: Even English literature tests are multiple choice, and parents receive notices about how we need to encourage our children to do well on tests because a lot–an awful lot, like taxpayer satisfaction and state and federal funding–might be riding on them.

These tests, created by someone who’s never met our children, never taught them a class, and who can’t be sure they’ve even covered the material on those tests, carry a lot of weight.  And lots of people are heavily invested in them, some for reasons we don’t know and can’t figure out.


The headline still stung: DePaul dumbs down by dropping exams. A writer who had never spoken to me or anyone at DePaul, never gathered any feedback, and had apparently only read an article in the paper she was writing for (an article done as filler on a deadline) took some uninformed and irresponsible swings at us.  As you can imagine, it sent ripple effects through our offices, and sent me on a sort of tour of key constituencies: High School Counselors, Alumni, the President’s Cabinet, the Deans, the Associate Deans, College Advisory Groups, our own Student Government, and even our own division.

I spent a lot of time that spring explaining that we didn’t adopt a test-optional policy to a) get more selective, b) raise median test scores we report, c) garner publicity, d) increase diversity or e) ruin the university.  And I demonstrated why none of those was plausible, anyway.  People inside and outside DePaul were very receptive, and supportive, of our initiative.

My one regret is that we didn’t anticipate or prepare for the backlash, especially the hardest type to respond to: The opinions of the uninformed. To anyone who is thinking of doing this, I would only advise that you get ready for a lot of weak opinion masquerading as knowledge.

I’d be remiss if I didn’t mention two important things here: One is that I am not opposed to standardized tests, but rather to the weight they carry in many important discussions and analyses of our educational systems.  For the very many students who don’t test as well as their native intelligence suggests they might, the tests can be the thing that kills dreams, even when those students have worked hard, taken everything their school offers, and excelled. And even, many times, when they have “it.”  (And, I have to admit, on occasion a test also serves as a “ticket out” for some kids.)

The second is that I know many good people at agencies that conduct standardized testing.  Unlike some, I don’t think they’re evil or wrong-headed or driven by impure motives.  I believe most of them are working hard, and trying to do a really hard job: Measuring the capacity to handle college work, and to thrive in it.  It ain’t easy; I just find it hard to believe that we can sum up anything in a single number.  To a person, everyone at the agencies I’ve talked to has understood why we did this–why DePaul’s mission makes us a good candidate for it–and they’ve been nothing but professional and collegial.   I continued to serve on the College Board Regional Council and DePaul staff have been asked to speak at the ACT Enrollment Planners Conference.


The results of our first class, after one year at DePaul, are in.  And while the students who completed their freshman year have a long way to go before we pronounce test-optional an unqualified success, the results are encouraging.  As a reminder, we collect the scores for every student post-admission, as part of the research studies we’ll be conducting, but we didn’t know those scores at the point of admission.

And now that we’ve presented the results to our Faculty Council, I can share them more widely.

After one year, the entering class of 2012 at DePaul shows the following:

  • Freshman-to-sophomore retention was virtually identical: 84% for test-optional students and 85% for testers.
  • GPA for testers was .07 of a grade point higher (not statistically significant), despite a median ACT score that was 5.5 points lower for test-optional students.
  • Believing that income has a big effect on academic performance, we split the class into Pell Grant recipients and those who did not receive Pell.  Not surprisingly, Pell status means a lot more than people think: Pell testers and Pell test-optional students performed identically, as did non-Pell testers and non-Pell test-optional students. The resultant effects of poverty are meaningful.
  • In two of our colleges, test-optional students had higher GPAs than testers.
  • None of the test-optional students started the second year on academic probation, compared to 1.7% of testers.
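For readers curious what “not statistically significant” means for a gap like that .07 of a grade point: one simple way to check such a gap is a permutation test, which pools the two groups’ GPAs, reshuffles them many times, and counts how often a difference at least as large appears by chance. A sketch with made-up GPAs, not our actual student data:

```python
import random
import statistics

random.seed(1)

# Made-up first-year GPAs, ten students per group (not DePaul's data)
testers  = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3, 3.2, 2.7, 3.5, 3.0]
optional = [3.0, 2.9, 3.2, 2.8, 3.1, 2.6, 3.3, 2.9, 3.4, 2.8]

observed = statistics.mean(testers) - statistics.mean(optional)

# Shuffle the pooled GPAs and count how often a gap at least as
# large as the observed one appears purely by chance
pooled = testers + optional
n = len(testers)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    gap = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if abs(gap) >= abs(observed):
        extreme += 1

print(f"observed gap: {observed:.2f}, p-value ≈ {extreme / trials:.2f}")
```

If the shuffled data produce a gap that large a sizable fraction of the time, the observed difference is consistent with chance.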

We noticed a few things we’ll research further: The first-year GPA discrepancy was higher in the College of Science and Health than in any other college, at .25 of a point. Testers earned slightly more credits than test-optional students, but again, this difference disappeared when we split by Pell Grant status.

We have a long way to go to put any research questions to rest; a thorough analysis is an integral part of our agreement with Faculty Council at DePaul as we move through the four-year pilot program. But for now, we’re moving ahead, buoyed by the results so far.

Test-optional applications dropped in our second year, much to our chagrin.  Colleagues at other institutions predicted this would happen, as students learn that “test-optional” does not mean “ability optional.”  And while we have officially been agnostic about whether students apply with or without tests, we do hope that well prepared students from rigorous high school programs will continue to consider DePaul, regardless of their standardized test scores.

What do you think?


P.S. A special thanks to my DePaul colleague Carla Cortes for helping with editing and checking my facts.

When you question everything you think you know

Once in a while, something comes along that can make you think deeply about everything you have held to be true.  This is not one of those times, but it does support a lot of things I’ve believed to be true, even though I was in the minority in thinking so. And there is a certain elation in being vindicated.

Warning: You’re going to have to watch an 18-minute video to put this in context.

You may already know Malcolm Gladwell.  He’s an interesting guy, and he makes his points by telling good narratives, which makes him a great teacher.

In this video, he addresses several things that people in higher education, and, to a lesser extent, society in general, tend to debate:

  • The choice of “big fish, small pond” vs. “small fish, big pond.”
  • The wisdom of admissions officers trying to figure out how much of a break to give to a kid from a “good school.”
  • The self-image people develop based on their surroundings, in the context of “Relative Deprivation Theory.”  (This also may explain why students write about community service as such a profound experience.)
  • Why firms that hire only from the “best schools” are probably making a huge mistake.

Watch, and prepare to be astonished.


On watching my daughter head off to take the PSAT

It’s Saturday, October 19, exactly 167 years to the day after the first of my ancestors to come to America arrived at the port of New Orleans, on a ship from Bremen, Germany, called the Manco.  In fact, it was so long ago that Germany wasn’t even a country then.  If you’re interested, here’s a copy of the form that was filled out as they entered the United States.  It’s unlikely the clerk at the point of entry could understand them well enough to spell their names correctly; and given that the name had been spelled so many ways in the records before 1846, it’s likely they didn’t think it mattered too much anyway.

We don’t know exactly why they left Germany to come here, but we think it was to escape military drafts; or perhaps they were being persecuted for their good looks:

Elizabeth and Franz

I thought of my ancestors as we got up early today to see my daughter, aged 15 for a few more days, off to take the PSAT, and wondered what they’d think if we could explain to them the way things operate these days.  And the more things unfolded, the more I thought about it.

Emily couldn’t find her calculator this morning, which is of course an important thing to have for this test. Normally, I’d stress out about things like this, but I instantly realized that it’s not really a big deal, so I shrugged my shoulders and let it slide.

For her the stakes are very low; she’s always done very well on standardized tests, but is probably five percentile points outside the range of National Merit cutoffs.  In other words, this is about as low-stakes as this high-stakes test gets.  Right now, the extraordinarily selective schools where scores really matter aren’t on her radar, and I doubt they will be when all is said and done.  Her outstanding academic performance should speak for itself when she applies to college, and will be supported by scores commensurate with her achievement in the classroom, for the schools who really care about such things.

But I can say that because I know what I know.  I thought about all the kids who don’t have parents who have worked in admissions and enrollment for 30 years.  For them, all testing is seen as very high-stakes.

Emily, like her brother before her, is a very bright kid, and yet by any measurement–standardized or not–she and her brother are very different people. Both are quiet and thoughtful, with varying degrees of genetic cynicism like their father, and healthy doses of warmth and affection like their mother. One is punctual and the other chronically late; one takes things as they come, while the other is focused and organized and always looking ahead; both have messy rooms and an interest in music; one is a whiz at math, while the other is facile with language and has been bitten by the acting bug. Multiply this by millions of kids, and you have a portrait of high school students in America and around the world.

And yet, we send all of them–future artists and chemists and actors and financial analysts and doctors and engineers–off to take a single test written by someone who has never met them; someone who has never taught them a class; someone who has never had the opportunity to see how they excel.  It is the most sterile, and in some ways, the most inappropriate use of the word assessment one might encounter.

We send them to take a test that purports to measure something important about them and all the other kids who are in, or who have gone through, high school.  Perhaps it does measure something important, if you consider the ability to pick the right (or most correct) answer from four given.  We all know that’s not at all like real life (where you’re not always even sure of the question, let alone given a range of discrete answers), but if given the choice, we’d probably all want to have that skill just in case; if nothing else, being able to eliminate wrong answers might come in handy if you’re ever on Let’s Make a Deal.

It’s important, however, to remember that the creator of the multiple-choice assessment considered the tests to be measures of lower-order thinking skills, despite their current reputation to the contrary. Now these tests are used not just to evaluate students but even entire school districts, as a recent email from my kids’ school points out.  And tests created by someone who’s never taught my kids drive how my kids are taught.

When the scores come in, we’ll look at them, of course.  And when the mail arrives in bushel baskets, we’ll sort through it all, lingering fondly over some, and sending some unopened to the recycle bin.  But we’ll never define a complex human being by scores on a three-hour test on a Saturday morning; my earnest hope is that our kids don’t allow that to happen to themselves.

Is College Tuition Too Low?

I admit it: At times I like provocative headlines to shake things up a bit. But I think this one is worth thinking about.

Recently, on my other blog, Higher Education Data Stories, I published a re-formatted Tableau visualization from the Chronicle of Higher Education. You can look at it here. The takeaway is that a) about 80% of college students in America go to public institutions, and b) about three-quarters of those who do pay less than $12,000 per year to do so.  So, for now, my headline applies mostly to public higher education.  (As you can see, in order to be able to afford the cost of the typical private, four-year college or university, you need to be somewhere in the top 5% of all family incomes in America. I don’t think anyone thinks private college costs are too low.) Thanks to Royall and Company, from whom I shamelessly copied this graphic.

But the fact that they pay anything at all is interesting.  By charging tuition to state residents, we are effectively saying that college education is not an entitlement.  If it were, it should be free to those who qualify and who wish to pursue a bachelor’s degree.

As a result, the low tuition allowed by state subsidies for this non-entitlement means that everyone gets a break, even if they don’t need it.  In other words, if a child of Bill and Melinda Gates decides to go to the University of Washington, people who are worth far less than the Gateses (in other words, everyone else in Washington) end up subsidizing that child’s education.  Bill and Melinda seem like nice people, and I’m sure they’ve raised very nice children; this is nothing personal.  But it seems a bit unfair, don’t you think?

When a public university keeps its in-state tuition artificially low (measured against the fully loaded cost of instruction) so that even the wealthiest citizens of the state get a subsidy, guess what happens? Wealthy people tend to take advantage of it. And this squeezes out kids with fewer options, mostly those further down the economic food chain.

Every state I know of that has done a study (like this one from Minnesota, in 2008; chart on p.9) has found that median family incomes at state-supported institutions are higher than at the state’s private colleges and universities.  This was true even before the huge economic shakeout; with the push to enroll more out-of-state students (read: generate more revenue) happening at most flagships, state residents are likely to get squeezed via greater selectivity.  And, as I’ve ranted about before, greater selectivity and the race for prestige puts more pressure on low-income kids, because the things more selective colleges favor (test scores and the money to get coached, access to AP classes, well-crafted essays that have been professionally scrubbed, and activities you can only do if you don’t have to work to help with family expenses) skew wealthy.

The fix is not easy: First, a real tool to measure a family’s ability to pay for college.  Second, state and federal programs to fill the gaps.  Then, and only then, should state universities raise their tuition dramatically, to cover the full cost of instruction, so that those who deserve it can get it, and those who can afford it pay for it.  Sorry, Bill. Sorry, Melinda.

Another Blog of Mine You Might Like

I plan to keep writing using words, but I’ve also become interested in telling stories with data; and I think the stories I’m most qualified to tell are about higher education.

So, I’ve launched a new blog called Higher Education Data Stories.  I hope you’ll check it out, and I hope you find it interesting.

If there are stories you think should be told and could be told with data, let me know.  If a picture is worth a thousand words, it should save me a lot of time.

There are no Stealth Applicants. But there are a lot of Stealth Colleges

Lots of my colleagues are back from Toronto, where they attended the NACAC Conference.  Lots of discussion this year, as in previous years, has focused on “stealth applicants.”  I’ve disliked this term from the very first time I heard it. In fact, I find it offensive.

For those of you who don’t know, stealth applicants are students who apply out of the blue: they haven’t requested information from the admissions office in a formal way, and as far as the admissions office can tell, they’ve never visited campus during a formal tour.  Never sat in a presentation at the high school.  Never even walked up to a table at the college fair.

Who do these kids think they are?

There are two problems with this: First is a problem of definition.  Many places farm Student Search and EOS mailings out to third-party vendors and never load the names into their student database.  They’ve sent two, four, six, or more emails, letters, or brochures to these students; the students assume they’re on the mailing list and don’t take the time to request more information.  (Really, you’ve mailed them five pieces; do you think they’d automatically understand they should fill out a card to request more?)  The colleges thus don’t know these students were on the list of names they purchased from the College Board or ACT, because they only put names in the database when students actively inquire via traditional means.  When students like this show up later as applicants, the admissions office is flummoxed.  So much for personal attention.

If you think about it, you could use this to teach Alanis Morissette the real definition of irony: Many of the colleges complaining about stealth applicants are the same ones that bombard students with emails containing links to “Fast Apps,” encouraging kids to apply to a school they’ve never heard of because a) it’s free and b) it’s easy.  The students should be referring to Stealth Colleges: They come out of nowhere, previously unseen, and drop weapons of class construction on poor, unsuspecting children (and yes, most of them are children).

That’s the small problem.

The big problem is the arrogance of colleges: The assumption is that students should know they need to express some interest, because it’s so much easier for us.  And we think this way because fifty years ago, the only way you could find out about a college in the next state was to write and request information–often just a catalog.  (You could call, of course, but that was expensive.) Colleges got used to that: Everything was orderly and predictable.  No more: It’s now possible to find out almost everything you need to know about a place by looking online, not just on our websites, but dozens or hundreds of others, like College Confidential, College Prowler, Peterson’s, USNWR, etc.

And this, we seem to suggest, is the fault of the students.  They were born too late to figure out how things should be done.

Instead of blaming them, how about we understand the world has changed and deal with it?  Isn’t that one of the things you’re supposed to learn in college?  And isn’t part of the reason people laugh at colleges because we make so many assumptions about the way things should be, based on the way we feel the most comfortable?

It’s time for the profession to grow up.  These kids could teach you a thing or two.