Beyond university rankings: promoting transparency and accountability
May 24, 2022 | 8 min read
By Christopher L. Eisgruber
Exploring alternative metrics and information sources that can reduce reliance on university rankings.
Last fall, I published a Washington Post op-ed arguing that the business of ranking colleges, as done most prominently by US News & World Report but also by a growing plethora of competitors, was “a bit of mishegoss — a slightly daft obsession that does harm when colleges, parents, or students take it too seriously.”
My argument was newsworthy not because it was novel but because Princeton, the university I lead, has sat atop the US News rankings for the last 11 years. As I said in the column, I take great pride in Princeton’s teaching and research — but I also admire many other colleges and universities, and I think it is silly to suppose that one can rank them as though they were competing athletic teams.
This post is from the Not Alone newsletter, a monthly publication that showcases new perspectives on global issues directly from research and academic leaders.
I will not repeat my case against the rankings here. It seems unnecessary given the audience. I have yet to meet the academic administrator who thinks that ranking colleges is a smart idea (though I suppose there must be one out there who does). The notes and letters I received in response to my op-ed were uniformly positive, which is a first in my 30-plus years as a professor and administrator. Moreover, in the short time since I published the column, indictments of the rankings game have continued to accumulate. Colin Diver, for example, recently published a lucid and comprehensive critique of the “rankings industry” in his book Breaking Ranks: How the Rankings Industry Rules Higher Education and What to Do About It. His treatment of the topic is superb, and I recommend it to any readers who remain undecided about whether ranking colleges is a good idea.
Nevertheless, rankings seem to be, like much of human behavior, indefensible in theory but inevitable in practice. Despite the fallacious premises embedded in them, and the damage they do, the rankings are, as Diver notes, “here to stay.” The question is how best to live with them.
The answer is no doubt multifaceted. I want to emphasize one particular idea. If college and university leaders want to diminish unreasonable reliance on rankings, we need to offer or support alternative metrics and information sources that students or parents can use to choose among schools.
Higher education leaders have too often resisted that obligation, pivoting quickly from criticism of rankings to assertions that because colleges and universities serve multiple missions, they are more or less incommensurable. That claim is almost as indefensible as the rankings themselves. I suspect that few college or university administrators treat schools as incommensurable when (for example) advising their own children about where to go to school.
So what metrics matter? Graduation rates are crucial. A college that does not graduate its students is like a car with a bad maintenance record. It costs money — lots of it — without getting you anywhere.
Students, their families, and policymakers deserve to have the most complete information about graduation rates, including data about how students with various characteristics fare at different schools and programs.
That is one reason why I strongly believe that individual colleges and their professional organizations should support the bipartisan College Transparency Act, which would authorize the federal government to collect student-level institutional data (commonly referred to as “student unit record data”) and report it in ways useful to prospective students and their families. For example, the Act would enable a military veteran to view the graduation rate for veterans at schools across New Jersey. The House of Representatives passed a version of the Act in February.
Public institutions already face extensive disclosure requirements, but private colleges and universities have sometimes resisted the College Transparency Act and predecessor legislation. Opponents of the legislation usually highlight student privacy concerns related to government collection of individual student unit record data. The general concern is legitimate, but the appropriate response should not be to scrap the legislation but to ensure that it contains adequate protection for data collected. In my view, the College Transparency Act contains effective mechanisms to address this concern.
A second concern relates to data collection burdens on universities. The bill’s Advisory Committee will work to minimize any additional burdens. In my view, the benefits to students would far outweigh any new obligations.
Another issue raised about graduation rates pertains to interpretation of the data. People point out that an institution’s graduation rates depend in part upon the pool of students that it serves, so that highly selective schools like mine have an advantage. Colleges and universities provide a valuable public good by educating students who arrive less thoroughly prepared for post-secondary education, and we should recognize that serving this population will tend to lower a school’s graduation rate.
True — but the solution should be to account for risk rather than to abandon the entire project. Higher education leaders need to help regulators and reporters determine what counts as a good graduation rate for different student profiles. Doing so is important: if students from disadvantaged backgrounds emerge from college with debt but no degree, they will often be worse off than if they had never attended at all. Indeed, most loan defaults involve students with small debt levels and no degree — students who get the degree have a better shot at repaying even large loans.
One way to solve the problem is to compare graduation rates within sub-sectors of higher education. That was the approach taken, for example, by the bipartisan ASPIRE Act, which would have provided incentives for institutions to raise lagging graduation rates, and authorized funding for schools with the lowest graduation rates to develop completion improvement plans. The bill would also have compelled institutions to do their fair share to educate Pell Grant recipients.
The mechanisms in that Act were imperfect, but the basic principles reflected a sound understanding of how to hold colleges accountable for educating and graduating students while also making government resources available to enable colleges to comply. President Biden’s Build Back Better proposals likewise included a completion fund to pay for evidence-based approaches to increase completion and retention rates at colleges and universities that serve high numbers of low-income students.
Graduation rates, of course, are not the whole story. In my Washington Post column, I suggested a partial list of other information that applicants should consider when choosing a four-year undergraduate degree program: some measure of post-graduation outcomes; net cost; levels of faculty quality and student-faculty interaction; and indicia of a learning culture with high standards where a diverse group of students study hard and educate one another (for example, data about how much the average student studies outside of class).
A good starting point is the Department of Education’s College Scorecard, which has benefited from the attention of James Kvaal, who is now Under Secretary of Education in the Biden administration after serving previously in the Obama administration. The Scorecard makes it easy for applicants to compare institutions on the basis of graduation rates, net cost and median earnings 10 years post-attendance. Readers can drill down into each institution’s profile to obtain more detailed information about a wide variety of elements, including the diversity of the student population.
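For readers who want to go beyond the Scorecard’s web interface, the same data are also exposed through a public API hosted at api.data.gov. The short Python sketch below is only illustrative: it assumes the requests library, a free api.data.gov API key, and field names that should be checked against the current Scorecard data dictionary rather than taken as definitive.

    # Minimal sketch (not an official client) for pulling a few College Scorecard
    # metrics through the public api.data.gov endpoint. The endpoint and the
    # api_key / fields / filter parameters are part of the documented API; the
    # specific field names below are assumptions and should be verified against
    # the current Scorecard data dictionary.
    import requests

    API_KEY = "YOUR_API_DATA_GOV_KEY"  # placeholder; request a free key from api.data.gov
    BASE_URL = "https://api.data.gov/ed/collegescorecard/v1/schools"

    # Illustrative (assumed) field names for graduation rate, net price, and earnings.
    FIELDS = ",".join([
        "school.name",
        "latest.completion.rate_suppressed.overall",
        "latest.cost.avg_net_price.overall",
        "latest.earnings.10_yrs_after_entry.median",
    ])

    def lookup(school_name):
        """Return matching schools with the selected metrics as a list of dicts."""
        params = {"api_key": API_KEY, "school.name": school_name, "fields": FIELDS}
        response = requests.get(BASE_URL, params=params, timeout=30)
        response.raise_for_status()
        return response.json().get("results", [])

    for row in lookup("Princeton University"):
        print(row)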
The data are, of course, imperfect. The numbers reflect averages across students and degree programs. Earnings data often do not include all programs. Even when they do, the reporting may omit some — or many — students. “Individual results will vary,” as they say in commercials! In addition, people may exaggerate the importance of small differences in Scorecard data even where the data are wholly accurate. Salary is not the only post-graduation outcome that matters. For some, it is not the most important one. And so on.
It is, however, the rare (and privileged) family that can afford to ignore post-graduation salary or net cost. The averages in the College Scorecard provide at least prima facie evidence of real differences. If a college has a relatively low graduation rate, high net cost and comparatively low post-graduation salaries, applicants should proceed with caution — even if some reasonably conclude that the school is nevertheless right for them.
Colin Diver is skeptical about the Scorecard, describing it in his book Breaking Ranks as reflecting a “homogenizing mentality.” I appreciate his point: I would not advise anybody to choose a college solely by maximizing Scorecard variables. But I think that characterization underestimates the value of having easily understandable information about the basic issues that matter to most families.
And if the Scorecard is flawed, then all of us in higher education leadership positions have an obligation to help improve it or produce something better. We should help to create comprehensible metrics that students and families can use when making choices, and that regulators — and we ourselves — can use to hold colleges and universities accountable to the goals and the people that higher education should serve.
The rankings have power because they meet a real need. If we must live with them, we certainly don’t have to let them be the only game in town. It is up to us not simply to resist the rankings, but to cultivate and support better alternatives.