New Approaches to Assessing Institutional Effectiveness
In its annual performance and accountability report, the City University of New York discloses the gap between the actual and predicted graduation rates of full-time, first-time freshmen in baccalaureate programs at its eleven senior colleges.
The report, which controls for the socioeconomic status and academic preparation of the students each college serves, reveals that two institutions within the CUNY system stand out, doing a much better job, relative to their student populations, than the other nine senior colleges.
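The arithmetic behind such a report is straightforward: subtract each campus's predicted rate from its actual rate and flag the outliers. A minimal sketch, with invented figures for illustration rather than actual CUNY data:

```python
# Actual-minus-predicted graduation-rate gaps across campuses.
# All numbers below are invented for illustration, not CUNY data.
campuses = {
    "Campus A": {"actual": 0.62, "predicted": 0.51},
    "Campus B": {"actual": 0.48, "predicted": 0.50},
    "Campus C": {"actual": 0.55, "predicted": 0.43},
}

# A positive gap means the campus outperforms what its student
# population would predict; a negative gap means it underperforms.
gaps = {name: round(r["actual"] - r["predicted"], 2)
        for name, r in campuses.items()}

# Flag the standouts: campuses beating their prediction by 5+ points.
standouts = [name for name, gap in gaps.items() if gap >= 0.05]
print(gaps)
print(standouts)
```

The hard part, of course, is not this subtraction but the regression model that produces the predicted rates in the first place.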
The disparities within the CUNY system are not unique. A growing body of research has laid bare a troubling reality: Institutions with similar student populations and characteristics produce very different outcomes in terms of graduation rates and post-graduation student success.
As Raj Chetty and his associates, Anthony P. Carnevale, Stephen J. Rose, and B. Cheah, and Jorge Klor de Alva and Cody Christensen have shown, even after controlling for a wide range of demographic and institutional variables, some colleges and universities consistently graduate students and move them up the economic ladder at rates far above what would be expected.
The reasons for these disparities are complex, and differences in a campus's geographical location, senior leadership, curricular focus, and students’ preferred majors certainly contribute to inequalities in outcomes.
Still, we must ask: Are there steps we can take to incentivize other institutions to follow the example of these pacesetting institutions?
Currently, higher education relies heavily on program reviews and the re-accreditation process to drive institutional improvement. But both approaches have been subjected to withering criticism from all sides.
External program reviews are rife with conflicts of interest, since they rely on colleagues within a particular discipline. Accrediting agencies, meanwhile, stand accused of imposing excessive costs and administrative burdens on institutions and of stifling innovation and experimentation, while failing to require sufficient evidence of student learning, to do enough to advance equity and diversity or improve instruction, or to hold institutions accountable for their retention and graduation rates and students’ post-graduation outcomes.
Noting that the re-accreditation process tends to be opaque and that chronic underperformance too often goes unpunished, former Secretary of Education Arne Duncan called accreditors “the watchdogs that don’t bark.”
To be sure, the accrediting agencies have taken many of the criticisms to heart and now require colleges and universities to provide measurable evidence of institutional effectiveness: whether a college or university is in fact fulfilling its mission, meeting its strategic goals, and promoting a culture that values continuous improvement in teaching and student services.
Rather than add to the debate about whether accreditation and program review need to be radically reformed, I would like to examine three approaches that might embolden institutions to take meaningful steps toward improving educational quality, promoting equity, and enhancing student outcomes.
Each of these approaches respects institutional autonomy and faculty governance and recognizes that institutions have distinct missions and goals.
All of the major accrediting agencies require institutions to systematically collect data in order to evaluate educational quality and effectiveness and use this research to inform institutional planning, priorities, resource allocation, and campus policies and practices.
However, institutional effectiveness is not well defined. There are, for example, currently no widely agreed upon, reliable measures of student learning or instructional quality. Instead, institutions rely on indicators that are easier to measure, but which only indirectly reflect educational quality: retention and graduation rates, time to degree, average class size, student-to-faculty ratios, student course evaluations, student accomplishments and honors, participation in study abroad and undergraduate research, student and alumni satisfaction, and, in some instances, post-graduation employment or education.
So what might be some new ways to assess institutional effectiveness? Here are three.
A Peer Benchmarking / Best Practices Approach
Why not require institutions to benchmark their performance against peer institutions? Under this “management-based” approach, institutions would identify outcome goals, based on their peers’ performance, devise plans to achieve these goals, and monitor and regularly report progress toward achieving those goals in terms of implementation and impact.
This comparative approach would require campuses to familiarize themselves with best practices elsewhere, evaluate those practices’ effectiveness, and consider whether to adopt or refine those approaches. We all have much to learn from our peers in terms of policies, practices, procedures, organization, and technology tools.
Considering how peer institutions handle shared challenges can provide just the jolt institutions need if they are to overcome inertia and complacency.
An Equity Approach
Why not require every institution to conduct an academic equity audit: a curriculum-wide analysis of disparities in DFW (D, F, and withdrawal) rates by class and course section, in access to high-demand majors, and in the course registration process? Then couple this with an in-depth examination of student satisfaction and exit surveys and a detailed study of the experiences of transfer students.
That analysis then needs to be followed by department- and program-level plans to address the inequities, for example through course redesign or the establishment of supplemental instruction sections, bridge programs, or supervised research experiences.
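The DFW portion of such an audit is, at bottom, a simple disaggregation of grade records. A minimal sketch, assuming section-level records are available; the records and field names here are invented for illustration:

```python
from collections import defaultdict

# Sketch of one piece of an equity audit: DFW (D, F, withdrawal) rates
# disaggregated by course section. Records and field names are invented.
records = [
    {"course": "MATH 101", "section": 1, "grade": "F"},
    {"course": "MATH 101", "section": 1, "grade": "B"},
    {"course": "MATH 101", "section": 2, "grade": "A"},
    {"course": "MATH 101", "section": 2, "grade": "W"},
    {"course": "MATH 101", "section": 2, "grade": "A"},
]

DFW = {"D", "F", "W"}

def dfw_rate(rows):
    """Share of enrollments ending in a D, an F, or a withdrawal."""
    return sum(r["grade"] in DFW for r in rows) / len(rows)

# Group records by (course, section) and compute each section's rate.
sections = defaultdict(list)
for r in records:
    sections[(r["course"], r["section"])].append(r)

rates = {key: dfw_rate(rows) for key, rows in sections.items()}

# Large gaps between sections of the same course are the audit's red flags.
spread = max(rates.values()) - min(rates.values())
```

A real audit would disaggregate further, by student demographics and instructor, but the shape of the computation stays the same.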
A Rubric-Driven Approach
Rubrics spell out our priorities.
Why not require each institution, as part of the re-accreditation process, to describe, and provide data on, the steps it is taking to improve the quality of instruction and student learning in the following areas:
- The proportion of faculty who receive ongoing professional development training in pedagogy and course design.
- The proportion of courses redesigned each year with the assistance of instructional designers, educational technologists, and assessment specialists.
- Trends in the use of academic support services, including tutoring, supplemental instruction, and the writing center, and the impact of those services on student grades.
In addition, institutions could be asked to spell out how they are integrating experiential learning, active learning, and career preparation across their curriculum and how they are assessing gains in student learning in the areas spelled out in the college or university’s requirements (for example, proficiency in written and oral communication, cross-cultural competence, global awareness, and numeracy).
You are what you measure.
Here are four principles that might guide the way we think about institutional effectiveness:
1. The measure of who we are is what we do.
Every reputable institution claims to value diversity and inclusion, but only through careful analysis can we verify that we are actually treating all of our students equitably.
2. What’s measured gets done.
The public and the state and federal governments demanded that colleges and universities demonstrate higher education’s return on investment, and, as if by magic, completion rates rose.
3. What you don’t measure can’t be improved.
If we want institutions to place a higher priority on high quality teaching, make high impact learning experiences a bigger part of an undergraduate education, and prove that their students actually acquire the knowledge and skills we value, we need to put in place measures and proxies.
4. What gets measured gets managed.
How does your institution schedule classes? By tradition? Inertia? Or by measures of student demand? Without data that include registration requests and wait lists, class scheduling is, at best, guesswork.
Measurement is often difficult. But that doesn’t mean we must resort to standardized testing, impose intrusive classroom observations, or diminish academic rigor and standards. We can use a variety of direct measures – quizzes and exams, reports and research papers, performances, and assignments that mimic professional practice – and indirect measures, including performance in more advanced classes.
Steven Mintz is professor of history at the University of Texas at Austin.