Institution Type Is Not Destiny

Why student-success strategy should follow the pathway, not the category


This is Part 2 of a two-part series on rethinking student success as a pathway. Part 1 asks where the pathway breaks. Part 2, this essay, asks what that means for institutional assumptions, benchmarking, and strategy.

Return to Part 1: Success Is Not a Graduation Rate

Student success is usually measured at the end of the pathway.

Graduation rates tell us who completes. They are essential, but they are also retrospective. By the time a completion rate appears, students have already moved through several institutional choke points: entry, first-year retention, continued enrollment, transfer, persistence, and eventual degree completion.

That means the more strategic question is not only how many students graduate, but where the pathway first begins to narrow.

In the first article in this series, I introduced a student-success pathway framework that separates institutions into four broad patterns:

  • Early-loss institutions, where students are lost disproportionately in the first year
  • Slow-leak institutions, where the pathway narrows gradually after the first year
  • Mobility institutions, where transfer plays a larger role in student movement
  • High-performing institutions with a narrow choke point, where overall outcomes are strong but one point in the pathway still warrants attention
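The four patterns can be sketched as a simple decision rule over cohort-level rates. This is only an illustration: the field names, thresholds, and ordering below are hypothetical and are not the essay's actual classification method.

```python
# Illustrative sketch: assigning a pathway "signature" to an institution
# from a few cohort-level rates. Thresholds are invented for illustration;
# the essay does not specify a classification rule at this level of detail.

def pathway_type(year1_loss, post_year1_loss, transfer_out, grad_rate_150):
    """Return one of the four pathway patterns described in the essay.

    All inputs are proportions of an entering cohort (0.0 to 1.0).
    """
    if grad_rate_150 >= 0.80:
        # Strong overall outcomes; the task becomes finding the one
        # remaining choke point rather than broad triage.
        return "High-performing with a narrow choke point"
    if transfer_out >= max(year1_loss, post_year1_loss):
        # Transfer dominates student movement.
        return "Mobility"
    if year1_loss > post_year1_loss:
        # Loss is concentrated in the first year.
        return "Early-loss"
    return "Slow-leak"

print(pathway_type(year1_loss=0.22, post_year1_loss=0.10,
                   transfer_out=0.08, grad_rate_150=0.45))  # prints "Early-loss"
```

The point of the sketch is the ordering of the questions, not the thresholds: strong aggregate outcomes, transfer dominance, and first-year concentration are checked before defaulting to a slow leak.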

This second article asks whether familiar institutional categories explain those patterns.

They help.

But they are not enough.

Sector, degree environment, selectivity, size, and resources all matter. They shape institutional context, student composition, academic structure, and the capacity to support students. But none of them, on its own, tells us where the student-success pathway narrows.

That distinction matters because higher education carries a long list of comfortable assumptions about student success.

Small institutions should do better because students receive more individual attention. Large public research institutions should struggle because students can get lost in scale. Private institutions should perform better because they often have more resources. Highly selective institutions should have stronger outcomes because they enroll students who are already more likely to complete.

There is some truth in those assumptions.

But not enough.

Institutional type is not destiny.

What the Usual Categories Mean

Small does not automatically mean student-centered in ways that show up in student-success outcomes. Large does not automatically mean impersonal. Private does not automatically mean better resourced or, if better resourced, in ways that students experience. Research-intensive does not automatically mean undergraduate success is neglected.

The data complicate those assumptions.

Bachelor’s-only institutions, despite their undergraduate focus, showed elevated Year 1 loss in the baseline pathway outcomes. Bachelor’s + Master’s institutions also showed elevated Year 1 loss and post-Year 1 loss to the institution. That does not mean smaller or less complex institutions are ineffective. It means that size and degree mix do not automatically translate into stronger pathway outcomes.

The research-doctorate assumption is also too simple. Research environments are often presumed to be less attentive to undergraduates, but institutions with research-doctorate environments were strongly represented among the high-performing pathway group. That does not prove research intensity causes student success. It suggests that research environments may also carry structural advantages: broader academic infrastructure, stronger resource bases, more program pathways, and greater capacity to support progression.

Sector matters, but it does not determine the pathway pattern. Public and private not-for-profit institutions both appear across the four pathway types. Private institutions are not automatically high-performing. Public institutions are not automatically slow-moving or weak on student success. Both sectors contain different pathway signatures.

Resources matter, but even that story needs care. High-performing institutions had the strongest resource profile, including the highest instruction spending proxy and lower student-faculty ratios. Early-loss institutions had the lowest instruction spending proxy and higher Pell share. That pattern is important. But it should not be flattened into “more money equals success.” The spending measures are proxies, not direct measures of student-level investment or service quality. They point to capacity, not causality.

The more useful conclusion is this: institutional categories provide context, but they are not diagnosis.

A small college can still have a pathway problem.

A large research institution can still support strong progression.

A private institution can still lose students early or later.

A public institution can still perform well.

A low student-faculty ratio does not guarantee success if students face financial, academic, or structural friction that the institution has not addressed.

The pathway view helps avoid both prestige bias and deficit thinking. It does not assume that selective, wealthy, or research-intensive institutions are automatically better. It also does not assume that access-oriented, public, or less selective institutions are simply failing.

Instead, it asks a more useful question:

Where does the pathway narrow, and what institutional conditions make that point of loss more likely?

Why This Changes Strategy

A pathway view changes the strategic question.

Instead of asking only, “How do we improve completion?” institutions should ask:

Where does the pathway first narrow?

For Early-loss institutions, the first place to look is the transition into and through the first year: onboarding, academic momentum, advising, financial friction, course-taking patterns, and belonging.

For Slow-leak institutions, the question shifts to what happens after initial persistence: progression, major fit, credit accumulation, course availability, financial persistence, and completion momentum.

For Mobility institutions, the issue is not simply retention or completion. It is the meaning of transfer. Are students moving by design, by choice, by constraint, or by institutional mismatch?

For High-performing institutions, the risk is complacency. Strong outcomes can make a remaining choke point easier to ignore.

The point is not that each pathway type comes with a fixed intervention.

The point is that each type changes where inquiry should begin.

For each pathway signature: where to look first, and the strategic risk if the signature is misread.

Early-loss
Look first at the transition into and through the first year: onboarding, first-semester course-taking, gateway courses, advising, financial friction, registration barriers, belonging, early alerts, and whether students become anchored before the second fall.
Strategic risk if misread: Treating early attrition as a student-preparation problem rather than asking whether the institution is creating too much friction at the point when students have the least margin for error.

Slow-leak
Look first at what happens after initial retention: credit accumulation, major fit, academic progression, advising continuity, course availability, financial persistence after year one, degree planning, and timely completion structures.
Strategic risk if misread: Mistaking first-year retention for success. Students may return for year two but still be drifting away from completion.

Mobility
Look first at the meaning of transfer: transfer destinations, credit loss, cost, program fit, advising before departure, articulation pathways, student intent, and whether students continue and complete elsewhere.
Strategic risk if misread: Treating all transfer-out as institutional failure, or the opposite mistake, treating all transfer as positive mobility without evidence. Transfer may be productive movement, institutional mismatch, or constrained departure.

High-performing with a narrow choke point
Look first at the localized barrier hidden by strong aggregate outcomes: gateway courses, major-entry requirements, limited clinical or internship capacity, transfer-student progression, financial holds, four-year completion, or subgroup equity gaps.
Strategic risk if misread: Assuming strong overall outcomes mean the student-success system is working for everyone. High averages can hide concentrated barriers for particular students, programs, or pathway stages.
The pathway type does not prescribe a universal intervention. It changes where institutional inquiry should begin.

The Benchmarking Problem

Bad benchmarking leads to bad strategy.

Institutions often compare themselves with peers on retention or graduation rates. Those comparisons can be useful, but they are incomplete if they do not show where the pathway differs.

Two institutions may have similar graduation rates but very different patterns of early attrition, transfer-out, no-longer-enrolled status, and delayed completion.

The reverse is also true. Two institutions may have similar first-year retention rates but very different completion outcomes.

Benchmarking only the endpoint rewards or penalizes institutions without explaining the underlying pattern.

A better approach compares institutions across the pathway:

  • first-year retention
  • post-Year 1 institutional loss
  • transfer-out
  • no-longer-enrolled status
  • four-year completion
  • completion within 150 percent of normal time
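A pathway comparison along these lines can be sketched as a computation over institution-level metrics. All figures and field names below are invented for illustration; real values would come from IPEDS, and the variable names there differ.

```python
# Illustrative sketch of pathway benchmarking: two hypothetical institutions
# with identical 150%-of-normal-time completion but different pathway
# profiles. Endpoint benchmarking treats them as peers; a metric-by-metric
# comparison shows where their pathways actually diverge.

METRICS = ["first_year_retention", "post_year1_institutional_loss",
           "transfer_out", "no_longer_enrolled", "four_year_completion",
           "completion_150pct"]

inst_a = {"first_year_retention": 0.72, "post_year1_institutional_loss": 0.10,
          "transfer_out": 0.08, "no_longer_enrolled": 0.10,
          "four_year_completion": 0.48, "completion_150pct": 0.62}

inst_b = {"first_year_retention": 0.85, "post_year1_institutional_loss": 0.18,
          "transfer_out": 0.05, "no_longer_enrolled": 0.17,
          "four_year_completion": 0.40, "completion_150pct": 0.62}

# Endpoint benchmarking sees two identical institutions ...
print(inst_a["completion_150pct"] == inst_b["completion_150pct"])  # True

# ... while comparing across the pathway surfaces the real differences.
for m in METRICS:
    gap = inst_a[m] - inst_b[m]
    if abs(gap) > 0.04:  # arbitrary reporting threshold for the sketch
        print(f"{m}: A={inst_a[m]:.2f}  B={inst_b[m]:.2f}  gap={gap:+.2f}")
```

Here institution A loses more students in the first year while institution B leaks later, yet both post the same six-year completion rate: exactly the distinction an endpoint-only benchmark erases.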

Context still matters. Sector, selectivity, degree environment, student composition, and resources all shape interpretation.

But context should inform diagnosis, not replace it.

This is also where public reporting and institutional analysis need to meet. Public reporting has raised hard questions about the way higher education normalizes six-year completion as “success,” even though many students enter expecting a four-year degree. Transfer reporting has also shown that students may lose substantial credit when they move between institutions, turning mobility into delay or loss. These public-facing accounts reinforce the same underlying point: the pathway matters. Time, credit, transfer, aid, course access, and institutional process all shape whether students complete.

Sources, Methods, and Limits

This essay draws on public institutional data and on a wider public conversation about student success, persistence, completion, transfer, delayed completion, and institutional friction. The technical analysis uses IPEDS to examine pathway outcomes across public and private not-for-profit bachelor’s-granting institutions. Public reporting, federal data sources, sector research, and selected peer-reviewed studies help explain why those patterns matter in practice.

The core data source is the National Center for Education Statistics’ Integrated Postsecondary Education Data System, especially graduation-rate, retention, transfer-out, financial-aid, admissions, institutional-characteristics, and finance measures. IPEDS is useful because it provides comparable institutional data. It is limited because it cannot fully explain why students leave, transfer, delay completion, or stop out.

National Student Clearinghouse data are especially useful because they distinguish retention at the starting institution from persistence anywhere in higher education and track year-by-year progress across retention, persistence, transfer, completion, and stop-out. That distinction directly supports the pathway view: leaving an institution is not always the same as leaving higher education.

The broader retention literature also matters. Work on student departure and institutional retention climates reinforces a central point of this essay: persistence is shaped not only by student characteristics but also by the environments, expectations, resources, and structures through which students encounter the institution.

Public reporting and applied research help make the mechanisms visible. Reporting on course bottlenecks, transfer-credit loss, delayed completion, and satisfactory academic progress rules shows how institutional process can become student attrition. Research on course scarcity similarly shows how course availability can lengthen time to degree. Federal Student Aid rules on satisfactory academic progress show how academic difficulty can become a financial-aid eligibility issue. GAO reporting on transfer-credit loss shows how mobility can become delay when credits do not count.

There are limits to what public institutional data can tell us. IPEDS cannot show student intent. It cannot fully explain advising quality, belonging, financial strain, academic confidence, course bottlenecks, family obligations, or whether a transfer improved a student’s trajectory. It also cannot prove causality.

But it can show something important: student loss is not evenly distributed across the pathway.

The Bottom Line

Institutional categories are useful. They are not enough.

Sector, size, selectivity, degree environment, and resource profile help interpret student-success patterns. But they do not, by themselves, diagnose where the pathway breaks.

A student-success strategy built only around broad categories will miss the specific point where students lose momentum.

The better question is not whether an institution is public or private, small or large, selective or access-oriented, research-intensive or undergraduate-focused.

The better question is where the student-success pathway narrows, and what institutional conditions make that narrowing more likely.

Context matters.

Diagnosis matters more.


Get in Touch

If something here connects, feel free to reach out.

robyn.e.hannigan@gmail.com