Why Disaggregated DFWI Rates Matter for Equitable Learning
Understanding and addressing DFWI rates is critical to the cause of equity in higher education. The DFWI rate is the percentage of students who finish a course with a D grade, an F grade, a (W)ithdrawal, or an (I)ncomplete. (Some institutions refer to it as the DFW rate and don’t include incompletes in the calculation.)
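The arithmetic behind the metric is simple. As a minimal sketch, with hypothetical grade data for a single course section:

```python
# Hypothetical final grades for one course section (illustrative only).
grades = ["A", "B", "D", "F", "W", "C", "I", "B", "C", "A"]

# The DFWI outcomes: D, F, Withdrawal, Incomplete.
DFWI = {"D", "F", "W", "I"}

dfwi_count = sum(1 for g in grades if g in DFWI)
dfwi_rate = dfwi_count / len(grades)

print(f"DFWI rate: {dfwi_rate:.0%}")  # → DFWI rate: 40%
```

An institution using the DFW variant would simply drop `"I"` from the set.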
But DFWI rates disaggregated by race and ethnicity are poorly documented. DFWI rates lie at the intersection of gateway courses, digital learning, and equity. The correlation between gateway courses and high DFWI rates is well understood; gateway courses are where courseware is most widely adopted; and courseware and other digital learning products are offered with the promise of closing gaps in academic achievement for racially and ethnically minoritized students.
That means DFWI rates should be a powerful early warning indicator of whether a digital learning initiative is delivering on its equity promises. DFWI rates in introductory courses can tell institutional leaders and instructors, long before graduation day, where major barriers to persistence and completion lie. However, there is very little publicly available data about DFWI rates that are disaggregated by race and ethnicity (or, for that matter, by any other important identifying category such as first-generation or low-income status).
This is part of a general tendency in higher education not to work with disaggregated data at the programmatic or course level. U.S. higher education has some data-informed glimmers of equity gaps in the inputs (high school experiences and college admissions) and in the outputs (graduation rates and employment). But higher education has almost no disaggregated data about what goes on between first-year orientation and graduation day. The particular experiences of racially and ethnically minoritized students are obscured in the data coming from the academic affairs divisions of most U.S. colleges and universities.
I have identified only two excellent examples of nationwide disaggregated data. One is the 2019 American Council on Education report Race and Ethnicity in Higher Education and its 2020 Supplement. Together, they include over 200 indicators, disaggregated by race, ethnicity, and income, on pre-college academic preparation, admissions, financial aid, student borrowing, family income, degree completion, graduation rates, and employment outcomes. The other is a report from the National Center for Education Statistics, Status and Trends in the Education of Racial and Ethnic Groups 2018, which summarizes pre-primary, K–12, and higher education progress data disaggregated by race and ethnicity.
Yet disaggregated DFWI rates aren’t included in either of these examples or in any other source of national data I can find. The disaggregated indicator closest to the DFWI rate is first-year persistence, but that still doesn’t illuminate the role of particular gateway courses in equity gaps.
It is reasonable to infer that high DFWI rates depress first-year persistence and, ultimately, graduation rates, but in most reports, a disconnect remains between aggregated DFWI data and disaggregated persistence and graduation data. That disconnect limits the ability of institutional leaders to analyze which fields and courses—and which teaching practices or elements within a course—are creating barriers to equity for racially and ethnically minoritized students.
In short, most of the data produced in the academic affairs division of a college or university—at the level of courses, programs, departments, schools, and colleges—tends to aggregate all “underserved” or “underrepresented” students into one monolithic group. This unproductive homogenizing limits the ability of higher education leaders to identify and confront the unique equity barriers encountered by Black, Latino/Latina, Indigenous, Asian American and Pacific Islander, low-income, and first-generation students. This includes barriers that may contribute to DFWI rates, so the early warning indicator is not as helpful as it could be.
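The alternative to that monolithic bucket is straightforward: compute the same DFWI rate per group rather than once for all "underserved" students. A minimal sketch, using hypothetical records and generic group labels in place of real student categories:

```python
# Hypothetical records: (self-reported demographic group, final grade).
# A real analysis would also disaggregate by first-generation and income status.
records = [
    ("Group A", "A"), ("Group A", "F"), ("Group A", "B"),
    ("Group B", "D"), ("Group B", "W"), ("Group B", "C"),
    ("Group C", "B"), ("Group C", "A"), ("Group C", "I"),
]

DFWI = {"D", "F", "W", "I"}

# Tally enrollments and DFWI outcomes separately for each group.
totals, dfwi = {}, {}
for group, grade in records:
    totals[group] = totals.get(group, 0) + 1
    if grade in DFWI:
        dfwi[group] = dfwi.get(group, 0) + 1

rates = {g: dfwi.get(g, 0) / n for g, n in totals.items()}
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.0%}")
```

A gap that is invisible in the pooled rate (here, 4 of 9 students overall) becomes visible the moment the denominator is split by group, which is exactly the early warning signal the aggregated data hides.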
With or without disaggregated data at the national level, individual institutions must disaggregate their own data, because regionality and institution type matter. Most informal discourse about U.S. higher education tends to prioritize exclusive and flagship institutions at the expense of broadly accessible institutions, and many studies are limited to four-year colleges and therefore ignore the experiences of students at community colleges.
It’s also important to keep in mind that there is significant heterogeneity in all these racial and ethnic categories, with variations in language practices, religion, immigration status, and experiences with colonial institutions. National data about Latino/Latina or Asian American students will be less predictive about those students in some regions of the country than in others. And both national data and local data would still be only one step toward understanding the variety of barriers to equity within those and other racial and ethnic groups.
That’s where the research and reporting practices on individual campuses matter. Faculty, administrators, student affairs professionals, academic support professionals, instructional designers, and institutional research professionals must collaborate to ensure they are working with an accurate picture of their student body; doing so requires not aggregating students into meaningless categories. As Estela Mara Bensimon, of the Rossier School of Education at the University of Southern California, wrote in “The Misbegotten URM as a Data Point,” bundling underrepresented minority (URM) students together is a form of educational malpractice.
Disaggregating data is not easy, but when educators don’t see disaggregated data reported alongside DFWI rates, they should ask hard questions. The early warning indicator is less powerful without disaggregated data, and progress on equitable teaching and learning will be limited as a result.