Are Accreditors Failing Students?

This seems to be a common claim from today’s USDOE, citing poor oversight of quality and outcomes, misplaced focus on non-core metrics, the cost of the accreditation process, and permitting institutions to offer failing programs. Let’s unpack each of these over the next few posts.

First, poor oversight of quality and outcomes. A focus on quality is important; in fact, if you have been listening recently, it has become common for those speaking on behalf of the DOE to refer to accreditors as quality assurance organizations. From what I have seen to date, this attention to “outcomes” centers solely on employment and post-graduation wages. Completion must be associated with that somewhere along the way. Current accreditors require institutions to monitor factors related to student achievement, and everybody has various measures of retention and completion in that mix. What current accreditors don’t require is monitoring of these measures for each program. If you have worked with me, you know that I advocate for that approach. Down here in the region formerly known as SACSCOC Land, we are talking about Core Requirement 8.1. I am a big fan of every 8.1 measure being disaggregated at the program level, as applicable. Every year.

What current accreditors don’t do is require specific measures. Beyond the required completion measure, where institutions picked one from a list (almost invariably one that cast them in the best light), there is no specified set of measures. This allows institutions to let their mission inform the measures. Why are post-graduation outcomes like employment, in-field employment, and wages not commonly listed as measures? Well, that information is difficult to come by. Some states (Alabama, I’m looking at you) simply won’t share it. If this is to become a required measure, will we see corrections based on socioeconomic factors? Using the South as an example, a graduate in Alabama’s Black Belt won’t earn as much as a graduate from the same program in Jacksonville, FL. Will such a requirement be adjusted based on the institution’s region, or on where a student finds a job? Other measures have their own issues.

Warning: this is a soapbox moment. Pass rates on licensure exams absolutely don’t belong among the measures of institutional-level student achievement. Well, they *almost* always don’t belong. Why not? Most institutions have only a small number of programs that culminate in a licensure or certification exam, which means you are using a measure involving a very small percentage of students as a measure of institutional success. I worked this out recently with a client: their institutional-level measure of student success, in the form of a licensure pass rate, involved less than 0.4% of their student body.
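To make the arithmetic concrete, here is a minimal sketch of how small a slice of the student body such a measure can cover. The enrollment and exam-taker figures are hypothetical round numbers chosen to land near the 0.4% in my client example, not the client’s actual data:

```python
# Hypothetical figures: 30 licensure-exam takers at an institution
# enrolling 7,500 students. Illustrative only, not real client data.

def licensure_share(exam_takers: int, total_enrollment: int) -> float:
    """Percentage of the student body covered by a licensure pass-rate measure."""
    return 100 * exam_takers / total_enrollment

share = licensure_share(30, 7_500)
print(f"The pass rate describes only {share:.1f}% of students")  # 0.4%
```

A pass rate built on that sliver may be a perfectly good program-level measure; it just says almost nothing about the institution as a whole.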

Warning: second soapbox moment. Lagging measures have simply got to be made to lag less, and institutions need to rely on measures built from internal, real-time data as much as on data from external sources like IPEDS or the National Student Clearinghouse. I have worked with many primarily or exclusively two-year degree-granting institutions. Almost invariably they have chosen the IPEDS completion measure of 150% of catalog time (three years for a degree meant to be completed in two). So a cohort starts in, say, Fall 2020, and the three-year mark is at the end of the summer semester in 2023, essentially August 2020 through July 2023. But wait, there is the data lag. Those numbers are processed and reported, then washed through IPEDS, so they are not available until much later. If there is a problem with this completion measure, you don’t know until four or five years after the cohort started, and any changes you make seeking improvement won’t be observed for another four or five years. I also see this at institutions in state systems, where the lag in system-level data makes it unworkable to inform real-time changes when goals are not being met.
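The timeline above can be sketched in a few lines. The two-year reporting lag here is my assumption about how long processing and IPEDS publication take, a round number for illustration rather than an official IPEDS figure:

```python
# Back-of-the-envelope timeline for the IPEDS 150%-of-catalog-time
# completion measure at a two-year institution. The reporting lag is
# an assumed round number, not an official IPEDS figure.

def first_actionable_year(cohort_start_year: int,
                          catalog_years: int = 2,
                          reporting_lag_years: int = 2) -> int:
    """Earliest year the completion rate for a cohort is usable."""
    tracking_window = round(catalog_years * 1.5)  # 150% of catalog time
    return cohort_start_year + tracking_window + reporting_lag_years

# A Fall 2020 cohort is tracked through summer 2023; with the assumed
# lag, the rate is not in hand until roughly 2025, five years in.
print(first_actionable_year(2020))  # 2025
```

And that five years is only half the story: a change made in response won’t show up in the same measure until a later cohort works through the same window.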

So, are accreditors failing institutions when it comes to measures of success? Institutions are required to have measures of student success, to monitor attainment of the goals set for those measures, and to use all of that information to seek improvement in student performance. Accreditors fail to specify particular measures. Accreditors fail to set the attainment goals for institutions. Accreditors allow institutions to let their mission define the measures and set the goals. Accreditors fail to chastise institutions when they fall short, as long as reasonable efforts are being made to seek improvement. I don’t think accreditors are failing institutions in this situation. Institutions should have the freedom to operate and evaluate themselves within the context of their mission and the students they serve. That being said, institutions sometimes fail to set aggressive goals and fail to respond in meaningful ways when those goals are not attained.

Next time we will take up the “misplaced focus on non-core metrics.”

Published by Douglas A. Wymer

Throughout an academic career spanning nearly 20 years, Dr. Wymer participated in many site visits (both substantive change and reaffirmation visits) for the Southern Association of Colleges and Schools Commission on Colleges (SACSCOC) and he has been a visiting team member for the Accrediting Commission for Community and Junior Colleges with the Western Association of Colleges and Schools. In addition to serving as a team member, Dr. Wymer has served as a visiting committee chair for SACSCOC. After earning a B.S. in Biology (with a minor in Chemistry) from what was then Shorter College, an M.S. in Entomology from Clemson University, and a Ph.D. in Environmental Science from Tennessee Technological University, Dr. Wymer started a rewarding career in academia. He earned tenure and achieved the rank of Associate Professor of Environmental Sciences at The University of West Alabama and served in a number of administrative roles at UWA including Department Chair and Assistant Dean. He served as a Department Head at Pensacola State College and, after a year in that position, was promoted to Dean of Baccalaureate Studies and Academic Support. In 2016 he became the Vice President of Academic Affairs at Lake-Sumter State College, where he served for four years before launching Southeastern Accreditation Consultants.
