Last time we wrote about the claim that accreditors are failing students through poor oversight of quality and outcomes. This time we will look at the remaining claims: that accreditors fail students through a misplaced focus on non-core metrics, through the cost of the accreditation process, and by permitting institutions to offer failing programs.
I’m not sure what a non-core metric is, but SACSCOC only specifies one metric that must be used, and that metric is completion. I have worked with a wide variety of institutions and found that there are many common measures: fall-to-spring persistence, fall-to-fall retention, gateway math success, and gateway English success, as well as various credit-accumulation milestones and enrollment-intensity measures. What do all of these (maybe “non-core”) metrics have in common? They are all early indicators of completion and of the associated measures of job placement and wages. Today, when DOE officials say “measures of student success,” they mean post-completion wages. If institutions don’t monitor the early metrics, they won’t know there is a problem with completion and wages until it is too late. DOE wants licensure pass rates in the mix as an institutional measure of student success, and that is just not a good idea. Call me up; we can talk about it.
Next is the cost of accreditation. Documenting what you do so you can demonstrate it to your peers does take some time, and time is money. Member dues are another expense, but those won’t be going away. The largest part of the expense, in my experience, comes when institutions have not been as faithful as they should be to their own policies and processes. If you follow Dr. Pruitt’s Law or Lore series, the idea that accreditation is not an event but a process sums this up nicely. I spend most of my consulting time working with institutions on a small fraction of the accreditation standards because ongoing processes have not been faithfully and meaningfully executed. Administrative outcomes? Student support outcomes? Student learning outcomes? Strategic plan reporting? Evaluating external credit? Personnel evaluations? I could go on. To quote Dr. Wheelen, “Some problems take longer to fix than others.” Again, time is money.
Finally, my favorite: the claim that accreditors allow institutions to offer failing programs. I agree that this is a problem, but not for the reasons the DOE may think. Every institution should regularly examine its entire program portfolio and should know whether each program is losing money and, if so, how much it is costing to operate. DOE seems to limit its thinking to failing programs that lead to low-paying jobs for graduates. Institutions should be carefully watching enrollment, job demand, wages, and similar indicators when evaluating programs. That said, there is mission to consider. I once worked at an institution whose associate degree nursing program was losing more than $1 million a year. That was a mission-driven program, and the loss was not a deciding factor when it came to keeping it. Keep your mission in plain view, but that program with five students? You know the one. Why is it still there?