Sometimes great ideas start throwing up red flags very quickly.
I love data. Data is my jam. It’s why I’m currently on the Strategic Enrolment Management group at my university and why I’ve been part of a similar group at two previous institutions. Data and assessment are a great way to make things better for students. So I’m coming at this as exactly the type of person the author of this piece wants to recruit to their way of thinking, and I need to tell you: OH HELL NO.
Higher Education Needs Its Own Version of Moneyball
Let’s start with part 1 of the premise:
Higher education needs its own version of moneyball—a set of active, predictive and creative measures that can be deployed to improve student outcomes and fulfill their promise of student success.
Makes a lot of sense, and I fully agree. This is what a culture of assessment and SEM looks like. It’s amazing and I am here for it.
And then in the next sentence the red flag gets waved high:
Postsecondary institutions must be able to collect and instantaneously analyze student progress data and have intentional plans for adjusting in the moment to the needs of their learners.
There’s the oh-hell-no moment. Moneyball-style analysis works because it uses publicly accessible, consensually given, visible information. This asks us to tie every moment of a student’s day into a machine.
Okay, let’s dig into the points that support the premise.
- Part-time learners don’t complete their programs as often, so they should be assessed more
- But part-time learners often have responsibilities outside of school, like work or family, which usually means they are more likely to be financially disadvantaged students. So you’re asking us to put more surveillance on those we know are already over-surveilled and over-policed?
- There are also often reasons why students pick their course load. Supporting them in increasing that course load is a great idea; forcing them to increase it without addressing the reasons they didn’t think they could is awful.
- Productive credit hours as a measure – do students take more classes at certain times or days, or are there too many gateway classes preventing students from moving forward?
- This one makes sense and I’m here for it, but it’s just proper scheduling and doesn’t require real-time analysis, just semester-based analysis, which is what SEM already does.
- Predictive metrics
- There isn’t any information given for this one so I’ll have to make assumptions. “This planning starts with insights that enable institutions to identify opportunities for accelerating student progress and predict the efficacy of those interventions on retention and graduation rates.” Predictive metrics mean one of two things:
- Constant surveillance of students (how often they attend events, library use, on-campus computer use, assignment submission times, in-semester grades), which is problematic and sounds like a surveillance state.
- Assumptions about students based on statistical models, which often break down when applied to the individual. For example, in the USA, ~1–5% of adults are diagnosed with ADHD, but ~25% of adults in prison are diagnosed with ADHD. Does that mean ADHD is a predictor of crime, or that people who are institutionalized are more likely to receive a diagnosis? What about when you find out that ~20% of adults in post-secondary education have ADHD? Are predictions based on data always reliable? They might be in aggregate, but the idea here is to take that aggregate and apply it to the individual. That’s like looking at a normal curve and saying “well, people over 7′4″ don’t exist” when we know for a fact that they do; they’re just rare.
- Make the stats open to all
- Again, statistics in aggregate about the student body, even for relatively small groups, are a great idea: how many students take classes in X department, how many of them pass, how many are international students, etc. This data is important for SEM to identify gateway classes, problem pathways, programs that are missing something, or departments that are under-enrolled. But using it on specific individuals is dangerous.
- Do we really want a professor to be able to know how long a student takes to complete an assignment in someone else’s class? What about whether or not they use the library? How much data do you need about the specifics of someone? I don’t even like that my LMS lets me know how long ago a student accessed the system.
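The base-rate problem behind the predictive-metrics bullet above can be made concrete with Bayes’ theorem. Here’s a quick sketch with made-up numbers (none of these figures come from the article): even a risk model that is 90% accurate in both directions, applied to an outcome only 5% of students experience, will mostly flag students who were never going to have that outcome.

```python
# Hypothetical numbers for illustration only:
# a "dropout risk" model with 90% sensitivity and 90% specificity,
# applied to a student body where 5% actually drop out.
base_rate = 0.05      # P(drops out)
sensitivity = 0.90    # P(flagged | drops out)
specificity = 0.90    # P(not flagged | completes)

true_pos = sensitivity * base_rate            # flagged and actually drops out
false_pos = (1 - specificity) * (1 - base_rate)  # flagged but completes

# Bayes: of the students the model flags, how many actually drop out?
precision = true_pos / (true_pos + false_pos)
print(f"P(drops out | flagged) = {precision:.2f}")  # ~0.32
```

Roughly two out of every three students this hypothetical model flags as “at risk” would have been fine, which is exactly the aggregate-to-individual breakdown described above: the model can be genuinely informative about the population while being wrong about most of the individuals it singles out for intervention.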
Where this article is right: SEM is the way forward. Data is important and needs to be viewed by as many people as possible. Universities and colleges are filled with brilliant people; getting more eyes on a problem, along with the relevant data, means more potential solutions.
Where this article is scary: implying that we need to feed all data about students in real time into an analytics system and then turn that into predictive metrics of success.
It feels like an article written by someone who isn’t seeing students as people, but as bundles of data that they can access. That way lies teaching machines, but the way forward toward better developmental and lifelong learning outcomes for students (regardless of academic outcomes) is through relational connections on the individual level that are supported by using data on the macro level.