Assessment in a Generative Connectivist Future
Hey! Blogging two days in a row! Who would have thunk it? Well, I did tell Dave I'd read his post, and it did move some gears in the ol' noggin' so I guess something is still working upstairs ;-)
I think you should go and read Dave's post, since I'm just going to reflect on and react to a few select points. Dave introduced me to an article by Schinske and Tanner (2014) where they describe four purposes for assessment: feedback, motivation, student ranking, and the objective evaluation of knowledge.
There were two things that jumped out at me: (1) the objective evaluation of a learner's knowledge and (2) ranking learners.
From a philosophical perspective, I don't think that it's the instructor's job to rank learners. As an instructor, I am not there to indicate whether Tom is marginally better at X than Dick. My job is to help learners work through a particular corpus of knowledge that they should then be able to do something with. That type of comparative assessment only really exists on a one-to-one basis. As an undergrad I had a professor, let's call him Hans, who really believed in the bell curve. 🙄 On the first day of class, he announced that there would be only so many As and that most people would fall in the B/C range. I don't know what his feelings or beliefs were about the other end of the bell curve (the Ds and Fs); I don't think we ever found out. The knowledge that no matter how well you do, you are ranked against others is demotivating. If I know that my grade is most likely going to be in the B range, I'll most likely nope out of most things in class and strategically choose what to participate in. If I were a student in an AI world (assuming the AI generation was worth anything), I'd most likely be tempted to just autogenerate whatever garbage, since assessments would be more about ranking than about a belief that any of it is actually useful. As an aside, I still, to this day, wonder what a belief in a statistical probability chart is 🤷♂️. And as an instructional designer, I also must have missed the ID class where it was my job to help devise assessments to rank people, instead of actually...assessing their knowledge and application of that knowledge 🤣
The other thing that jumped out at me was the objective evaluation bit. The more time I've spent teaching, the more I've come to the conclusion that I cannot objectively evaluate the entire corpus of the class I teach. Well, I could, but it would take a very (very very) long time. Instead, what I've observed happening is that we use snapshots of concept evaluation as a proxy measure for the entirety of the corpus of knowledge that we try to cover in our classes. We pick concepts that may be more "important" than others, or concepts that can be used like key puzzle pieces so that students can fill in that negative space with concepts and knowledge adjacent to what we're testing. Ultimately, one cohort of CLASS101 is not going to be evaluated the same way as another cohort of CLASS101.
This reminded me a little bit of a conversation I had with one of my doctoral professors at Athabasca. We were discussing comprehensive exams at the Master's level. He was telling me that AU moved from a comp exam to a portfolio because, ultimately, their comp exam was neither comprehensive nor an exam.
In any case, back to course design. Dave writes that the internet (over the past 30 years) has changed the educational landscape. The way I see it, these changes track some different eras of the web. Here's what Dave wrote (a bit paraphrased and expanded) - learners have...
- The availability of connections to other people to support learners in their education journey - examples being SMS, group chats, and various IM chat clients (Yahoo, ICQ, MSN, etc.). I would say that this was my generation of college undergrad. Not everyone did this (there is a certain amount of access privilege associated with it), but classmates seemed like a good source of peer learning whenever we got stuck.
- The availability of pre-created content through sites like Chegg and through Google searches: content that can be used to respond to any existing assessment. This is just a digitized version of the ol' sneakernet of years past, where someone who had taken the course before could share an exam with others. This was the mode of concern up until the ChatGPT paranoia hit in early 2023.
- The availability of generative systems that can create responses to assessments, whether they are "correct" or not. This is where we are now with things like ChatGPT.
While reading Dave's original post, I was reminded of conversations about connectivism over the past 10+ years. This is, in fact, an instantiation of connectivism: we connect to human and non-human nodes, "learning" seems to reside in non-human appliances, and decision-making is itself a learning process. It's this last point I want to focus on a bit, because I think it has implications for design, teaching, and, of course, assessment. If we take decision-making as the center of our learning experience, what kind of content is our sine qua non (SQN) content? These are the minimum elements that we need to begin to make decisions, and that also allow us (as learners) to unravel connections to other learning that we need to do. Dave writes that "with the plethora of options for students to circumvent the intent of our assessments this require the rethinking of the way we design our courses." I agree. The question is: what is at the core of that learning experience? Not necessarily content (although it is important to some extent), but rather the ability to be lifelong learners: to take in new inputs, assess them, and perhaps use them to make decisions about an ever-evolving, ill-defined set of problems that come our way in our personal and professional lives.
Whaddyathink?