Rolleyes... LLM edition
It's the winter break, and now that I have some actual downtime, I decided to do some Microsoft training. I think the last time I had the mental space to do any of this on Microsoft Education was sometime in 2021 (if the last badge earned is any indication). Anyway, I went through the course offerings to see what's on tap at Microsoft, and I came upon a whole load of AI-related things. Cool. While I've been paying attention to this whole AI thing, I haven't really paid much attention to what corporate training says about these products (and how they might be used).
I've seen some colleagues post their badges on LinkedIn, so I thought I'd also follow the AI for Educators learning path on Microsoft Education to get conversant with what others on my campus are experiencing through these trainings.
Now, AI has been touted as a time saver on a variety of fronts, a claim that I think has yet to pan out. As I was going through the AI for Educators training, the following use case scenario was presented:
An educator in a secondary class needs to create a rubric for an upcoming writing unit in Greek mythology, then write an exemplar response for learners to follow. Facing what could be hours of work to complete, the educator turns to an AI tool and starts to enter in the prompts. The educator starts with the rubric. They turn to Microsoft 365 Copilot Chat and paste in the state standards and description of the upcoming Greek mythology writing unit. Then, they ask Copilot Chat to create a 20-point rubric including all the information they pasted. It’s completed, but it’s not quite what they were expecting. After interacting with Copilot Chat with a few more clarifying prompts like “Make the wording better for a 13-year-old,” the educator has the rubric. Finally, the educator asks the AI to write an exemplar response based on the rubric it created.
It should be pointed out that in the examples the Microsoft training gives, the typical learner is in a K-12 setting, and the example above reiterates this. This type of scenario isn't unique in its over-the-top-ness; I've seen similar use cases given as examples elsewhere. The thing all of these scenarios have in common is the feel of a late-90s or early-2000s infomercial, where the exasperated user tries to do something "simple" (like drain pasta in a colander) only to have the task explode in their face (or have the pasta go all over the sink and down the drain, as was the case with the Pasta Pro Cooker, or whatever is happening with the following Tupperware containers 😂).
Anyway, I would expect that if someone is preparing lessons and activities on subject matter they know, they shouldn't need an LLM to create this stuff for them. Furthermore, why in God's name would you outsource the rubric creation? Don't you know what you want out of the activity? And why would you want the LLM to create an exemplar response? Why should novice learners emulate what an LLM produces? This all just seems highly sus to me 😒.