ETMOOC Session 1 Ponderings

[Image: me as a Star Trek-themed anime character, generated with the AI image generator NightCafe]

Just as session 2 of #etmooc2 is scheduled for this evening, I've finally caught up with the first session over the last few days. The recording can be found here, and it's funny that it took me 3 days to get through it. Part of it was that I could only really do 20-minute increments (with notes and reactions), and part of it was that I paused to experiment with things mentioned.

Part of the session was dedicated to identifying ways in which this kind of technology can help with what we do - essentially flipping the script from "ZOMG! ChatGPT is used for cheating" to "how can we use ChatGPT to help us with learning?"

There were a number of examples in this brainstorming session that raised red flags for me. I did think of a few examples of my own that may (or may not) be good uses of tech like this. I'll start with my concerns, though.

Example 1: Using ChatGPT to grade. This is a use case of a kind of machine-human collaboration. It was acknowledged that the machine can't accurately grade everything, and that the instructor should look over, correct, or supplement the machine's output, but that this could be a potentially revolutionary use of the tech. I'm not convinced. First, I have issues with feeding student submissions into such a system without appropriate guardrails. We've seen past actors in this space take student submissions and appropriate them for their own use. Students should not have to consent to having a "smart" machine grade their submissions as part of their learning experience. My second issue is that machine grading takes away Teacher Agency to some extent, and it may be given up in the name of being more "efficient" or "less burned out." Teachers and course facilitators are in the classroom for a reason. If grading submissions is becoming an issue, it's important to interrogate why it's becoming an issue instead of throwing some LLM at it.

Example 2: Continuing with that thread of human-machine collaboration: when working with ChatGPT, it's like you're working in a group, except instead of other humans, your team-mate is an AI. Maybe if AIs were like Mr. Data on Star Trek, I might have a different opinion. Right now LLMs are like dumb appliances. They can "learn," but they are essentially machines. Collaboration requires agency, scope definition, goals, and drive, which machines simply do not have. In Connectivism you can have interactions between human and non-human/appliance nodes, but I would not go so far as to say that they are collaborating. It's not even a one-sided "collaboration" for the human in that equation. When you're collaborating in a team, you don't have to fact-check your team-mates' contributions. You can have sufficient overlap between areas of expertise to have more than one pair of eyes on claims made, and people who are more expert at something can ELI5 it to other team members, but ultimately there is a back and forth. In a human-machine "collaboration," the human needs to be an expert in the subject to know where the machine goes wrong and correct it. In a learning context, I think this is potentially detrimental to the learning process. It's not the knowledge navigator future we've dreamed of - at least not yet.

The question that came to my mind is this:  why are some folks thinking of LLMs as a "collaborator" and not looking at Google search as a collaborator?

Example 3: OK, final critique here. One of the things I've heard over the last few weeks is something along the lines of: "if you are a good prompt engineer you can get some amazing information, which you have to fact-check." There are just too many conditionals there for an LLM to be the kind of study buddy mentioned above in example 2. This reminds me of my undergraduate days, when I learned about library databases and how to search for resources using Boolean logic. Yes, you needed to play around with your logic and your search terms (and sometimes you needed to learn controlled vocabulary), but you got actual sources that you could read, evaluate, and cite. I think prompt engineering is less a sign of something that learners need to learn and more a sign of a system that is still half-baked ;-). That said, I come back to the fact that you need to know how things work in order to assess whether the output is of any use (or even factual). An example that comes up is people learning another language: you write something in English (assuming English is not your native language) and pop it into an LLM to have it convert that text into something more "native sounding." Among other issues, when you're learning a language it's useful to know why one option sounds more correct than the others. An LLM could do it for you, but that doesn't help you progress as a learner. We had an example of why it's important to know your stuff (even if machines help) in Star Trek: Picard this season. The short version: the ship's captain is brought to sick bay with some symptoms, and the veteran doctor realizes he has internal bleeding that the younger doctor's medical imaging devices failed to catch. If the veteran doctor didn't know her stuff, the captain would be dead 🤷🏻‍♂️

Anyway, this post is getting too long, so I'll save my ideas for using ChatGPT/AI for another post ;-)


Thoughts? 


~~~~~~

Just for documentation purposes, here are the objectives of the first session:

By participating in the synchronous Zoom session and any additional activities you pursue as part of your own learning experience, you will see how your colleagues are responding to ChatGPT. By the end of the live session and completion of any other activities you pursue, you will be able to:

  • Identify at least three ways ChatGPT might be of benefit to you and those you serve in your section of our lifelong learning environment
  • Anticipate at least three challenges ChatGPT may pose to you and those you serve
  • Describe at least one way you may begin incorporating ChatGPT into your work or describe at least one step you can take to overcome a challenge you face in incorporating ChatGPT into your lifelong learning efforts

