Graduate Students as OSCQR Reviewers

[Image: OSCQR digital badge]

In the beforetimes (summer 2019), I had access to some graduate assistant hours and needed to find a project for them. Since this group of graduate assistants was destined to become educators, I thought it would be interesting to train them on the OSCQR rubric and have them be "reviewer 1" and "reviewer 2" on a few course reviews that I wanted to undertake. I took on the role of the instructional designer in this exercise (reviewer 3). Now, I know that the faculty members teaching these courses also need to be part of the conversation, but more on that later...

My original goal for this exercise, beyond the actual review, was to conduct a collaborative autoethnography of the process of having graduate students conduct OSCQR reviews of courses that they themselves had most likely taken as learners. Content-wise, the material should have been similar even if the instructors and modalities were potentially different. Well, the Fall 2019 semester got very busy, and then we've been living in COVID world since 2020. Additionally, those students graduated and moved on with their professional lives, so such a paper is no longer possible. I considered using an autoethnographic approach with just my own reflections on the process (after all, I still have most of my notes), but I've got several other irons in the fire, and I am not sure how useful it would be to go through the process of finding a journal and getting a peer-reviewed version out. Recently, though, I was inspired by Maha and her blogging of an unpublished paper, so I thought I would give this approach a try. So here it goes!

The Need

It's been my observation over the last 15 years in higher education that there are many silos that just don't connect. One might expect silos to exist between departments, but through interactions with various departments I've come to the conclusion that many faculty know what they teach and have a vague idea of what other courses exist in their department; however, they don't have deep knowledge of what goes on in those courses. Additionally, even when faculty are "peer reviewing" a fellow colleague's course (including courses of colleagues in other departments), there is a reluctance to address issues with pedagogy or materials because of "academic freedom" 🙄. Even things like accessibility get rolled into the avoidance technique of academic freedom, which makes course improvement an issue. I suppose there is a curriculum committee in each department for this, but as a designer, every time I've brought it up with academic departments I get audible grunts or eye rolls. No one wants to peer review someone else's course, even with a rubric.

In any case, for this project, I was interested not in what course materials faculty were using in their courses, but in things like the alignment of objectives, readings, and activities (making sure the connections were clear to learners), accessibility, design, and tool usage. My main focus was the student experience, and the goal was to present findings and recommendations in the Fall semester. This would be a good opportunity to open up a dialogue between the faculty on course improvement and cross-course communication in general.

The Assistants

The graduate assistants who were my reviewers for this project were students 25-30 years old who were mostly done with their degrees, so they had already experienced most of the curriculum that they were reviewing through a student lens. Some courses were new to the reviewers. None of the assistants had any instructional design experience, and their pedagogical training had only covered face-to-face contexts. My hope in training them on OSCQR during this review was that they would also be able to take what they learned, both from OSCQR and from reviewing fully online classes, into their face-to-face classrooms (and virtual classrooms if the need arose). Each assistant had 10-12 hours per week of work (if I remember correctly).

The Process

The process started with a virtual training session on OSCQR 3.0. This took place over Zoom since, even in 2019, we had a hard time getting everyone to campus at the same time. It was also not necessary to meet face-to-face for this. I walked the three assistants through the OSCQR rubric and its annotations and provided a quick boot camp on instructional design for online learning. This was the highlights-of-the-highlights version of ID for OL. To demonstrate the rubric, I used one of the soon-to-be-reviewed courses as an exemplar.

There were 18 courses to be reviewed. These were the most commonly offered courses in the department, so they represented the biggest bang for the time spent. With two assistant reviewers per course, that meant 36 review slots split among three assistants, so each assistant would review 12 courses over the summer. This also meant that we were reviewing about one course each week of the summer, and once a week we tried to have a debrief about our findings. I was available for questions throughout the week. At the conclusion of this review, each of the 18 courses had qualitative feedback from the three reviewers.
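As an aside, if you want to see how the pairings can rotate so the load splits evenly, here's a minimal Python sketch (purely illustrative; the reviewer names and the round-robin scheme are placeholders, not a record of how we actually made assignments):

```python
from itertools import combinations, cycle

# Placeholder names; the actual assistants aren't identified here.
assistants = ["A", "B", "C"]
courses = [f"Course {n:02d}" for n in range(1, 19)]  # the 18 courses

# Rotate through the three possible reviewer pairs so the
# 36 review slots (18 courses x 2 assistant reviewers) split
# evenly: 36 / 3 assistants = 12 reviews each.
pairs = cycle(combinations(assistants, 2))
assignments = {course: next(pairs) for course in courses}

load = {a: sum(a in pair for pair in assignments.values()) for a in assistants}
print(load)  # {'A': 12, 'B': 12, 'C': 12}
```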

What worked in the training and what didn't?

One area for improvement is definitely the demo course used to train the reviewers on OSCQR. I didn't have much of an opportunity to create a sample course for the training, so we all evaluated a course from the pool of courses that were going to be reviewed that summer. I think there are pros and cons here. One pro is that that course goes under the microscope more than it normally would; on the other hand, you risk anchoring every subsequent review to that initial course, so reviews might end up recommending that courses look like the first one (if the reviewers thought it was particularly well designed).

In retrospect, I should have used another course; for example, I could have reached out to a colleague in another department to see if they'd volunteer their course for the training. I did end up using one of my old INSDSG courses to provide exemplars and alternate ways of addressing elements of the different OSCQR categories. I think the Zoom training session worked well, and the weekly debriefs worked well enough. If I were to do this again, I'd formalize the debrief a bit: maybe ask reviewers to show and tell things that they felt worked particularly well in the courses they reviewed, along with any questions they had. I might also include a weekly reflection.

One of the things I wanted to get at with some sort of ethnographic component in the original project was whether student reviewers had any trepidation about reviewing courses in their own department, whether they felt they needed to be positive, or whether they feared retaliation. I didn't sense a lot of this, but it would be good to have some data.

Finally, I think that 18 courses was a little much for the reviewers we had available. The reviews felt a bit like a conveyor belt rather than the opportunity for review and conversation that I had hoped to foster.

How did the courses fare?

Most courses made it through the OSCQR review with only minor or moderate changes suggested. I think this is partly a sign of the maturity of the program (courses have undergone many revisions over the last 15 years) and partly due to faculty familiarity with things like Quality Matters and OSCQR. Also, in order to teach online (at least back then), faculty needed to complete some preparatory coursework. Additionally, our ID group had begun an accessibility campaign the year prior to the review, so I have a sense that many of the inaccessible elements were handled then. We did find some accessibility issues, but I think there would have been more had our colleagues in ID not been proactive with their initiatives.

Some of the elements flagged for review included onboarding information, such as technological requirements for the course; links and information about campus services (library, accessibility office, tech support, etc.); contact info for the department; and, of course, the accessibility of attached documents and presentations. My two big takeaways here are these:

1) Some common elements of courses (such as access to campus services, department contacts, and so on) can go into a template that every course can use. While I don't think one template could apply to every single course a department offers, it should at the very least include the common things you want every student in the department to have and know. Onboarding will vary depending on the topic, the course, and the instructor, but some things can be standardized. I think having faculty develop their own department's template(s) would go a long way toward helping learners recognize the signposts in each course so that they can wayfind with more ease, while still retaining faculty voice in the decisions behind those templates.

2) Accessibility is an ongoing effort! It needs to be baked in, not sprinkled on (as my old friend Margaret used to say). Even though we worked hard in 2018-2019 to address accessibility issues, there were still elements of courses that were inaccessible. I think this is one of those battles that requires a few stakeholders to be involved. The faculty member should certainly strive to make the content as accessible as possible right from the start, but maybe there could be a team in the department or the college that helps ensure last-mile compliance. Automation has certainly helped a lot, for example automatically generated closed captions, but those should still be verified by a human.
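To make "automation helps, humans verify" a bit more concrete, here is a minimal (and purely illustrative) sketch of the kind of check that can flag accessibility issues at scale: it scans course HTML for images with missing or empty alt text, using only Python's standard library. The sample page snippet is made up, and alt text still needs a human judgment call (is the image decorative? is the description meaningful?):

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collect the src of any <img> tag whose alt text is missing or empty."""

    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs_dict = dict(attrs)
            alt = (attrs_dict.get("alt") or "").strip()
            if not alt:
                # Record the src so the image can be located and fixed.
                self.flagged.append(attrs_dict.get("src") or "<unknown>")

# Made-up course page snippet; in practice, feed in exported course HTML.
page = '<img src="syllabus.png" alt="Course syllabus"><img src="chart.png">'

checker = MissingAltChecker()
checker.feed(page)
print(checker.flagged)  # ['chart.png'] -- flag for a human to write alt text
```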

Reflection: Technosolutionism and Faculty Learning Communities

Over the past decade (or more), our ID group has been great! They've nurtured faculty who've been teaching online (at least from what I can see from my end), but there is a certain spirit that seems to just stick around: a faith in technosolutionism, from automatic AI transcriptions, to "plagiarism detection" tools like Turnitin, to remote proctoring. From chats with other colleagues, some departments seem to see cheaters and plagiarizers everywhere. This isn't healthy.

Another issue arises with adjunct faculty. There needs to be a way to welcome our adjuncts into the overall discussion and training around teaching and learning, but it can't be uncompensated. Full-time faculty can count this kind of thing toward their PD and be paid for it, whereas adjuncts are only paid for the time they are in the classroom. If we want to include our adjunct colleagues in these discussions (which in turn translate into classroom pedagogies), we need to compensate them for their time and encourage (or require?) participation in faculty learning communities, both intra- and inter-departmentally.

I also think it's a good idea (at least for graduate students who will be going into teaching after they graduate) to prepare them to review courses and offer constructive feedback, even to people they know. I am not sure how to get current faculty members unstuck from the fear of making suggestions to their colleagues (you know... because of "academic freedom"), but maybe we can break that cycle with the next generation of teachers and IDers.

That's it for now.  Your thoughts?


