Software Goes Through Beta Testing. Should Online College Courses? I don't often see educational news on Slashdot, so this piqued my interest. Slashdot links to an EdSurge article where Coursera courses are described as going through beta testing by volunteers (unpaid labor...).
The beta tests cover things such as:
... catching mistakes in quizzes and pointing out befuddling bits of video lectures, which can then be clarified before professors release the course to students.
Fair enough, these are things that we tend to catch in developing our own (traditional) online courses as well, and that we fix or update in continuous offering cycles. The immediate comparison in this EdSurge article, quite explicitly, is between xMOOCs and traditional online courses. The article mentions rubrics like Quality Matters and SUNY's open-access OSCQR ("oscar") rubric for online 'quality'. One SUNY college is reportedly paying external people $150 per course for such reviews of their online courses, and the overall question seems to be: how do we get people to beta test their online courses?
This article did have me getting a bit of a Janeway facepalm when I read it (and when I read the associated comments). The first reason I had a negative reaction to this article is that it assumes that such checks don't happen. At the instructional design level there are (well, there are supposed to be) checks and balances for this type of testing. If an instructional designer is helping you design your course, you, as the faculty member, should be getting critical feedback on that course. In academic departments where only designers do the design and development (in consultation with the faculty member as the subject expert), the entire process is run by IDs, who should see to this testing and quality control. Even when faculty work on their own (without instructional designers), which is often the case in face-to-face courses, there are checks and balances. There are touch-points throughout the semester and at the end where you get feedback from your students, and you can update the materials and the course as needed. So, I don't buy the notion that courses aren't 'tested'.†
Furthermore, a senior instructional designer at SUNY is cited as saying that one of the challenges "has been figuring out incentives for professors or instructional designers to conduct the quality checks," but at the same time is quoted as saying "on most campuses, instructional designers have their hands full and don't have time to review the courses before they go live." You can't say (insinuate) that you are trying to coax someone into a specific task, and then say that those same individuals don't have enough time to do the task you are trying to coax them to do. When will they accomplish it? Maybe the solution is to hire more instructional designers. Maybe look at the tenure and promotion processes at your institution and see what can be done there to encourage better review/testing/development cycles for faculty who teach. Maybe hire designers who are also subject matter experts to work with those departments.‡
Another problem I have with this beta-testing analogy is that taught courses (as opposed to self-paced courses, which is what xMOOCs have become) have the benefit of a faculty member actually teaching the course, not just creating course packet material. Even multimodal course materials such as videos, podcasts, and animations are, in the end, a self-paced course packet if there isn't an actual person there tutoring or helping to guide you through that journey. When you have an actual human being teaching/instructing/facilitating/mentoring the course and the students in it, there is a certain degree of flexibility. You do want to test somewhat, but there are a lot of just-in-time fixes (or hot-fixes) as issues crop up. In a self-paced course you do want to test the heck out of the course to make sure that self-paced learners aren't stuck (especially when there is no other help!), but in a taught course, extensive testing is almost a waste of limited resources. The reason is that live courses (unlike self-paced courses and xMOOCs) are meant to be kept up to date and to evolve as new knowledge comes into the field (I deal mostly with graduate online courses). Hence, spending a lot of time and money testing courses where some component will change within the next 12-18 months is not a wise way to use a finite set of resources.
At the end of the day, I think it's important to critically query our underlying assumptions. When MOOCs were the new and shiny thing they were often (and wrongly) compared with traditional courses; they are not the same, and they don't have the same functional requirements. Now that MOOCs are 'innovating' in other areas, we want to make sure that these innovations are found elsewhere as well, but we don't stop to query whether the functional requirements and the environment are the same. Maybe for a 100-level (intro) course that doesn't change often, and that is taken by several hundred students per year (if not per semester), you DO spend the time to exhaustively test and redesign (and maybe those beta testers get 3 credits of their college studies for free!), but for courses that have the potential to change often and have fewer students, this is overkill. In the end, for me, it comes down to local knowledge and the prioritizing of limited resources. Instructional designers are a key element of this, and it's important that organizations utilize their skills effectively for the improvement of the organization as a whole.
† Yes, OK, there are faculty out there who have taught the same thing for the past 10 years without any change, even the same typos in their lecture notes! I hope that these folks are the exception in academia and not the norm.
‡ The comparison here is to the librarian world, where you have generalist librarians and librarians who also have subject matter expertise in the discipline they serve. Why not do the same for instructional designers?