Quality of MOOCs?
Continuing with the review of articles in the book Macro-Level Learning through Massive Open Online Courses (MOOCs): Strategies and Predictions for the Future, today I have a chapter dealing with the quality of MOOCs.
Chapter 2 is titled Quality Assurance for Massive Open Access Online Courses: Building on the Old to Create Something New. The abstract tells us:
Institutional quality assurance frameworks enable systematic reporting of traditional higher education courses against agreed standards. However, their ability to adequately evaluate quality of a MOOC has not been explored in depth. This chapter, Quality Assurance for Massive Open access Online Courses – building on the old to create something new, explores the added learning and teaching dimensions that MOOCs offer and the limitations of existing frameworks. Many components of a MOOC are similar to traditional courses and, thus, aspects of quality assurance frameworks directly apply, however they fail to connect with the global, unrestricted reach of an open learning and teaching platform. The chapter uses the University of Tasmania's first MOOC, Understanding Dementia, as a case. MOOC-specific quality assurance dimensions are presented in an expanded framework, to which the Understanding Dementia MOOC is mapped, to demonstrate its usefulness to a sector grappling with this new learning and teaching modality. This chapter continues the commentary on – Policy issues in MOOCs Design, through the topic of ‘quality issues critical comparison – contrasting old with new.'
This was an interesting chapter, not so much because of the MOOC angle, but because it taught me more about accreditation and peer review in an Australian context. The MOOC angle seemed... a little off. Two big questions came up as I was reading:
- Why does an institution offer MOOCs?
- How does one measure 'quality' in an educational context?
Now, I know that we have frameworks available to us as educators to quantify the 'quality' of our online courses. One prime example is Quality Matters. However, I think that all quantified means of measuring human learning fall short. I've passed many courses in my days as a learner (especially in required undergrad courses) where I just checked items off the list. I knew the lines I was expected to paint within, and I did so proficiently enough to pass tests. Quality-wise, I guess that means the course was good, since I passed it and the course had gone through the requisite steps of both internal and external review, but it doesn't mean I learned anything.
One of the authors' proposals is that MOOC business models have failed to reflect 'reality' because they have not been formally integrated into university frameworks through quality assurance. I didn't see anything in this chapter that supports this hypothesis. Quality is a tricky thing. Unfortunately, for education I don't think there is one simple solution to obtaining and measuring quality. We have, in my opinion, come up with a system that tries to keep honest people honest, but I don't think this system of peer review, internal and external review, and course evaluation is any real indication of quality. Quality seems a bit elusive as a concept because it means different things to different people.
The type of quality we see described in traditional contexts is that of design: making sure that (a) goals and objectives match (b) the instructional activities, that (c) assessments tie back to objectives, and that the materials used tie back to a + b + c. This is a simplified view, but it's all about connecting the dots in course design. Actual learning and application once the class is over is not usually something we can test. In the parlance of Kirkpatrick's model of evaluation, we undertake level 1 and level 2 evaluations, but we are not able to conduct level 3 or level 4 evaluations, which would require access to the learners after the fact for further testing. In graduate programs where there might be a final capstone, portfolio, or comprehensive exam, you might be able to conduct level 3 evaluations to some extent, but that's about it.
So, when we're talking about "quality" in MOOCs it's important to figure out what we mean by quality. The other thing that makes MOOCs, in my opinion, a bit harder to assess, especially in implementation, is the variability of learners in the course. In traditional courses we know that a course needs a minimum number of students to run (a business decision), so faculty can plan potential activities knowing the lower and upper limits. In a MOOC this is pretty hard because registrations mean nothing. How are outcomes measured when there is so much potential flux?
In terms of making the decision to offer a MOOC, the big question is why universities do this. What's in it for them? The public education and access mission of some schools might be a reason, but given the costs of making a MOOC described by the authors, why go through these steps? Why not focus on OER development or something cheaper? I am sure that there is still hope for the academic YouTube channel ;-) The authors write, rightly so, that MOOCs are not an easy path to revenue, so I am curious as to the reasons institutions decide to offer these MOOCs (other than the "they are new and shiny, and we must participate!" type of reason).
Going back to quality assurance, the authors claim that the "traditional approach of utilising external peer review to ensure that the course level learning outcomes are appropriately calibrated still has merit in the MOOC environment". To a small extent I agree, if you are talking about specific xMOOCs with specific outcomes and specific limitations. However, I am reminded of a comment a friend and colleague (Maha B) made somewhere online (twitter? blog? facebook?) about feeling constrained when she had to fully develop the course structure of a (traditional) online course before the course started. This didn't leave much flexibility for learner interests. I see where Maha is coming from, and while that degree of openness makes me nervous, for experienced educators I keep an open mind.
Personally, I like everything planned ahead of time for two reasons: (1) I know the overall path I've designed, I can work with it to help guide novice learners along the rails, and I can defend the design when it comes to a curriculum committee; and (2) having something on rails helps learners plan their semester. That said, I do not like being rigid in my teaching: just because we have a roadmap doesn't mean we can't take the path less travelled, or even go off the road. This little sidebar was with regard to 'regular' courses.
With MOOCs, given that they are a form of online education we are still studying in its nascent state, it seems odd to try to pigeonhole them into a rigid structure that was built to ensure that college credit was worth something comparable between institutions. MOOCs are not credit-bearing courses. They are optional, free, open to entry and exit, and they don't award any college credit. So why slice and dice them using measurements created for credit-bearing courses when their actual ethos and purpose are not the same? Furthermore, MOOCs (again depending on the course) can be largely undeveloped at the beginning. There can be connective threads going from week to week, but the entire course structure need not be completed from the outset. This is one of those constraints that exists with credit-bearing courses, but there is no reason for it to exist with MOOCs.
In the end, I don't think the concept of 'quality' in a MOOC will ever elicit a unified definition of what it looks like.
Thoughts?
Citations:
Walls, J., Kelder, J., King, C., Booth, S., & Sadler, D. (2015). Quality Assurance for Massive Open Access Online Courses: Building on the Old to Create Something New. In E. McKay, & J. Lenarcic (Eds.) Macro-Level Learning through Massive Open Online Courses (MOOCs): Strategies and Predictions for the Future (pp. 25-47). Hershey, PA: Information Science Reference. doi:10.4018/978-1-4666-8324-2.ch002