Assessment in MOOCs
The more chapters I read in Macro-Level Learning through Massive Open Online Courses (MOOCs): Strategies and Predictions for the Future, the more I am starting to feel like Anton Ego from the animated movie Ratatouille ;-) It's not that I am aiming to write harsh reviews of the things I read; it's just that the anticipation I bring to some of the published work on MOOCs isn't being met with a matching level of satisfaction when I actually read it.
This time I am reviewing chapter 7, which is titled Beyond the Phenomenon: Assessment in Massive Open Online Courses (MOOCs). The abstract is as follows:
MOOC course offerings and enrollments continue to show an upward spiral with an increasing focus on completion rates. The completion rates of below 10 percent in MOOCs pose a serious challenge in designing effective pedagogical techniques and evolving assessment criterion for such a large population of learners. With more institutions jumping on the bandwagon to offer MOOCs, is completion rate the sole criterion to measure performance and learning outcomes in a MOOC? Learner interaction is central to knowledge creation and a key component of measuring learning outcomes in a MOOC. What are the alternate assessment techniques to measure performance and learning outcomes in a MOOC? MOOCs provide tremendous opportunity to explore emerging technologies to achieve learning outcomes. This chapter looks beyond the popularity of MOOCs by focusing on the assessment trends and analyzing their sustainability in the context of the MOOC phenomenon. The chapter continues the discussion on ‘ePedagogy and interactive MOOCs' relating to ‘performance measurement issues.'
When I was a student in Applied Linguistics, some of my professors had us write essays to practice answering questions in our field while staying on-point: not meandering, and not including material that just wasn't connected to the question asked. This was an invaluable exercise, and it helped hone my skills as a writer. Receiving peer review was also important, but I think that in-class experience was really fundamental. It is this type of feedback that I find missing in several of the chapters I've read thus far.
For example, this chapter (according to the abstract) asks what the alternative assessment techniques for MOOCs are. This is a good question! The author writes about the (on average) 10% completion rate in xMOOCs (the author never specifies that these are xMOOCs, but you can tell from the text), and argues that other measures of what it means to complete a MOOC are necessary. I completely agree with this position. However, the author never really defines how completion is measured in those xMOOC contexts, and that's what I find problematic. How can one start talking about alternatives (or additions) to current assessment when the current state is never defined, i.e. what assessment actually takes place in MOOCs to derive that magical completion number? This is particularly important because the author (in the solutions & recommendations section!) then goes on to describe CPR (Calibrated Peer Review) and AES (automated essay scoring), which are used in Coursera and edX MOOCs. These are tools used now to determine whether someone has (or has not) done what they need to do to be counted as a completer. Describing them doesn't really move the needle in terms of thinking about alternative (and alternatives to) assessment in MOOCs.
The author talks (encyclopedically) about proctoring, MOOCs for credit, verified certificates, different MOOC 'types' (DOCC, BOOC, LOOC, SPOC, MOOR, SMOC), and digital badges (just to name a few things), but all of this is really disconnected from assessment in general. Proctoring, credit, verified certs, and badges are by-products of assessment (not assessment types), and MOOC types don't really contribute much to the assessment discussion. The language of MOOCs (i.e. the predominance of English) is discussed, but only to suggest that existing assessment instruments (which only yield a 10% completion rate) be translated. OK, I don't disagree, but couldn't there be more substantive discussion here? How would this help more learners complete the MOOC? And is a higher completion rate even what we are looking for? Or is there a more nuanced understanding of learning, assessment, and completion in MOOCs to be had?
I did chuckle a bit when I read that "a discussion forum is the main course component for active learner interactions and course participation in an online learning environment including a MOOC" (p. 125). While xMOOCs tend to have forums, not all MOOCs are forum-driven, and I would say that forums aren't the main locus of course activity. For one thing, claiming this frames MOOCs with the same frame as a certain type of online and distance education course. It also predisposes one to think of activity (and what is assessable) in specific ways: ways defined by existing learning environments whose designs are shaped by other underlying factors.
I really wanted to like this chapter (I really did :-) ), but between the disconnected information and the failure to deliver what was promised (or at least what I read into the abstract), it's hard to call it a must-read. That said, if you completely ignore the chapter's title and its abstract, it's not a bad summary of current and potential topics to consider in the credentialing of MOOCs.
Have you read this? Your thoughts?
CITATION:
Chauhan, A. (2015). Beyond the Phenomenon: Assessment in Massive Open Online Courses (MOOCs). In E. McKay & J. Lenarcic (Eds.), Macro-Level Learning through Massive Open Online Courses (MOOCs): Strategies and Predictions for the Future (pp. 119-140). Hershey, PA: Information Science Reference. doi:10.4018/978-1-4666-8324-2.ch007