NOTE: this is a repost of the post I wrote for Sloan-C back in November of 2013. I am reposting here as a backup. The original can be found here http://blog.sloanconsortium.org/2013/11/18/mooc-evaluation-beyond-the-certificate-of-completion/
This coming January will be my third year of involvement in MOOCs.
Questions have come up in the last year around the issue of why students
“drop out” and how to better retain students. Tied to these questions
is the issue of evaluation of learners and learning in MOOCs. At this
point, I’ve witnessed at least three different kinds of MOOCs, and they
all approach evaluation somewhat differently.
During my first year, all of the MOOCs I participated in were cMOOCs. These included, among others, LAK11 (Learning Analytics), CCK11 (Connectivism), and MobiMOOC (mLearning).
These MOOCs had no component for evaluating learner knowledge acquisition
because they were focused on community and emergent learning. This meant
individuals set their own goals, within the framework of the course, and
worked out a plan to attain those goals. In
the end, they were only accountable to themselves and any sponsors they
might have had for participating in the learning activity. This lack of
external accountability earned cMOOCs
the nickname “massive open online conferences” instead of their
original “massive open online courses.” This was OK, as far as I was
concerned, because I was happy to learn new things rather than having to
prove them on a piece of paper.
When Stanford, MIT, Harvard, and other universities dubbed “elite”
decided to join the game, learner evaluations came into the picture, and
in a systematic way. This is partly because these courses were
converted from existing campus courses, where evaluations were the
norm. This brought up new considerations such as: How does one evaluate
hundreds of thousands of students? Even in a “mini-massive” course of
hundreds of students there is an issue in evaluation because it takes so
darned long. As a result, automated testing, by way of multiple-choice
quizzes, and peer reviews entered the picture. Big data and
crowdsourcing also seemed to provide answers. In the end, you received a nice
little certificate of participation if your overall grade was above a
certain percentage. In this sphere, the “with distinction” mark was also
available for students who went above and beyond the minimum
requirements. As I’ve written elsewhere, as the requirements vary from
course to course, the “with distinction” mark means little since there
is no standard rubric for it.
Now, we’ve seen other MOOC practices emerge. One recent category is the project-based MOOC (or pMOOC). The OLDS MOOC and, more recently one could argue, Mozilla’s Open badge MOOC
fall into this category. In this type of MOOC, participants work on a
project (or projects) throughout their involvement in the MOOC. The
projects receive improvement-oriented feedback from peers, or
they are evaluated by a team. The work seems substantial enough to
keep achievement hunters (those just looking for a quick path through
the MOOC in order to get a piece of paper, or a badge) at bay.
The question of learner evaluation in MOOC environments is quite big.
Yet, it all comes back to one fundamental question: What is the final
outcome of your MOOC? The “C” in MOOC stands for “course,” and we have
this notion in our heads that courses have evaluations and grades. Perhaps
it’s time to reassess this aspect, just as we need to reassess the
significance of retention rates in MOOCs. Some self-check feedback is
probably worthwhile in any course, MOOC or not. In smaller courses,
establishing that you are on the right path might be as simple as a
discussion forum or discussion with peers and the instructor, so no test
is needed. In MOOCs, depending on the subject, some automated testing
may help. Peer reviews (not peer grading) may help in building a
community of learners who scaffold each other’s learning.
Evaluation as a means of self-check has its place. The proof,
however, of whether you can put this knowledge to use is in practice. A
piece of paper saying you participated in a MOOC is, for now, not worth
the paper it’s printed on. Institutions offering MOOCs do not give you
credit for the course, other institutions don’t accept it for credit,
and no one recognizes, at this point, that piece of paper. Even
Coursera’s signature track, with proctored exams, does not yet gain
recognition. So, at the end of the day, if learners aren’t getting some
external recognition of their learning, what is the point of formal
graded evaluations in MOOCs? I would argue that it’s time to go back to
the drawing board. When designing MOOCs, do a learner and learning
outcome analysis, and work toward developing MOOCs that make sense
for that environment. Then work on evaluation mechanisms that make sense
for your stated course goals.
What are your thoughts on the subject?