Monday, August 31, 2015

MOOC Cheater! I caught you!

This past week the web was abuzz with new research out of Harvard and MIT on identifying cheating in MOOCs, specifically xMOOCs hosted on the edX platform, though I suspect that any platform that collects the appropriate analytics could apply the same technique.  The title of the paper is Detecting and Preventing "Multiple-Account" Cheating in Massive Open Online Courses and it's an interesting read. I find the ways of crunching data collected by web servers to predict human behavior fascinating.  While I am more of a qualitative researcher at heart, I do appreciate the ways in which we can use math, data, and analytics to derive patterns.
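As a back-of-the-envelope illustration of the kind of pattern-mining involved, here is a toy Python sketch (entirely my own, with hypothetical field names and thresholds; not the paper's actual algorithm) that flags pairs of accounts where one account's answer lookups are repeatedly followed, within a short window, by another account's correct submissions:

```python
from collections import defaultdict

def flag_cameo_pairs(events, max_gap=300.0, min_questions=3):
    """Flag (harvester, master) account pairs where the master's correct
    submission repeatedly follows the harvester's answer lookup on the
    same question within `max_gap` seconds.

    events: iterable of (account, question, action, timestamp) tuples,
    where action is 'show_answer' or 'correct'.
    """
    reveals = defaultdict(list)   # question -> [(timestamp, account)]
    corrects = defaultdict(list)  # question -> [(timestamp, account)]
    for acct, q, action, ts in events:
        if action == 'show_answer':
            reveals[q].append((ts, acct))
        elif action == 'correct':
            corrects[q].append((ts, acct))

    # Count, per account pair, the questions where a correct answer
    # closely followed the other account's answer lookup.
    pair_hits = defaultdict(set)  # (harvester, master) -> {questions}
    for q, submissions in corrects.items():
        for c_ts, master in submissions:
            for r_ts, harvester in reveals[q]:
                if harvester != master and 0 <= c_ts - r_ts <= max_gap:
                    pair_hits[(harvester, master)].add(q)

    # Only flag pairs where the pattern repeats across several questions.
    return {pair for pair, qs in pair_hits.items() if len(qs) >= min_questions}
```

In the paper's terms, the first account would be the "harvester" and the second the certificate-earning "master." A real detector would likely need additional signals (shared IP addresses, session data) and statistical controls to avoid false positives.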

That said, my main quarrel with the authors of the article is not with the methods they use, but rather with the actual utility of such an algorithm.  The authors write that CAMEO (Copying Answers using Multiple Existences Online)† is a potential threat to MOOCs because

  1. CAMEO is highly accessible. Anyone can create additional accounts, harvest answers, and not be dependent on someone else to provide the cheater with the answers.
  2. Anyone can sit down in one sitting and acquire certification for a MOOC.
  3. Therefore cheating via CAMEO can really lower the value of MOOC certificates, or just render them valueless.
  4. As an added bonus, the authors point out, CAMEO is counter to the ToS of the various xMOOC providers.
While I think that the process is interesting, I think that the authors' cautionary tales are part FUD and part bunk.  Yes, CAMEO is accessible to everyone.  If I had nothing better to do I would most likely create a few more accounts on Coursera and edX so I could ace my tests.  So what? It doesn't mean that I learned anything, and on top of that edX institutions have little (or no) skin in the game.  The reason cheating is treated so seriously on campuses is that universities lend their names and reputations to the students who graduate from their schools. Thus the learners gain credibility by virtue of the credibility of the school.  I have not seen this happen in MOOCs yet.  MOOCs are not treated the same, at least as far as credibility goes, as traditional school environments.  I am not saying that they can't be, but they are not now.  In instances where we come closer to having the same level of skin in the game, we have verified certificates where people are proctored when they take these online exams.

The second issue, of being able to sit down in one sitting and get a certificate, is really a non-issue. Some people already know the material covered in a MOOC, but they don't have a way to demonstrate that knowledge.  Going through a MOOC where they can just sit down and take the assessments (if all of them are multiple choice anyway) means that in a relatively small time-span they can get some sort of acknowledgement of their prior knowledge.  There is nothing wrong with this.  This actually happened to me last summer.  I was interested in the Intro to Linux MOOC on edX.  Once the thing started I realized that through my peripheral Linux use over the past 15 years I already knew the basics.  The course wasn't directed toward me, but I ended up taking the tests and the exams (which seemed easy) and I passed the course way before the closing date.  I suppose that the course rekindled the Linux flame and got me using Ubuntu on a daily basis, but from a course perspective alone I could be considered a cheater if concern #2 is one thing that pulls cheaters to the forefront.

Finally, the worry about diminishing the value of the certificate of completion... Well... I hate to burst your bubble, but I would argue that certificates of completion for MOOCs are nice little acknowledgements for the learner that the work was done, but in real life they have little meaning to anyone but the learner. A certificate of completion may mean something to a current employer who may have asked you to undertake some sort of course, but it's really just a rubber stamp.  The rubber meets the road when you need to apply what you've learned, and neither a MOOC nor traditional corporate training (for that matter) can ensure that you can perform.  There need to be additional on-the-job support mechanisms available (if needed) to make this happen.  A certificate just means that you spent the requisite amount of time in front of a computer and got a passing grade on some assessment (well, maybe - some corporate trainings have no assessments!).  In the end I wouldn't worry about the diminished value of a certificate of completion, because it has no value.

To be fair, the authors do talk about the limitations of their findings, such as only having suspected cheaters and not having confirmed that those suspects actually cheated, but they also talk about the reality of trying to prevent "cheating" in MOOCs.

I would have found this paper much more interesting if it weren't so value-laden and steeped in preventing "cheating" in MOOCs.  Cheating, to me anyway, means that you have something substantive to gain by taking the shortcut.  In learning, the only substantive thing to gain is the knowledge itself, and there is no shortcut for that (unless someone has invented a Matrix-style knowledge-dump machine and I can learn kung fu now).

Your thoughts?


NOTES:
† There is a line in the pilot episode of Marvel's Agents of S.H.I.E.L.D. when Agent Coulson asks Skye if she knows what Strategic Homeland Intervention, Enforcement and Logistics Division means, and she responds that someone really wanted it to spell SHIELD.  I couldn't help but think about this when CAMEO was spelled out.

Wednesday, August 26, 2015

Virtually Connecting at #digPed 2015 (Day 5)

This is also cross-posted on VirtuallyConnecting.org



The final Virtually Connecting session of the DigPed Lab Institute (don't call it a conference!) was on Friday, August 14, 2015, and despite the fatigue as people crossed the finish line for this lab institute, we had an engaging and lively discussion for our vConnecting session!
Joining us in the virtual realm in this vConnecting session were my co-facilitator Autumm Caines (@autumm393), Greg McVerry (@jgmac1106), who was also joining us from EdCamp, Patrice Prusko (@ProfPatrice), Scott Robinson (@otterscotter), Stephanie Loomis (@mrsloomis), and Jen Ross (@jar).  Onsite we had our onsite vConnecting buddy, Andrea Rehn (@ProfRehn), as well as Amy Collier (@amcollier), who delivered the Friday keynote with Jesse Stommel, Hybrid Pedagogy's Chris Friend (@chris_friend), and Sonya Chaidez (@soniachaidez).
There were three broad topics of discussion: emergent learning, the connected notion of not-yetness, and "safe" spaces for learning.
Emergent learning, as we discussed, is a space of opportunity as well as a space of resistance in higher education.  It's a place of resistance right now due to pressures faced in our environment: pressures brought on by measurement, pressures of clearly defined learning objectives, and pressure to get a handle on what "learning" really means. There is also pressure to get all learners to the same spot when they finish a course of study. This is problematic both because it doesn't account for variance among the learners themselves, and because it risks missing out on some really great opportunities that arise from classroom diversity, and on opportunities for exploratory learning that can pop up during a regular class session.
Amy talked a bit about three provocations in emergent learning:
  • Complexity is something we should strive for.  When we embrace it we can have excitement and joy around learning.
  • Measurement of learning, specifically the push for evidence-based teaching, has narrowed what we think of as "learning" and what "counts" as learning.  This has an impact on how we design and implement "learning".
  • Really strict and prescribed rubrics for measuring learning outcomes, or even the design of courses, and the use of rubrics such as Quality Matters, can really constrict how we design and approach learning. In addition to what Amy said, in my mind this also means that courses can look pretty cookie-cutter regardless of the course being taught.
Relating to this, Chris described (one of) the drives of academics: that interest in going out there to find more knowledge, not necessarily to get specific, defined, and definitive answers, but knowledge.  This emergent and messy learning cannot easily be encapsulated by learning outcomes. For this to happen, learners need to be able to embrace the uncertainty and undetermined-ness of the learning experience, or, put another way, its not-yetness. Learning is continuous; it does not have just one end, but it does have, potentially, many different checkpoints.
When we focus on just outcomes, we sometimes forget to ask the deep, and sometimes philosophical, questions. Questions such as: what is our goal? What is the purpose? What is it that we are trying to do, and for whom?
One of the interesting comments during this discussion was that emergent outcomes make students feel very nervous because they don't know the final outcomes at the beginning of the course. The way we've packaged education over the years brings up the analogy of a journey, and in many journeys we have maps to guide us (and, taken to an extreme, tours, which also tell us exactly how much time we are spending and where).  By comparison, emergent learning might mean going on without a map and making your own.  This tension between the two extremes means that there is potential for pressure on the instructor. How do you both plan for emergent learning and, as Greg put it, work within the framework of accountability, being responsible to your learners since that's the environment they will need to operate in?
Also, how does one design and implement instruction in a course, or series of courses, where the learner mindset might be "tell me what I need to know, so I can do it, get my degree, and move on"? This linear progression from point A to point B doesn't provide a fertile environment for emergent learning, or does it?
In terms of safe learning spaces, a good point made by Chris is that you can't just create a safe classroom space by fiat. No place is truly risk-free, thus the inherent risks of spaces need to be discussed honestly with the participants of that space. As Amy said, instead of thinking in terms of safe spaces, we might want to reframe it as a space of trust. Stephanie also brought up the important point of being able to agree to disagree with your peers.  While many people may do this, it may be done in a dismissive manner.  We should also be able to understand the other point of view even if we don't agree with it, not just write off the other person for not sharing our views.
Finally, the value of emergent goals is much more visible and potent once a final reflection is done by learners. This is where they can "see" for themselves how much more they have learned through an emergent approach as compared to simply reading certain parts of the textbook each week and doing something with it.  This was an interesting vConnecting session.  If you're interested in any of these topics, engage with us at @vconnecting on Twitter.

Monday, August 24, 2015

How to teach swarming?


The other day I came across a post on someone's blog about group work, and I saw this funny (but true, at least in most of my experiences) graphic about group work.  One of the soft skills required to graduate from the MEd program I teach in is to be able to demonstrate the ability to work with others on projects and joint efforts.  This is quite broad, as it doesn't specify whether someone is cooperating all the time, collaborating all the time, or choosing the situation and working accordingly.

So, given my experiences working with others, in school, at work, and through extracurricular activities like Rhizo, I thought that it would be good to have a mix of individual activities and group activities in the course I just finished teaching.  This seemed to have worked out well enough.  As with any team project no one seems to come out of the activity without some minor bruising; working with others is a contact sport, at least as far as the ego is concerned†.  So, I was trying to figure out how to best approach these group efforts in the courses I teach.  My ideal would be to have some sort of swarming activity happen, in a manner similar to what I experienced in the Rhizo group.  Instead of thinking of my part and your part, I am thinking of our project.

I wasn't taught how to swarm, and I suspect that others in Rhizo weren't either.  We sort of figured it out as we went through the various projects we've collaborated on over this time frame.  The question is: how do I operationalize it for my course? And how do I help others swarm when learning how to swarm is not really part of the learning objectives?

Getting back to group work, the concerns that have come up over the past 3 years I've been teaching can be classified as follows:
  • a person's grade is based on group performance, so there is FUD about individual grades
  • one person feels that they are contributing more than their fair share 
  • I work from time-x to time-y, why can't my group members work at the same time? 
It seems that the round-robin approach and the synchronous contact approach are the two most common approaches to group work.  It makes sense, because most learners learn to do this in a face-to-face setting: in-between course sessions for the round-robin, and either before or after class when the face-to-face course is in session.  Swarming is different.  People don't wait for one person to finish before going in; they work concurrently. This has the effect of potentially obfuscating who does what, which is another concern.

As a research collaborator in the Rhizo group I have never been worried that I do a ton of the work while others freeload. On the contrary, sometimes I feel like the slacker of the group and that I don't contribute enough (so many smart people in that group).  I've grown accustomed, at least a little, to being able to contribute as much as I can, when I can. I don't know how long the Rhizo group will last in its current form, but fluidity seems to be the keyword with the swarm, so I am convinced that when I have more time to devote I will definitely do so.  In a class of 13 weeks, however, how does one make group members comfortable with varied levels of participation?  Does each learner need to be exactly the same as their peers?  Grading rubrics would tell us yes.  The pursuit of objectivity, while a good pursuit to have (IMHO), also means treating learners exactly the same, and I am not sure that's the right approach*.

Unequal effort, or at least the perception of unequal effort, also means that there is a worry on the part of learners that they will get a bad grade not because of their work, but because their teammates will let them down.  This leads people to want to work on their own.

So, back to swarming.  My course is over.  I probably won't be teaching it for another year. In the fall semester I am supervising the capstone project, and in the spring I am taking off from teaching (I can only teach 2 courses per year) and focusing on my final EdD course before I start my dissertation seminars (EDDE804). This means that I have time to think about this and brainstorm.  How would you help learners (in an introductory course) to unwind and swarm for their groupwork? 


NOTE:
† a little connection to the "checking the ego at the door" discussion we've had with Rhizo folks :-)

* Interesting side note.  Before I started teaching, and in my first few courses taught, I tended to be more of a rubric person. Everything needed a detailed rubric.  While I value rubrics now, I am less "dependent" on them. I haven't been teaching long (in the grand scheme of things), but I do feel that there are more valuable experiences to be had by going off-road sometimes...

Friday, August 21, 2015

Some thoughts on Peer Reviewed writing...

Pondering like it's 1999
It seems like forever ago that Sarah H. posted a link to an article on Times Higher Education titled The worst piece of peer review I've ever received. The article doesn't seem to be behind a paywall, so it's worth going and having a read either before or after you read this blog post.  As I was reading this article my own thoughts about peer review, and now being a journal editor, sort of surfaced anew. I wish I had taken some notes while I was reading so that this blog post could be richer, but I'll just have to go from memory.

One of the things that stood out to me was this: if your peer reviewers are not happy with your submission, you are doing something right. OK, this is quite paraphrased, but I hope I retained the original meaning.  I am not so sure I agree with this.  I've done peer review for articles, and when I am not happy (well, "convinced" would be a better word) it is because there are methodological issues, or logical fallacies, or the author hasn't done a good enough review of the literature.  In thinking of my role as a peer reviewer, or even a journal editor (it still doesn't feel "real"), my main goal isn't to dis someone's work. My goal is geared more toward understanding.  For instance, if an article I review has logical fallacies in it, or is hard to follow (even if logical), then what hope is there for the broader journal audience if I have problems with the article? I see the role of the editor and the reviewer NOT as gatekeeper but as counselor: someone who can help you get better "performance" (for lack of a better word).

Now this article brought up some other general areas as well which I have made into categories:

Peer Review as quality assurance
This concept, to me, is completely bunk. It assumes, to some extent, that all knowledge is known and therefore you can have reasonable quality assurance.  The truth is that we research and publish because all knowledge isn't known and we are in search of it.  This means that what we "know" to be "true" today may be invalidated in the future by other researchers.  Peer review is about due diligence and making sure that the logic followed in the article is sound.  We try to match articles with subject experts because the "experts" tend to read more about that topic and can act as advisors for researchers who are not always that deep into things (everyone needs to start somewhere, no?).

Peer Reviewers are Experts, or the experts
I guess it depends on how you define expertise.  These days I am asked to peer review papers on MOOCs because I am an expert. However, I feel a bit like a fraud at times.  Because I've been working on projects with the Rhizo Team, and I've been pursuing my doctorate, my extracurricular reading on MOOCs has drastically declined.  I have read a lot on MOOCs, but I still have a drawer full of research articles on MOOCs of which I have only read the abstracts. The question I have, then, is this: How current should an expert be?  Does the expert need to be on the bleeding edge of research, or can they lag behind by 18 months?

Validity of peer review
Peer review is seen as a way of validating research. I think that this, too, is bunk.  Again, unless I am working with the team that did the research, or I try to replicate it, I can't validate it.  The best I can do is ask questions and try to get clarifications.  Most articles are 6,000-9,000 words. That is often a very small window through which we look to see what people have discovered, encompassing not only the literature review and the methods, but also the findings and the further research section.  That's a lot to fit!  I also think that the peer reviewer's axiology plays a crucial role in whether your research is viewed as valid or not.  It's funny to read in class about the quant vs. qual "battles".  Now that this is over with (to some extent anyway), the battle rages over what is an appropriate venue for publication, and the venue determines the value of the piece you authored.  If your sources are not peer-reviewed articles but rather well-researched blog posts from experts in the field, all that some peer reviewers will see is blog posts, and those are without value to them.  To some extent it seems to me that peer reviewers are outsourcing the credibility question.  If we see blog posts in the citation list, the work is thrust upon us to verify what people are using as their arguments (which makes more work for peer reviewers). If something is in a peer-reviewed journal we can be lazier and assume that the work passes muster (then again, I've seen people claim that I support the concept of digital natives when in fact I was quoting Prensky and setting up an argument against the notion... laziness).

Anonymity in Peer Review
I think anonymity is an issue.  Peer review should never be anonymous.  I don't think that we can ever reach a point of impartial objectivity, and as such we can never be unbiased.  I think that we need to own our biases and work toward having them not influence our decisions. I also think that anonymous peer reviews, instead of encouraging open discussion, are just walls behind which potential bad actors can hide. I think it's the job of editors to weed out those bad actors, and there should be standards for review where both strong and weak aspects of an article can be addressed.

Peer Review as a yay or nay
Peer review systems have basically three decisions: accept with minor revisions, accept with major revisions, and reject.  While this may have worked in the print days of journals and research, it doesn't work today - or at least it doesn't work for me.  Peer reviewers are stuck with a yay-or-nay decision on articles, and so are journal editors. There are articles I've spent time giving feedback on to the authors (as a peer reviewer).  Since it wasn't a minor revision, I chose "major" revision.  Other peer reviewers either selected major revisions or rejection.  There have been cases where the major revisions warranted a re-evaluation of the article (IMHO) after the revisions were done, but they were rejected by the editors.  I don't know if the editors of those journals had more submissions than they knew what to do with, but having peer review as a yay/nay decision seems quite wrong to me.  I believe that if resources exist to re-review an article after updates are made, the journal should re-review.


Peer Review Systems suck
This was something that was brought up in the THE article as well. My dream peer review system would provide me with something like a Google Docs interface where I could easily highlight areas, add commentary in the margins, and provide people with additional readings that could help them.  The way systems work now, while I can upload a document, I can't easily work in a word processor to add comments. What I often get are PDFs, and those aren't easy to annotate.  Even if I annotate them, extracting those comments is a pain for the authors. The systems seem built for an approve/deny framework and not for a mentoring and review framework.


Time-to-publication is insane
I hate to bring this up, but I have to, and at the same time I feel guilty as a journal editor. In my ideal world I would accept an article for review, have people review it, and if it passes muster (either right away or eventually) it would go up on a website ready to be viewed by readers.  The reality is that articles come in and I get to them when I have free time.  Getting peer reviewers is also time-consuming because not everyone responds right away, so there is some lag there.  If there are enough article candidates for an issue of the journal, I get to these sooner.  If there are only one or two submissions I get to them later.  I would love to be able to get to them right away, but the semiotics of academic journals favor the volume#/issue# structure, which implies that at least x-many articles need to be included in every issue. Given the semiotics of the IT system that publishes our journal, I feel a bit odd putting out an issue with one or two articles at a time.

So I, and other researchers, will work hard to put together something, only to have it wait in a review queue for months. This is just wrong. However - at least on my end - it's also a balancing of duties. I do the journal editing on top of the job that pays the bills, so journal editing is not my priority at the moment.  I also want to work on my own exploration of ideas with people like the Rhizo folks, so that also eats up my time ("eats up my time" sounds so negative; I actually like working with the Rhizo folks - alternative words for this are welcomed in the comments section of this blog post).  I would hazard a guess that other journal editors who do editing for free have similar issues. So, do we opt for paid editors or do we re-envision what it means to research and publish academic pieces?

I think I wrote a lot.  So, I'll end this post here and ask you: what are your thoughts on this process?  How can we fix it?

Wednesday, August 19, 2015

The past is calling, it wants its disruption back!

Another story I had in my Pocket account (for what seems like forever) is this story from Campus Technology about how nanodegrees are disrupting higher education.  I don't know about you, but it seems to me that people just love the word disrupt, or the pairing disruptive innovation.  I have a feeling that in 10-15 years, when we're past this fad, we will look back at this time period with the same sense of nostalgia with which we look upon movies made in the 80s (you know, all of the movies that have synth music playing).

Regardless of whether you call it a nanodegree, an x-series set of courses, or a certificate, this concept isn't new, and the article points to that fact. Certificates have been around for quite some time, and both higher education institutions and professional associations offer a wide variety of certification options for learners.  The professional associations, such as ATD or SHRM, in theory should have their finger on the pulse of the industry, and they should be providing the "innovation" that these nanodegrees possibly provide. Academia might be accused of being a bit out of touch with industry, but in the past decade we've seen a proliferation of degrees and certificates that aim to close that gap ("Homeland Security Studies," anyone?).

One of the things that worries me is the following rationale:
"We boil things down to their essence," he said. "That's kind of what a nanodegree is. We're telling students, this is exactly what you need to know to be in that job. And we absolutely have to deliver that.
The quote, which comes from Udacity's COO, is best paired with comments I've often seen from CEOs who say that college graduates don't know how to do basic things.  And here is where a conflicting duality becomes apparent (at least to me).  The more specific a degree program is†, the less widely applicable it is in the long run. You have a box and you're in it.  The broader a degree program is, the more versatile it is in the long run; however, you still need to be on-boarded as an employee to learn the organizational norms.  It seems, to me at least, that employers want to have their cake and eat it too!  They want employees who are ready to hit the ground running, hired with that organizational know-how already in place, and when those employees don't know this information ahead of time they are branded as incompetents who don't know "basic" things. I'd like to see a comparison of how different companies define "basic".

It is obvious that the sweet spot is somewhere in the middle. You want your graduates, and your employees, to start with some sort of head start, knowing some aspects of your company to ease the transition into a new environment, but you aren't hiring drones. You are hiring inquisitive (hopefully), critical individuals who can think outside of the box.  The box is there for now, but it isn't going to be there forever.  I think that branding certificates as nanodegrees and reinventing the wheel won't help learners and it won't help industry.  My (off-the-cuff) solution is for companies to work more closely with academia on placement. Instead of college being a Borg maturation chamber for youth, why not blend learning with actual work? If a college degree takes 4 years to complete, why not tie in a part-time job at a firm while students are in school, and have embedded college advisors and counselors in the firm to help students acclimate to the work environment and apply what they are learning to what they do?

Certificates are fine and dandy, but they won't solve your issue.  I can train you to set up an Active Directory for your organization, but without knowing your own organization's norms, your AD setup can ultimately fail.

Your thoughts?



NOTES:
† degree program here is any sort of formal program, be it a BA, MA, PhD, certificate, nanodegree, whatever.