Monday, August 31, 2015

MOOC Cheater! I caught you!

This past week the web was abuzz with new research out of Harvard and MIT on cheating identification in MOOCs, specifically xMOOCs hosted on the edX platform, but I suspect that any platform that collects the appropriate analytics could put this to use.  The title of the paper is Detecting and Preventing "Multiple-Account" Cheating in Massive Open Online Courses and it's an interesting read. I find it fascinating how data collected by web servers can be crunched to predict human behavior.  While I am more of a qualitative researcher at heart, I do appreciate the ways in which we can use math, data, and analytics to derive patterns.
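To make the idea concrete, here is a rough back-of-the-envelope sketch (in Python) of the kind of pattern one could mine submission logs for. To be clear, this is my own illustration, not the authors' actual CAMEO-detection algorithm, and the field names and thresholds are made up:

  # Illustrative sketch only: flag pairs of accounts where one account's correct
  # answers consistently arrive shortly after another account touched the same
  # question from the same IP address. Not the algorithm from the paper.
  from collections import defaultdict
  from itertools import combinations

  # Hypothetical log format: one dict per answer submission
  submissions = [
      {"user": "harvester01", "problem": "q1", "correct": False, "ip": "1.2.3.4", "ts": 100},
      {"user": "master01",    "problem": "q1", "correct": True,  "ip": "1.2.3.4", "ts": 160},
      {"user": "harvester01", "problem": "q2", "correct": False, "ip": "1.2.3.4", "ts": 300},
      {"user": "master01",    "problem": "q2", "correct": True,  "ip": "1.2.3.4", "ts": 340},
  ]

  WINDOW = 120      # seconds: a "suspiciously fast" gap between the two accounts
  MIN_MATCHES = 2   # how many co-occurrences before we flag a pair

  def flag_cameo_pairs(subs, window=WINDOW, min_matches=MIN_MATCHES):
      """Return account pairs whose submission patterns look CAMEO-like."""
      by_problem = defaultdict(list)
      for s in subs:
          by_problem[s["problem"]].append(s)

      pair_hits = defaultdict(int)
      for events in by_problem.values():
          events.sort(key=lambda s: s["ts"])
          for a, b in combinations(events, 2):
              same_ip = a["ip"] == b["ip"]
              fast = 0 <= b["ts"] - a["ts"] <= window
              # "a" plays the harvester, "b" the certificate-earning account
              if same_ip and fast and b["correct"] and a["user"] != b["user"]:
                  pair_hits[(a["user"], b["user"])] += 1

      return {pair: n for pair, n in pair_hits.items() if n >= min_matches}

  print(flag_cameo_pairs(submissions))  # {('harvester01', 'master01'): 2}

The actual paper is, I'm sure, far more careful than this; the point of the sketch is just that server logs plus a little bookkeeping get you surprisingly far.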

That said, my main argument with the authors of the article is not with the methods they use, but rather with the actual utility of such an algorithm.  The authors write that CAMEO (Copying Answers using Multiple Existences Online)† is a potential threat to MOOCs because

  1. CAMEO is highly accessible. Anyone can create additional accounts, harvest answers, and not be dependent on someone else to provide the cheater with the answers.
  2. Anyone can acquire certification for a MOOC in a single sitting.
  3. Therefore cheating via CAMEO can really lower the value of MOOC certificates, or just render them valueless.
  4. As an added bonus, the authors point out that CAMEO runs counter to the ToS of the various xMOOC providers.
While I think that the process is interesting, I think that the authors' cautionary tales are part FUD and part bunk.  Yes, CAMEO is accessible to everyone.  If I had nothing better to do I would most likely create a few more accounts on Coursera and edX so I could ace my tests.  So what? It doesn't mean that I learned anything, and on top of that edX institutions have little (or no) skin in the game.  The reason cheating is treated so seriously on campuses is that universities lend their names and reputations to the students who graduate from their schools. Thus the learners gain credibility by virtue of the credibility of the school.  I have not seen this happen in MOOCs yet.  MOOCs are not treated the same, at least as far as credibility goes, as traditional school environments.  I am not saying that they can't be, but they are not now.  In instances where we come closer to having the same level of skin in the game, we have verified certificates where people are proctored when they take these online exams.

The second issue, of being able to sit down in one sitting and get a certificate, is really a non-issue. Some people already know the material covered in a MOOC, but they don't have a way to show that they already know it.  Going through a MOOC where they can just sit down and take the assessments (if all of them are multiple choice anyway) means that in a relatively small time-span they can get some sort of acknowledgement of their prior knowledge.  There is nothing wrong with this.  It actually happened to me last summer.  I was interested in the Intro to Linux MOOC on edX.  Once the thing started I realized that through my peripheral Linux use over the past 15 years I already knew the basics.  The course wasn't directed toward me, but I ended up taking the tests and the exams (which seemed easy) and I passed the course way before the closing date.  I suppose that the course rekindled the Linux flame and got me using Ubuntu on a daily basis, but from a pure course perspective I could be considered a cheater if concern #2 is what pulls cheaters to the forefront.

Finally, the worry about diminishing the value of the certificate of completion... Well... I hate to burst your bubble, but I would argue that certificates of completion for MOOCs are nice little acknowledgements for the learner that the work was done; in real life they have little meaning to anyone but the learner. A certificate of completion may mean something to a current employer who may have asked you to undertake some sort of course, but it's really just a rubber stamp.  The rubber meets the road when you need to apply what you've learned, and neither a MOOC nor traditional corporate training (for that matter) can ensure that you can perform.  There need to be additional on-the-job support mechanisms available (if needed) to make this happen.  A certificate just means that you spent the requisite amount of time in front of a computer and got some passing grade on some assessment (well, maybe - some corporate trainings have no assessments!).  In the end I wouldn't worry about the diminished value of a certificate of completion because it has no value.

To be fair, the authors do talk about the limitations of their findings, such as only identifying suspected cheaters without confirming those suspicions against reality, but they also talk about the reality of trying to prevent "cheating" in MOOCs.

I would have found this paper much more interesting if it weren't so value-laden and steeped in preventing "cheating" in MOOCs.  Cheating, to me anyway, means that you have something substantive to gain by taking the shortcut.  In learning the only substantive thing to gain is the knowledge itself, and there is no shortcut for that (unless someone has invented a matrix-style knowledge dump machine and I can learn kung fu now).

Your thoughts?


NOTES:
† There is a line in the pilot episode of Marvel's Agents of S.H.I.E.L.D. where Agent Coulson asks Skye if she knows what Strategic Homeland Intervention, Enforcement and Logistics Division means, and she responds that someone really wanted it to spell SHIELD.  I couldn't help but think about this when CAMEO was spelled out...

Wednesday, August 26, 2015

Virtually Connecting at #digPed 2015 (Day 5)

 This is also cross-posted on VirtuallyConnecting.org



The final Virtually Connecting session of the DigPed Lab Institute (don't call it a conference!) was on Friday, August 14, 2015, and despite the fatigue as people crossed the finish line for this lab institute, we had an engaging and lively discussion for our vConnecting session!
Joining us in the virtual realm in this vConnecting session were my co-facilitator Autumm Caines (@autumm393), Greg McVerry (@jgmac1106), who was also joining us from EdCamp, Patrice Prusko (@ProfPatrice), Scott Robinson (@otterscotter), Stephanie Loomis (@mrsloomis), and Jen Ross (@jar).  Onsite we had our onsite vConnecting buddy, Andrea Rehn (@ProfRehn), as well as Amy Collier (@amcollier), who delivered the Friday keynote with Jesse Stommel, Hybrid Pedagogy's Chris Friend (@chris_friend), and Sonya Chaidez (@soniachaidez).
There were three broad topics of discussion: emergent learning, the connected notion of not-yetness, and "safe" spaces for learning.
Emergent learning, as we discussed, is a space of opportunity, as well as a space of resistance in higher education.  It's a place of resistance right now due to pressures faced in our environment: pressures brought on by measurement, pressures of clearly defined learning objectives, and pressure to get a handle on what "learning" really means. There is also pressure to get all learners to be in the same spot when they finish a course of study. This is problematic both because it doesn't account for variance among learners themselves and because it risks missing out on some really great opportunities that come from classroom diversity, as well as opportunities for exploratory learning that can pop up during a regular class session.
Amy talked a bit about three provocations in emergent learning:
  • Complexity is something we should strive for.  When we embrace it we can have excitement and joy around learning.
  • Measurement of learning, specifically the push for evidence-based teaching, has narrowed what we think of as "learning" and what "counts" as learning.  This has an impact on how we design and implement "learning".
  • Really strict and prescribed rubrics for measuring learning outcomes, or even the design of courses, and the use of rubrics such as Quality Matters, can really constrict how we design and approach learning. In addition to what Amy said, in my mind this also means that courses can end up looking pretty cookie-cutter regardless of the course being taught.
Relating to this, Chris described (one of) the drives of academics: that interest in going out there to find more knowledge, not necessarily to get specific, defined, and definitive answers, but knowledge.  This emergent and messy learning cannot easily be encapsulated by learning outcomes. To do this, learners also need to be able to embrace the uncertainty and indeterminacy of the learning experience, or put another way, its not-yetness. Learning is continuous and it does not have just one end, but it does have, potentially, many different checkpoints.
When we focus on just outcomes we sometimes miss and forget to ask the deep, and sometimes philosophical, questions. Questions such as: what is our goal? What is the purpose? What is it that we are trying to do, and for whom?
One of the interesting comments during this discussion was that emergent outcomes make students feel very nervous because they don't know the final outcomes at the beginning of the course. The way we've packaged education over the years brings up the analogy of a journey, and in many journeys we have maps to guide us (and, taken to an extreme, tours which also tell us exactly how much time we are spending and where).  By comparison, emergent learning might mean going on without a map and making your own.  This tension between the two extremes means that there is potential for pressure on the instructor. How do you plan for emergent learning while also working within the framework of accountability and, as Greg put it, being responsible to your learners, since that's the environment they will need to operate in?
Also, how does one design and implement instruction in a course, or series of courses, where the learner mindset might be "tell me what I need to know, so I can do it, get my degree, and move on"?  This linear progression from point A to point B doesn't provide a fertile environment for emergent learning, or does it?
In terms of safe learning spaces, a good point made by Chris is that you can't just create a safe classroom space by fiat. No place is truly risk-free, thus the inherent risks of spaces need to be discussed honestly with the participants of that space. As Amy said, instead of thinking in terms of safe spaces, we might want to reframe them as spaces of trust. Stephanie also brought up the important point of being able to agree to disagree with your peers.  While many people may do this, it is often done in a dismissive manner.  We should also be able to understand the other point of view even if we don't agree with it, not just write off the other person for not sharing our views.
Finally, the value of emergent goals is much more visible and potent once a final reflection is done by learners. This is where they can "see" for themselves how much more they have learned through an emergent approach as compared to simply reading certain parts of the textbook each week and doing something with it.   This was an interesting vConnecting session.  If you're interested in any of these topics, engage with us at @vconnecting on Twitter.

Monday, August 24, 2015

How to teach swarming?


The other day I came across a post on someone's blog on group work, and I saw this funny (but true, at least in most of my experiences) graphic on group work.  One of the soft skills required to graduate in the MEd program I teach in is the ability to demonstrate working with others on projects and joint efforts.  This is quite broad, as it doesn't specify whether someone is cooperating all the time, collaborating all the time, or choosing based on the situation and working accordingly.

So, given my experiences working with others, in school, at work, and through extracurricular activities like Rhizo, I thought that it would be good to have a mix of individual activities and group activities in the course I just finished teaching.  This seemed to have worked out well enough.  As with any team project no one seems to come out of the activity without some minor bruising; working with others is a contact sport, at least as far as the ego is concerned†.  So, I was trying to figure out how to best approach these group efforts in the courses I teach.  My ideal would be to have some sort of swarming activity happen, in a manner similar to what I experienced in the Rhizo group.  Instead of thinking of my part and your part, I am thinking of our project.

I wasn't taught how to swarm, and I suspect that others in Rhizo weren't either.  We sort of figured it out as we went through the various projects we've collaborated on over this time frame.  The question is: how do I operationalize it for my course? And how do I help others swarm when learning how to swarm is not really part of the learning objectives?

Getting back to group work, the concerns that have come up over the past 3 years I've been teaching can be classified as follows:
  • one person's grade is based on group performance, so there is FUD about individual grades
  • one person feels that they are contributing more than their fair share 
  • I work from time-x to time-y, why can't my group members work at the same time? 
It seems that the round-robin approach and the synchronous contact approach are the two most common approaches for group work.  That makes sense, because most learners learn to do this in a face-to-face setting: in between course sessions for the round-robin, and either before or after class when the face-to-face course is in session.  Swarming is different.  People don't wait for one person to finish before going in; they work concurrently. This has the effect of potentially obfuscating who does what, which is another concern.

As a research collaborator in the Rhizo group I have never worried that I do a ton of the work while others freeload. On the contrary, sometimes I feel like the slacker of the group and that I don't contribute enough (so many smart people in that group).  I've grown accustomed, at least a little, to being able to contribute as much as I can, when I can. I don't know how long the Rhizo group will last in its current form, but fluidity seems to be the keyword with the swarm, so I am convinced that when I have more time to devote I will definitely do so.  In a 13-week class, however, how does one make group members comfortable with varied levels of participation?  Does each learner need to contribute exactly the same as their peers?  Grading rubrics would tell us yes.  The pursuit of objectivity, while a good pursuit to have (IMHO), also means treating learners exactly the same, and I am not sure that's the right approach*.

Unequal effort, or at least the perception of unequal effort, also means that learners worry they will get a bad grade not because of their own work, but because their teammates will let them down.  This leads people to want to work on their own.

So, back to swarming.  My course is over.  I probably won't be teaching it for another year. In the fall semester I am supervising the capstone project, and in the spring I am taking off from teaching (I can only teach 2 courses per year) and focusing on my final EdD course before I start my dissertation seminars (EDDE804). This means that I have time to think about this and brainstorm.  How would you help learners (in an introductory course) to unwind and swarm for their groupwork? 


NOTE:
† a little connection to the "checking the ego at the door" discussion we've had with Rhizo folks :-)

* Interesting side note.  Before I started teaching, and in my first few courses taught, I tended to be more of a rubric person. Everything needed a detailed rubric.  While I value rubrics now, I am less "dependent" on them. I haven't been teaching long (in the grand scheme of things), but I do feel that there are more valuable experiences to be had by going off-road sometimes...

Friday, August 21, 2015

Some thoughts on Peer Reviewed writing...

Pondering like it's 1999
It seems like forever ago that Sarah H. posted a link to an article on Times Higher Education titled The worst piece of peer review I've ever received. The article doesn't seem to be behind a paywall, so it's worth going and having a read either before or after you read this blog post.  As I was reading this article my own thoughts about peer review, and now being a journal editor, sort of surfaced anew. I wish I had taken some notes while I was reading so that this blog post could be richer, but I'll just have to go from memory.

One of the things that stood out to me was this: if your peer reviewers are not happy with your submission you are doing something right. OK, this is quite paraphrased, but I hope I retained the original meaning.  I am not so sure I agree with this.  I've done peer review for articles, and the times I am not happy (well, "convinced" would be a better word) are when there are methodological issues, or logical fallacies, or the author hasn't done a good enough review of the literature.  In thinking of my role as a peer reviewer, or even a journal editor (still doesn't feel "real"), my main goal isn't to dis someone's work. My goal is geared more toward understanding.  For instance, if an article I review has logical fallacies in it, or is hard to follow (even if logical), then what hope is there for the broader journal audience if I have problems with the article? I see the role of the editor and the reviewer NOT as gatekeeper but as a counselor. Someone who can help you get better "performance" (for lack of a better word).

Now this article brought up some other general areas as well which I have made into categories:

Peer Review as quality assurance
This concept to me  is completely bunk. It assumes, to some extent, that all knowledge is known and therefore you can have reasonable quality assurance.  The truth is that we research and publish because all knowledge isn't known and we are in search of it.  This means that what we "know" to be "true" today may be invalidated in the future by other researchers.  Peer review is about due diligence and making sure that the logic followed in the article is sound.  We try to match articles with subject experts because the "experts" tend to read more about that topic and can act as advisors for researchers who are not always that deep into things (everyone needs to start somewhere, no?).

Peer Reviewers are Experts, or the experts
I guess it depends on how you define expertise.  These days I am asked to peer review papers on MOOCs because I am an expert. However, I feel a bit like a fraud at times.  Because I've been working on projects with the Rhizo Team, and I've been pursuing my doctorate, my extracurricular reading on MOOCs has drastically declined.  I have read a lot on MOOCs, but I still have a drawer full of research articles on MOOCs that I have only read the abstracts of. The question that I have then is this: how current should an expert be?  Does the expert need to be on the bleeding edge of research, or can they lag behind by 18 months?

Validity of peer review
Peer review is seen as a way of validating research. I think that this, too, is bunk.  Again, unless I am working with the team that did the research, or try to replicate it, I can't validate it.  The best I can do is to ask questions and try to get clarifications.  Most articles are 6,000-9,000 words. That is often a very small window through which we look to see what people have discovered. It encompasses not only the literature review and the methods, but also the findings and the further research section.  That's a lot!  I also think that the peer reviewer's axiology plays a crucial role in whether your research is viewed as valid or not.  It's funny to read in class about the quant vs. qual "battles".  Now that this is over with (to some extent anyway), the battle rages as to what is an appropriate venue for publication, and the venue determines the value of the piece you authored.  If your sources are not peer-reviewed articles, but rather researched blog posts from experts in the field, all that some peer reviewers will see is blog posts, and those are without value to them.  To some extent it seems to me that peer reviewers are outsourcing the credibility question.  If we see blog posts in the citation list, the work is thrust upon us to verify what people are using as their arguments (which makes more work for peer reviewers). If something is in a peer-reviewed journal we can be lazier and assume that the work passes muster (then again, I've seen people claim that I support the concept of digital natives when in fact I was quoting Prensky and setting up an argument against the notion... laziness).

Anonymity in Peer Review
I think anonymity is an issue.  Peer review should never be anonymous.  I don't think that we can ever reach a point of impartial objectivity, and as such we can never be non-biased.  I think that we need to own our biases and work toward having them not influence our decisions. I also think that anonymous peer reviews, instead of encouraging open discussion, are just walls behind which potential bad actors can hide. I think it's the job of editors to weed out those bad actors, and there should be standards for review where both strong and weak aspects of the article can be addressed.

Peer Review as a yay or nay
Peer review systems have basically three decisions: accept with minor revisions, accept with major revisions, or reject.  While this may have worked in the print days of journals and research, it doesn't work today - or at least it doesn't work for me.  Peer reviewers are stuck with a yay or nay decision on articles, and so are journal editors. There are articles on which I've spent time giving feedback to the authors (as a peer reviewer).  Since it wasn't a minor revision, I chose "major" revision.  Other peer reviewers either selected major revisions or rejection.  There have been cases where the major revisions warranted a re-evaluation of the article (IMHO) after the revisions were done, but the articles were rejected by the editors.  I don't know if the editors of those journals had more submissions than they knew what to do with, but having peer review as a yay/nay decision seems quite wrong to me.  I believe that if resources exist to re-review an article after updates are made, the journal should re-review.


Peer Review Systems suck
This was something that was brought up in the THE article as well. My dream peer review system would provide me with something like a Google Docs interface where I could easily go and highlight areas, add commentary in the margins, and provide people with additional readings that could help them.  The way systems work now, while I can upload a document, I can't necessarily work easily in a word processor to add comments. What I often get are PDFs, and those aren't easy to annotate.  Even if I annotate them, extracting those comments is a pain for the authors. The systems seem built for an approve/deny framework and not for a mentoring and review framework.


Time-to-publication is insane
I hate to bring this up, but I have to, and at the same time I feel guilty as a journal editor. In my ideal world I would accept an article for review, have people review it, and if it passes muster (either right away or eventually) it would go up on a website ready to be viewed by readers.  The reality is that articles come in, and I get to them when I have free time.  Getting peer reviewers is also time consuming because not everyone responds right away, so there is some lag there.  If there are enough article candidates for an issue of the journal, I get to these sooner.  If there are only one or two submissions I get to them later.  I would love to be able to get to them right away, but the semiotics of academic journals favor the volume# issue# structure, which implies that at least x-many articles need to be included in every issue. Given the semiotics of the IT system that publishes our journal, I feel a bit odd putting out an issue with one or two articles at a time.

So, I, and other researchers, will work hard to put together something, only to have it waiting in a review queue for months. This is just wrong. However - at least on my end - it's also a balancing of duties. I do the journal editing on top of the job that pays the bills, so journal editing is not my priority at the moment.  I also want to work on my own exploration of ideas with people like the rhizo folks, so that also eats up my time (eats up my time sounds so negative, I actually like working with the rhizo folks - alternative words for this are welcomed in the comments section of this blog post).  I would hazard a guess that other journal editors, who do editing for free, also have similar issues. So, do we opt for paid editors or do we re-envision what it means to research and publish academic pieces?

I think I wrote a lot.  So, I'll end this post here and ask you: what are your thoughts on this process?  How can we fix it?

Wednesday, August 19, 2015

The past is calling, it wants its disruption back!

Another story I had in my Pocket account (for what seems like forever) is this story from Campus Technology talking about how nano-degrees are disrupting higher education.  I don't know about you, but it seems to me that people just love the word disrupt, or the pairing disruptive innovation.  I have a feeling that in 10-15 years, when we're past this fad, we will look back at this time period with the same sense of nostalgia with which we look upon movies made in the 80s (you know, all of the movies that have synth music playing).

Regardless of whether you call it a nanodegree, an x-series set of courses, or a certificate, this concept isn't new, and the article acknowledges as much. Certificates have been around for quite some time, and both higher education institutions and professional associations offer a wide variety of certification options for learners.  The professional associations, such as ATD or SHRM for example, should in theory have their finger on the pulse of the industry, and they should be providing the "innovation" that these nanodegrees supposedly provide. Academia might be accused of being a bit out of touch with industry, but in the past decade we've seen a proliferation of degrees and certificates that aim to close that gap ("Homeland Security Studies" anyone?).

One of the things that worries me is the following rationale:
"We boil things down to their essence," he said. "That's kind of what a nanodegree is. We're telling students, this is exactly what you need to know to be in that job. And we absolutely have to deliver that.
The quote, which comes from Udacity's COO, is best paired with comments I've often seen from CEOs saying that college graduates don't know how to do basic things.  And here is where a conflicting duality becomes apparent (at least to me).  The more specific a degree program is†, the less widely applicable it is in the long run. You have a box and you're in it.  The broader a degree program is, the more versatile it is in the long run; however, you still need to be on-boarded as an employee to know what the organizational norms are.  It seems, to me at least, that employers want to have their cake and eat it too!  They want employees who are ready to hit the ground running, to be hired with that organizational know-how already there, and when those employees don't know this information ahead of time they are branded as incompetents who don't know "basic" things. I'd like to see a comparison of how different companies define "basic".

It is obvious that the sweet spot is somewhere in the middle. You want your graduates, and your employees, to have some sort of head start and to know some aspects of your company in order to ease the transition into a new environment, but you aren't hiring drones. You are hiring inquisitive (hopefully) critical individuals who can think outside of the box.  The box is there for now, but it isn't going to be there forever.  I think that branding certificates as nanodegrees and reinventing the wheel won't help learners and it won't help industry.  My (off the cuff) solution is for companies to work more closely with academia on placement. Instead of college being a Borg maturation chamber for youth, why not blend learning with actual work? If a college degree takes 4 years to complete, why not tie in a part-time job at a firm while you're in school, and have embedded college advisors and counselors at the firm to help students acclimate to the work environment and apply what they are learning to what they do?

Certificates are fine and dandy, but they won't solve your issue.  I can train you to set up an Active Directory for your organization, but without knowing your own organization's norms, your AD setup can ultimately fail.

Your thoughts?



NOTES:
† degree program here is any sort of formal program, be it a BA, MA, PhD, certificate, nanodegree, whatever.

Monday, August 17, 2015

Have you registered your badge?


When the Rhizo Team (well, a subset of the Rhizo Team) and I worked on the article Writing the Unreadable Untext for Hybrid Pedagogy, we used Wordsworth's phrase "We murder to dissect". If memory serves me right it was Sarah H. who initially brought this idea forward... or was it Keith? † That's the beauty of swarm writing: individual credit evaporates and it's what we accomplish together that feeds back to us as individuals.

In any case, it is this phrase that came to mind as I was reading a story on Campus Technology titled New Registry Will Demystify Badges, Credentials and Degrees, where the main crux of the story is that academia and industry are teaming up to create a registry with the intent of demystifying the value of different degrees, credentials, certifications, and so on. From the news story:
The registry "will allow users to easily compare the quality and value of workforce credentials, such as college degrees and industry certifications, using a Web-based system with information provided directly by the institutions issuing the credentials,"
This raised a bit of an eyebrow.  The first thing that came to mind is how much will this cost, and what is the ultimate benefit?  I am not talking about the cost of setting up the system, but rather, much like going gambling in Vegas, how much it will cost individual credentialing agents to be part of that conversation.  For example, let's assume that I run a training center where I train individuals on Microsoft Windows Server, or Active Directory.  I already give out a certificate of participation for those who make it through the steps, but I also want to give out badges - some more granular than others. Who will add those credentials to the registry? Is it me? Or is it someone else?  Who vets those credentials?  Is there a system of peer review, or can you just take my word?  And how much does it cost to be listed?  The reason cost comes to mind is that for online programs, in some states (such as Alabama, Arkansas, Maryland, and Minnesota) you need to be registered with the state, which costs money, and in some cases you need to post collateral to be registered (I guess in case someone sues you).

My point is, how fair would such a system be?  Would it really demystify alternative credentialing, or would it just reinforce the existing power structures that we have, with academia and professional organizations as credentialing bodies?  Isn't the point of an alternative credential that we are not working within the existing power structures and are looking for valuable alternatives to the way we do things now?  Do we murder our own initiatives in order to "demystify" them and compare them 1:1 with what already exists in the system?

Your thoughts?


NOTES:
† Sarah H. informs me that it was Maha who brought up the quote :)

Thursday, August 13, 2015

Measuring Learning

I know... I know... This is perhaps a tricky question to answer, but bear with me here. Perhaps the answer to the question of "how do we measure learning" is "well, d'uh! With some sort of test or assessment."  This might be true in one-off training, where you visibly see employees either performing or not performing, but when it comes to a higher education context, what does it mean to have been badged, branded, or certified (whatever term you use) as having had an education?  In higher education we measure "learning" through a credit hour system. But what the heck does that mean? Well, I know what it means and its history, but how the heck does that connect to learning?

There are three critical incidents that got me thinking about this today.  First is a conversation I had with a prospective student for my day job. The person who was inquiring about our program was asking how many weeks our courses run each semester.  When I informed them that our programs run on a semester basis and courses run for 13 weeks, this person was perplexed as to why the courses were 3 credits and not 5. This in turn perplexed me, which opened the door for an interesting mental exercise.  The potential student, it turns out, is used to an 8-week course structure for 3 credits, and so they reasonably assumed that all schools do the same thing. There was also an assumption (a folk explanation, but a good one given the amount of data they had) that credits are based on the number of weeks a course runs.

For those who don't know, in the US a credit hour is defined as a minimum of 3 hours of student effort per week for 15 weeks for 1 college credit. This means that a 3-credit course will require a minimum of 135 hours of effort on the part of learners.  This, however, is the minimum.  A more realistic amount is 4 hours of effort per week for each credit, which brings the total for a 3-credit course to 180 hours.  Now, your mileage may vary.  If you are taking a course in which you already know some stuff, your hours may be less.  If you are wicked smaht it might take less.  If you are like me (going off on tangents to explore interesting things) it might take more.  That said, the definition in the US ultimately boils down to the number of hours a student has put in for those credits.  So, a 30-credit graduate degree from my department is something like 1,400 hours - assuming you put in the minimum amount of time during class, and throw in some token time to study for your comprehensive exams. A bit short of Gladwell's 10,000-hour rule, but getting there ;-)

When I was an undergraduate, and even a graduate student, I didn't give this any thought.  I needed something like 120 credits for my undergraduate degree, and anywhere between 30 and 54 credits for each graduate degree I earned.  I never really thought about what those credits meant, just that I needed them.  Courses were assigned a certain number of credits, so at times I just scratched courses off my list like a prison inmate.  For my graduate degrees it didn't feel that bad, but my undergrad felt like I needed to put in my time.  Once those courses were done I banked my credits and moved on to the next course. Now that I find myself a college lecturer and a graduate program manager, what credits mean and how to measure learning are much more important to me than when I was a student (ironic, eh?).

This whole situation reminded me of something that happened at work last year (incident #2). One of our alumnae was applying for a PhD program in Europe (#woot!) and she needed a document from us certifying that she had completed a certain number of graduate-level hours for her Masters degree in Applied Linguistics.  I was expecting to be able to find some sort of US-to-ECTS conversion calculator and be done with it, but it was more complicated than that.  Europe runs on a credit system as well.  Doing a little digging on the interwebs, I found Masters programs rated anywhere between 30 and 60 ECTS.  One ECTS, at least in Spain, is 25 hours of effort (in Greece it's 30), so the hours of effort for a European MA vary between 750 and 1,500.  This is still based on effort put in, and not actual learning.  Students are still banking credits like I did.
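If you want to see how rough the comparison is, here's the back-of-the-envelope arithmetic as a quick Python sketch, using only the effort definitions mentioned above (45 hours minimum per US credit, 25-30 hours per ECTS credit) - nothing official about it:

  # Back-of-the-envelope effort comparison between US credit hours and ECTS,
  # using the rough definitions discussed above (not an official conversion).
  US_HOURS_PER_CREDIT_MIN = 3 * 15        # 3 hrs/week * 15 weeks = 45 hours
  US_HOURS_PER_CREDIT_REALISTIC = 4 * 15  # a more realistic 4 hrs/week = 60 hours
  ECTS_HOURS_SPAIN = 25
  ECTS_HOURS_GREECE = 30

  def us_degree_hours(credits, hours_per_credit=US_HOURS_PER_CREDIT_MIN):
      return credits * hours_per_credit

  def ects_degree_hours(ects, hours_per_ects=ECTS_HOURS_SPAIN):
      return ects * hours_per_ects

  # A 30-credit US Masters at the minimum effort definition:
  print(us_degree_hours(30))                                  # 1350 hours
  # The same degree at the more realistic 4 hrs/week per credit:
  print(us_degree_hours(30, US_HOURS_PER_CREDIT_REALISTIC))   # 1800 hours
  # A European MA rated between 30 and 60 ECTS (Spanish definition):
  print(ects_degree_hours(30), ects_degree_hours(60))         # 750 1500

Either way you slice it, both systems are counting hours of effort banked, not learning demonstrated.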

Then, the third incident was a few weeks ago when I blogged about Campus Technology, one of the keynotes for this year, and competency-based education. This isn't the first time there was something about competency-based education in a Campus Tech conference keynote; a few years back there was a keynote by someone from Western Governors University.  The CBE model doesn't seem to be based on time spent on task, or effort put in by students. To some extent it seems to be more connected to what students can do rather than how much time they spend doing it.  That said, I am still left wondering how we go about measuring learning - measuring what students have learned so we can certify them with confidence, reasonably assured that we have done our due diligence to prepare minds for the world outside of our own sphere of influence.

Even with the CBE model, it seems to me that there is a certain degree of banking going on.  I've gone through, I've completed x-many courses, I still have y-many courses left to complete to get my degree in Z. It seems to me that the discussions around curriculum are still constrained by the notion that undergrad degrees look like xyz and require abc of effort.  MA degrees, regardless of discipline, look like xyz and require abc of effort.  And so on.  I think that this isn't just a matter of assessment, but also a matter of curriculum, but it all comes back to the same question for me: how do you measure something that may or may not be measurable?

Thoughts?

Friday, August 7, 2015

The Ethics of open online research

In my continuous quest to get to Pocket-Zero (it may be a losing battle since I keep adding interesting stuff to read), I came across a post from a friend and colleague, Rebecca, who was discussing and brainstorming a bit about the ethics of research on Twitter communities. As a quick synopsis, the hot-button issue (at least as I interpreted it) was that in one instance (mature) researchers were researching a more general hashtag on Twitter and this seemed to be OK, while in another instance a younger researcher (a high school student) was researching a hashtag specific to breast cancer and social media, and a level of trust seemed to have been breached.  So, the question is: what is fair game in social media research? Specifically, Rebecca asks: What are the ethical obligations of anyone wishing to conduct research/analysis on a twitter community of care? Are the obligations different if the community is not care based? (e.g. #lrnchat).

My initial thinking is that care-based communities are not necessarily different from communities that are not thought of as care-based. For instance, one of the reasons I joined social media through blogging (many... many... many... moons ago) was that I really felt undervalued at work. I continued to blog and contribute to Twitter communities not only because it was an avenue for expression (before the academic expression in this particular blog) but because I felt part of a community.  In this sense, even though the community I was a member of was not necessarily a care community, it became a support community for me.  While communities may have a certain mission, such as bringing together people who are doing their doctorate, bringing adjuncts together, or bringing together individuals who suffer from a particular ailment, I think that care of different types, a feeling of belonging, and a feeling of support are the underlying reasons why we choose to join communities.  Because of this, I think that #bcsm and #lrnchat (for instance) fall under the same rubric for me.

I also think that if we are blogging, tweeting, posting on Facebook (and so on) in the open, then those posts are fair game for researchers unless we specifically say to people (through our Twitter or blogger profiles) that what we post is not authorized for research purposes unless express permission is sought. I have no problem throwing an "all rights reserved" (as far as research goes) on my tweets or posts if I want to feel a sense of protection. This could be a viable solution for this matter.  It doesn't mean that our tweets won't be scraped by big data collectors, but we can tell people to exclude our tweets from their data set. This isn't a perfect system because it puts the onus on the subject, and not the researcher, to police it - but I do think it's a start.  As a researcher (well, some days I feel like a three-year-old child in his dad's shoes when it comes to research, but anyway) I wouldn't think twice about using data that is on the open internet for research.  I would make an effort to see if there are any terms of use associated with that data, such as communities prohibiting research unless otherwise authorized, but other than that - as I said - it feels like fair game.
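To show what I mean on the researcher's side, here's a hypothetical sketch of honoring such an opt-out after the data has been scraped. The "#noresearch" marker and the data fields are entirely made up - no such convention actually exists - but the mechanics would be this simple:

  # Hypothetical sketch: drop posts whose author's profile carries an opt-out
  # marker. The marker and field names are invented for illustration only.
  OPT_OUT_MARKER = "#noresearch"

  scraped_posts = [
      {"user": "alice", "bio": "Educator. Blogger. #noresearch", "text": "Thoughts on MOOCs..."},
      {"user": "bob",   "bio": "Lifelong learner",               "text": "Great #lrnchat today!"},
  ]

  def filter_opted_out(posts, marker=OPT_OUT_MARKER):
      """Keep only posts from authors who have not opted out of research use."""
      return [p for p in posts if marker.lower() not in p["bio"].lower()]

  research_corpus = filter_opted_out(scraped_posts)
  print([p["user"] for p in research_corpus])  # ['bob'] - alice's posts are excluded

Of course, the subject still has to declare the opt-out and the researcher still has to choose to honor it, which is exactly the imperfection I mentioned above.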

That said, I do think that there are issues with this high school kid who did this research, and this ties in to a conversation I had with another friend, Carrie, last week when we were talking about the most recent LAK conference.  The conversation with Carrie centered around data nerds (a term used quite affectionately) who were studying for their MA and, as part of their studies, doing some analytics research. The data models were interesting, but what were these people going to do after graduation?  They did not know how to apply this knowledge to solve real-world problems.  Part of solving real-world problems is working with human beings and realizing that human beings are not numbers.  We can't just be quantified; there are qualitative components, including emotional issues, such as feeling violated and perturbed by certain behaviors of others.

My concern is that when young adults do such research, which seems to focus on the data side and not the human side, they start treating humans as numbers. These young researchers are geeking out on the math and the social network analysis, but they are failing to account for the fact that on the other side of the line you have living, breathing, feeling human beings.  If there is such a disconnect between the science and the way the researcher's actions impact the lives of the research subjects - as anonymized as they may be through aggregate social media data - then we have a problem.  To some extent my knee-jerk reaction to this is to think of curriculum for such young adult researchers as starting from the qualitative perspective instead of the quantitative perspective, and to tackle the pragmatic, ethical, and philosophical issues that impact the way we act.

Rebecca, in her post, was wondering if how she felt was an over-reaction.  I don't feel qualified to judge that,  but I can judge my own feelings when someone says that they read my blog or follow me on twitter.  On a cognitive level I understand that by writing in such a public forum I will have people who read me.  My analytics tell me that each post gets around 70 readers (or around 200 if the post is really popular).  When people comment on my blog posts I know that they've read the post because they are making reference to it in their comments.

Even so, when I see someone outside of the blog context, in an office meeting or in the classroom, I get weirded out for a moment when they tell me that they read my blog, or that they enjoyed my post on whatever topic they were referencing. This is a gut reaction. On a cognitive level I get that there are more people who read my posts than people who comment on them. On an emotional level, when I engage with the readers here (as many, or as few, as there might actually be), this context (this specific URL, or Twitter when I post links to my posts) feels like an appropriate one, whereas in real life - in a job interview, in a meeting, or even when meeting former students who read my blog - it feels out of sync to bring up my blog.  Cognitively I don't think it is out of sync, but my initial reaction (the first few seconds before the logical part of my brain kicks in) is that this is off in some way.  I am wondering if there is psychological research on this topic, because I find my own reactions to the revelation that people read my blog pretty interesting.

Your thoughts?  What do you think of Rebecca's questions on the ethics of twitter research?


Monday, August 3, 2015

What's the point of (higher) education?

With Campus Technology behind us, I've got some free time to compose some thoughts on what I experienced this year in Boston.  I like going to Campus Tech each year as I have an opportunity to attend some sessions, see what the EdTech vendors are up to, and meet with new and existing colleagues.  One of the keynotes this year, by the SNHU (Southern New Hampshire University) president, was really unsettling.

Whereas the keynotes in previous years seemed to hint toward innovation in higher education, this particular keynote, under the guise of disruptive innovation in higher education, seemed to hint more toward a commodification of higher education, a de-professionalization of many types of jobs in the field, and a process for teaching and learning that reminded me of an industrial-age model of education. This was a bit jarring to me, as a regular attendee (and twitter reporter) of Campus Technology each year.  On the one hand, Paul LeBlanc (SNHU's president) did sufficiently stir the waters and got enough people talking about the state of academia; however, I am not sure his innovation is really innovative or a sustainable direction for an institution.

LeBlanc had a variety of main points on which he built his argument. One of his points, and one of the faults he sees in the current system, is that faculty drive the process at traditional schools. Faculty think up new courses, which go through a governance approval process, which is also time consuming.  SNHU's approach seems to be to disaggregate the faculty: let "SMEs" and IDs take care of the course and curriculum design, and hire instructors to teach the same course without variation. This was troubling on a variety of levels.  First, LeBlanc seems to be hinting that faculty are not subject experts.  I disagree with this.  Faculty are hired precisely because they are subject experts.  They, in theory, know their field and keep up with it.  They are in one of the better positions to think up new curriculum or modifications to the existing curriculum.

I agree that the governance process can take ridiculously long at times (oh, the stories I could share!), but that doesn't mean that we ought to get rid of this system. To me what it means is that we need to look at our current system and see how it can be made more efficient, not completely dismantle it.  LeBlanc's rationale for his approach is that faculty don't know what's happening outside the walls of academia, and therefore the only people who can actually inform the curriculum with what's really needed are subject experts (which weren't defined, by the way) from outside the walls of academia.

This narrative isn't something that LeBlanc alone reports. Anant Agarwal, of edX fame, in a recent article in Fortune cites a Deloitte survey that "found that the overwhelming majority of respondents felt it was on-the-job skills—not what they had learned in college — that got them through their daily workload. The study concluded that there was a significant gap between the skills desired by workplaces and what those polled had actually possessed by graduation." This connects with LeBlanc's comments that CEOs say that college graduates don't have the qualities their companies are looking for.  I guess the solution is to let the CEOs (companies, really) specify what they need. I call a bit of BS on this, though.  CEOs are often not the best judges of what skills front-line employees need to have.  To ask them what skills undergraduates need for their companies is to ask for misinformed opinions.

Another aspect of the presentation was time-to-completion as a key factor.  While I do agree that time-to-completion is important (heck, nowhere is this more evident than in the 8-10 year liberal arts PhD!), I do think that we are diluting learning into cram-and-jam sessions when we advertise one-year Masters degrees (just as an example).  I see this at work.  A number of potential students ask if our degree can be completed in a year.  The answer is: no. I suppose that theoretically it could be: 4 classes in the fall, 4 in the spring, comprehensive exams in the spring, and 2 electives in the summer, and you're done.  But what have you really learned?  You've crammed just enough to write those papers and take those exams, but have you really learned anything?  Never mind application, because application supposes that you've learned something. In cases like these I suspect that people just want some sheepskin to show that they've "learned" something so that they can get their raise, or change jobs, or whatever.

This time-to-degree argument fits in with the 'adults are busy' narrative.  Adults have lives, jobs, responsibilities; they just don't have time for the "long road" to education.  While I also hate pedantry in my own educational experiences, two things bother me about this attitude.  First, it treats traditional education as some sort of jail where you must put in your time (while singing "nobody knows my sorrow"), and it treats these accelerated "for adults" degrees the same way Alan Thicke presents Tahiti Village in this ad.  Education can be hard, it can take time to acquire, and it needs effort to apply.  Most everything else that can be broken down into discrete steps to follow, and is applicable to a specific job, is really on-the-job training.

The other thing that bugs me is that there is still an infantilization of the traditional college-goer, the 18-year-old student. According to LeBlanc and other supporters of the distinction between "adults" and "kids," traditional college-age individuals need an 'incubation' or 'maturation' environment before they hit real life, and college is it. I think that treating 18-year-olds like they need maturation is complete and utter baloney.  As a first-generation college graduate I know that my parents (and indeed many friends and family in our circle) didn't go to college to mature. They had jobs, they had family responsibilities, societal responsibilities, at age 18 (some even earlier).  This infantilization of the traditional college demographic does harm to them, and to us.
It only serves as an artificial separation of one group of students - the ones who had a break in their education - from the students who went straight from high school to college. Instead of infantilizing one group, how about defining academic supports that are unique to each group?

This keynote reminded me of another, equally ridiculous, post on why MOOCs will fail to displace traditional universities (I was not aware they were competing with one another).  The author's main theses are that MOOCs aren't dating sites whereas colleges are - and people apparently attend college to find a mate - and that college is a signal to potential employers that you (a) can get into college, so you must be wicked smaht, and (b) have the perseverance to make it through, so they should hire you. What a bunch of BS.

The first fault here is the issue of causation vs correlation.   The author cites Assortative Mating [which can basically be boiled down to individuals who are alike pairing up] as a reason why colleges have additional value for people who attend - they provide an environment of other smaht people as potential mates. The issue here is that college isn't the only environment where people pair up. The workplace, and any other place that creates a community is a place where people can meet others with similar interests, hobbies, political thought, education, and whatnot. Could we claim that people search for jobs in order to find mates? Some might, but I don't think this is generalizable.

My own experience with college was that it was really a requirement.  Forget K-12; K-16 seems to be the new expectation, and one that not everyone can afford!  Someone made it a requirement, but also forgot to make it free. Making college a requirement for many jobs is, as I have said before, sloppy HRM.

So, I'll end this post with the same question I started with: what is the point of higher education? Thoughts?