Friday, December 20, 2013

2013 MOOC Learnings

Apple's Clarus the dogcow,
and his "Moof!" bark
Well, it's the end of 2013 and it's been a MOOC-kinda year, so before I head off for a small break (which is probably going to involve a lot of MOOCing), I thought I should write a summative post for my year's exploits in MOOCs.

2013, other than being the year of the Anti-MOOC (according to some), was really the year of the xMOOC for me.  I participated in a lot of xMOOCs and got to see how different organizations approach courses that are online and have, potentially, a large number of participants.  Most of my MOOC experiences were Coursera-based (it seems they are at the top of the hill at the moment), but I did expand my horizons by taking a course on edX on the Ancient Greek Hero, a Harvard course, and a couple of courses through the Virtual Linguistics Campus, which are offered through Philipps-Universität Marburg. The VLC, interestingly enough, got an award for Excellence in Higher Education for 2013, which makes me wonder how MOOCs fit into that award.

The thing with all three of these MOOC providers is that they all had pretty much the same formula: view videos, take some quizzes, and maybe participate in discussions.  In Coursera I did the bare minimum for discussions, in edX I did none, and in the VLC I basically participated for troubleshooting purposes.  This brings me back to the notion of engagement, and how MOOCs, done at the institutional level, really fail to engage.  The VLC folks I'll give a pass for one reason: it seems to me that the European tradition centers mostly on lecture, so in essence the VLC folks made their paid materials open-access and continued with the same approach they had used in their traditional courses.  With Coursera and edX being in the US, while we come from that lecture tradition, we do tend to change things around and attempt new things to engage our learners.  It seems wrong to me to replicate that lecture "feel" in an online course, and by extension in a MOOC.  We need to do something better to engage the learner.

Speaking of engagement, alternative credentialing may be one way to get some of this engagement happening.  For example, in the Mozilla Open Badges MOOC that I took part in last fall, there were badges that marked each step of the process.  Participants wrote up activities each week; if they passed, they got a badge, and if they didn't, they got feedback and were encouraged to revise and resubmit, taking a cue from mastery learning.  That said, even though I racked up the badges, I wasn't as active in the LMS forums because I didn't see as much value there.  The interaction seemed quite didactic in nature, top down, and the forums weren't all that useful.  When the content is, to some extent, user generated, and learners read and react to things posted by the facilitators, and then learners respond to other learners, remix, and redistribute, that's when activity becomes more noticeable.  Or, as was the case with FSLT12, if there is some shared understanding and a common goal, then people feel more engaged and want to participate in forums.  In the Open Badges MOOC I didn't really see that shared goal as much, despite the best efforts of the organizers :) This just goes to show that you can design a perfectly good course, but you are still tied to whoever signs up and participates in it.

It is sad to say that my only cMOOC was OLDS MOOC.  There should have been more; were there more that I didn't see?  Also, OLDS MOOC wasn't really a "traditional" cMOOC in a sense, but rather, I think, it was retroactively named a pMOOC.  I actually really liked OLDS MOOC, and I got a few things from it that I want to fold into the classes that I teach. I want to be on the lookout for cMOOCs next year so I don't miss any good opportunities :)

Speaking of next year: I've decided to give Udacity another try.  I have signed up for a course on statistics (a refresher for me) and a course on the design of everyday things.  I liked the book and wanted to see what the author had to say.  It seems that both are self-paced eLearning.  More on that as I get through the courses.  I have also signed up for two FutureLearn courses to see what's up there: a Corpus Linguistics course and a course called "The mind is flat: the shocking shallowness of human psychology."  I am quite curious not just about the content, but also about seeing another European approach to the MOOC.  I haven't seen anything on edX yet that has really piqued my interest from a content perspective. Maybe the course on Alexander the Great on edX could be a good candidate, but I have so many things on my plate that it would probably be a bad idea to take it on during the spring semester.  Long story short, I am waiting for that singularity of innovation to occur (or have we reached the slog phase of the MOOC?)

As an aside, I am looking for an acronym, relating to MOOCs, that spells MOOF (that way I can use the Clarus icon more often ;-).  Massive Open Online Fun? Have ideas? Leave a comment!

Friday, December 6, 2013

MOOC Participants who liked this post, also found this useful....

Jeeves will point you to the right discussion forum
A couple of years ago, when I was putting pen to paper on my Academic Check-ins paper, I was doing some research into recommender systems: you know, the kind of system they have on Netflix, whereby if you rate a certain product a certain way, or view certain products, more recommendations come up based on your usage pattern of the system.

Now, those systems aren't perfect by any stretch of the imagination, but they can serve as ways of finding some diamond in the rough that you didn't know existed.  Think about it: both in a shopping or entertainment venue and in a MOOC you have one potentially huge issue: limited time to devote, and a large sea of information to go through in order to find what might entertain you, or pique your intellectual interest and get you engaged with some subject.  Last summer, at the end of Campus Technology 2013, I was having food and drinks with new friends and colleagues I had met at the conference.  I brought up a suggestion: what if we could develop a system that could help learners cut through the noise? A system based partly on linguistic corpus analyses of the learners' work, as well as learner psychometrics and learner and learning analytics.

The way that I articulated the system last summer was as follows: learners who are participating in MOOCs, be they cMOOC, xMOOC, pMOOC, or whatever other variety comes our way, would be able to connect their Twitter, blog, Google+, Disqus, and Facebook accounts, and the system would do a linguistic analysis of their posts on these services and compare them to those of other participants in the same MOOC to suggest whom these learners should be interacting with.  This could be based on levels of educational homophily (same-ness) that learners exhibit through their posts.  Here, learners can act as More Knowledgeable Others to help each other grow as learners. To ensure that there isn't a danger of groupthink, the system would also throw in (through a magical algorithm) people with differing points of view. This way learners would have the option to read dissenting views and, hopefully, engage intellectually with that aspect as well.  The level of difference could, conceivably, be something that the learner, the owner of their educational match-making profile, has control over. So if a learner feels comfortable only being stretched so much right now, they can control the level of difference that they are exposed to.
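The homophily-matching step described above can be sketched in a few lines of Python. This is a toy illustration under loud assumptions: raw bag-of-words cosine similarity stands in for real corpus-linguistic analysis, and every function name here is invented for the sketch, not part of any actual system.

```python
# Toy sketch of homophily-based peer matching. A real system would use
# proper NLP tooling (lemmatization, stopwords, TF-IDF) rather than
# raw word counts; all names here are hypothetical.
import math
from collections import Counter

def text_vector(posts):
    """Bag-of-words term-frequency vector for a learner's combined posts."""
    words = " ".join(posts).lower().split()
    return Counter(words)

def cosine_similarity(a, b):
    """Cosine similarity between two term-frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def recommend_peers(my_posts, others, n_similar=2, n_dissenting=1):
    """Rank peers by similarity to me; return the closest matches plus a
    few deliberately dissimilar voices to counter groupthink."""
    my_vec = text_vector(my_posts)
    scored = sorted(
        ((cosine_similarity(my_vec, text_vector(posts)), name)
         for name, posts in others.items()),
        reverse=True,
    )
    similar = [name for _, name in scored[:n_similar]]
    dissenting = [name for _, name in scored[-n_dissenting:]]
    return similar, dissenting
```

Sorting once and taking both ends of the list is the simplest stand-in for the "magical algorithm": the least-similar learners get served alongside the most-similar ones, and `n_dissenting` becomes the learner-controlled "level of difference" knob.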

To take it one step further, learning management systems for MOOCs, such as Coursera, Udacity, and edX, could tie into this system. This way there is one big dashboard of learning analytics and corpus data that can be analyzed to help the learner discover interesting peers, hot discussion topics, and interesting topics to participate in within the discussion forums of those services. So, if I am following #edcmooc on Coursera, for example, and Jeeves (I've nicknamed this system Jeeves) knows about it, then Jeeves would be able to see what I am writing on my blog about this course, how I am reacting to the materials and peers on Twitter and in the Facebook group, and how I am up-voting or down-voting some threads; through a daily email I could be told which threads are "on fire" and worth a look, and which threads or peers I might want to connect to, follow, or respond to. This type of adaptive system would be learning not just from one MOOC, but from all MOOCs I've participated in. And, if I want, from my blog in general.
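The daily-email piece could be as simple as a weighted ranking of thread activity. A hypothetical sketch follows; the Thread fields and the weighting are made up for illustration, and real signals would have to come from each platform's data:

```python
# Hypothetical sketch of Jeeves's daily digest: rank a MOOC's discussion
# threads by recent activity and votes, surfacing the hottest ones.
from dataclasses import dataclass

@dataclass
class Thread:
    title: str
    upvotes: int
    replies_last_24h: int

def hotness(thread):
    # Weight fresh replies more heavily than accumulated votes, so the
    # digest favors threads that are "on fire" right now.
    return thread.replies_last_24h * 2 + thread.upvotes

def daily_digest(threads, top_n=2):
    """Return the titles of the top_n hottest threads for today's email."""
    ranked = sorted(threads, key=hotness, reverse=True)
    return [t.title for t in ranked[:top_n]]
```

The interesting design work is all in the `hotness` function: once signals from multiple services are normalized into one score, the same ranking works across any platform that exposes its data.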

The system would be portable and have APIs to hook into any MOOC platform. This would ensure that the person in charge is the learner, not someone else.  In the summer I was thinking of a simpler system, maybe built into something like gRSShopper or a MOOC platform, but since then I've been thinking that data portability and cross-system compatibility are much more important given the plethora of MOOC platform providers cropping up around the world.

What surprises me is that this idea hasn't gotten any traction yet. I did share it in a public venue among fellow learning enthusiasts. I wonder why I haven't seen anyone else pitch it yet. Your thoughts?

Wednesday, December 4, 2013

Crowdsourcing the PhD search...

Since I have a captive, in a sense, audience, I thought I would use the power of the crowd to help me identify a suitable PhD program for myself :)

Now, over the years I've been thinking about pursuing a PhD, but a sage mentor once told me that I should take at least a year's break from school before making any decisions: essentially, clear my head, think about what I like to do, and then think about what PhD program might be most appropriate for me.  Well, it's been four years, and I've already made a few Excel spreadsheets of potential programs, but they all fall short in some way, shape, or form; usually the main issue is financial ;-)  So, I thought I would tap into the wisdom of the crowds on the web, on #edcmooc, and among people following the Sloan Consortium to see what you all think about my (potentially unreasonable) conditions for the "perfect" PhD program; it can be an EdD too - PhD is just shorthand for the purposes of this post.

Program of Study

My educational background has taken me to many places. All of those places are quite fun, but I think I've settled on the intersection of distance learning, language learning, and educational technology (ICT).  This means that I am looking at universities that have faculties of education, or faculties of liberal arts or language and literature (Applied Linguistics can fit in a variety of places), that can mentor me in these fields.  I am both a US citizen and a Greek citizen, so I guess that opens up doors on a couple of continents.  One initial idea floated to me was the National and Kapodistrian University of Athens, which seemed quite promising, but the recent strikes, and related uncertainty, make it a problematic candidate. I can't even apply since it's closed due to strikes :) Athabasca University is another option: great program, really interesting faculty mentors, and they tackle a topic of interest, distance education. Cost is a bit of an issue. The Open University in the UK was/is also under consideration, but I am not all that familiar with the UK system.  From my various spreadsheets of research into PhD programs over the years, most universities that I scoped out originally are out of the picture because they would require me to move and quit my job - which isn't an option. This leads me to the residency requirements...

Residency Requirements

Since I need to maintain my day job (which I like, by the way) in order to pay my bills, a program of study that requires me to quit my job to be a full-time student, or that requires frequent traveling, is out of the picture for practical purposes. I can take a trip once or twice per year if needed and use up my vacation time to attend required local meetings. So, I would say that my options are programs that are predominantly distance education, or perhaps European programs that work more on an independent-study basis, where I can work on my dissertation from day one and cover any gaps as I need to. Personally, I like the independent-study idea much more, for a variety of reasons - which brings me to entry requirements and coursework.

Entry Requirements & Coursework

One of the things that was disheartening, when I was creating my big ol' spreadsheet of academic programs, was entrance examinations like the GRE. I contacted a variety of programs to see if the GRE requirement could be waived, and I explained my background.  The response I got was a polite, but canned, reply that the GRE is a requirement.  In all honesty, having completed four master's degrees, two of which came with an Academic Excellence award, and having completed 138 credits of graduate academic work, I think institutions can show a bit of flexibility ;-)  This also brings me to the question of advanced standing.  In the US we seem to require a lot of coursework for a PhD above the MA degree. Is it really necessary to take 12, 13, 14, or more courses before you are even allowed to present a dissertation proposal?  In my humble opinion, no.  So, my ideal program would minimize the coursework requirement, allow me to work independently when appropriate, and allow me to work on my dissertation from day one.


Cost

Cost is a big issue. Looking back at my good ol' spreadsheet, I see two basic options:
1. Quit work to get a graduate assistantship that pays for all tuition and fees, but eat ramen noodles for the next six years - that is, if you are lucky enough to recover work-wise and get a job right after you are done with your PhD studies.  This isn't an option.

2. Keep your job, pay your bills, but take out crazy amounts of loans in order to subsidize your PhD studies.  Thank you, but no thanks.  The student loan debt crisis is already bad enough in the US, so I don't want to add to it and put myself into further debt.

Is there a third option? Reasonable tuition that can be paid for in full by grants or scholarships?

My Dissertation idea

So, now we get to the heart of the matter.  What is the dissertation topic?  I've been involved with MOOCs for the past three years now, and online education for five or so. I am interested in continuing to investigate this topic, and specifically I am looking at designing, and implementing, an ESL MOOC based on whatever research is currently out there.  In this MOOC I plan to collect a variety of data.  I haven't decided what I will be analyzing yet for the dissertation part, but in one form or another the following types of data seem to be prime candidates for analysis in a MOOC, and a language learning MOOC at that:

  • Learner scaffolding
  • Learner linguistic production
  • Learner-constructed corpus data analysis
  • Learner participation patterns (a deep topic; can span many media, or just some)
  • Pathways that learners take to learning
  • Learner motivation
  • Learner resilience in massive open learning environments (MOLE!)
So, my friends and peers, do you know of a good place I can take this dissertation topic, where I can be mentored, have a ton of fun with it, and earn a PhD without quitting my job, going into crazy amounts of debt, or wasting time on unnecessary coursework?  Your input is much appreciated ;-)

Monday, December 2, 2013

#edcmooc - A chat with Prof. Eliza

I was thinking about what to create for my digital artefact for EDCMOOC.  My initial thought was to create a sample dialogue between a fictionalized EDCMOOC student and Prof. Eliza.  Prof. Eliza would be, of course, based on the venerable ELIZA computer therapist program. I could then go in and modify the specific psychotherapy lines with something specific to education.

In the faceless environment no one knows if you are a dog, so in online education how do we know if a professor isn't just a machine pretending to be human?  In any case, this was meant to be a parody. There were two directions I could go once I got the dialogue all set:

  • I was going to ask a friend to run some lines with me, with them playing the role of Professor Eliza. The computerized responses, as said by a human, would/should make people think about the appropriateness of professors' responses to learners and how helpful they are.
  • Or, I could text-chat with Professor Eliza and record that, along with the EDCMOOCer's reaction to the text (frustration would be part of the reaction).
Well, since time ran out and my goals were too lofty, I decided to do a little mLearning and use a (new to me) application called Animoto.  I used the dialogue from the ELIZA engine; the "title" text (top) is Eliza, and the subtitle (bottom) text is the response from the EDCMOOCer.  In the end, can machines, or fake artificial intelligences, feel lonely?  You tell me ;-)
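For the curious, the core of an ELIZA-style bot really is just keyword-triggered response templates, which is what makes the "Prof. Eliza" parody so easy to build. Here is a minimal sketch with made-up, education-flavored rules; the original ELIZA script was far more elaborate, with decomposition/reassembly rules and pronoun reflection:

```python
# Tiny ELIZA-style responder: the psychotherapy script is swapped for
# education-flavored templates. The rules are invented for illustration.
import re

RULES = [
    (r"\bI (?:am|feel) (.+)", "Why do you think you are {0}?"),
    (r"\bI don't understand (.+)", "Which part of {0} is giving you trouble?"),
    (r"\bdeadline\b", "How does the deadline make you feel about the course?"),
    (r"\bgrade", "Do grades matter more to you than the learning itself?"),
]
DEFAULT = "Tell me more about that."

def prof_eliza(student_line):
    """Return Prof. Eliza's canned response to one line of student input."""
    for pattern, template in RULES:
        match = re.search(pattern, student_line, re.IGNORECASE)
        if match:
            # Echo the student's own words back inside the template.
            return template.format(*match.groups())
    return DEFAULT
```

The fallback line is what gives ELIZA its eerie staying power: even when no rule matches, the bot appears to be listening, which is exactly the "is the professor a machine?" question the parody plays on.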

Chat with Eliza

Friday, November 29, 2013

#edcmooc Human 2.0? Human+=Human?

Vice Admiral Tolya Kotrolya
Well, here we are! The final(ish) week of #edcmooc.  As I wrote in my tweet earlier this week, I think #edcmooc, like #ds106, is probably more of a way of life for some than an actual course.  Sure, there won't be additional prescribed readings or viewings after this week, so the course is technically over; however, the hallmark of any good course, as far as I am concerned, is one where the learners keep thinking about the material long after the course is done.  They keep engaging with the provided materials, and with any new material they encounter, in critical ways.  In other words, once the course is done we don't shelve and compartmentalize the knowledge gained and leave it to gather dust like an old book on the shelf.

In any case, like previous weeks, here are some thoughts on the hangout, readings, and viewings from this week. To be honest, I don't particularly care about a certificate of completion, but I am interested in designing an artefact as a challenge to myself.  I am predominantly text-based, so doing something non-text-based (or artistic) would push me a bit.  That said, I am not sure if I will be able to do this in one week's time. What do others think about the artefact?  Challenge accepted?

From the Week 3 Hangout

Last week (Week 2 Hangout) the group mentioned a quote, which was repeated (and corrected) this week: "We have a moral obligation to disturb students intellectually" - Grant Wiggins.
This made an impression on me, not because of the content of the quote, but rather its succinctness.  The content is something that I, luckily, have been exposed to in my career as a graduate student.  In my first graduate course, an MBA course, Professor Joan Tonn loved to ask "can you give me a quote?" whenever we claimed something to be true based (supposedly) on what we had read.  This was her way not only to get us to back up our claims with the research we read, but also to get us out of our comfort zone so that we could expand our knowledge and understanding.  I remember my fellow classmates and me scurrying those first few weeks in class to find the page number in our books and articles where our supporting points were mentioned, in order to be able to respond to "can you give me a quote?"

Another example comes from the end of my graduate studies, while I was finishing up my last MA degree.  It seemed that Professor Donaldo Macedo's favorite phrase was "can I push you a little?" In sociolinguistics he "pushed" us to think a bit more deeply about the readings, deconstruct what the readings were saying, and deconstruct why we held certain beliefs. This isn't always comfortable, thus the initial question "can I push you a little?" (you could always say no, by the way).  In psycholinguistics Professor Corinne Etienne kept us on task; like Joan Tonn, she required us to quote something and make sure that what we were quoting supported our arguments and answered the posed questions.  This meant that, unlike certain politicians, we couldn't side-step the original question and answer the question we hoped we would be asked.  These are all examples of disturbing the sleeping giant that is the mind, waking it up to do some work and expand its horizons, not just respond with some canned, comfortable response.

In my own teaching, I have adopted a devil's advocate persona. I try to disturb the calm waters of the course discussion forums by posing questions that may go a bit against the grain, or questions that probe deeper into students' motivations for answering a certain way. I prefer the title of devil's advocate because, unlike "can I push you a little?", it doesn't necessarily mean that I hold the beliefs I am asking probing questions about; rather, it means that I am interested in looking at the other sides of the matter, even if I don't personally agree. I don't participate in many faculty workshops or things like pedagogy chat time, so I often wonder how my peers do this in their courses - intellectually disturb students in order to get them to expand their thinking.  If any teachers, instructors, or professors are reading this, feel free to share in the comments :)

Another interesting point, mentioned by Christine Sinclair, is that it is difficult to mention the contributions of 22,000 participants in a MOOC.  I have to say that it's difficult to mention all of the contributions of a regular 20-student course in one hour's time frame! In the courses that I teach I try to do a one-hour recap podcast every week or every other week (depending on how much content is available, and how conducive the content is to an audio podcast), and I have a hard time finding the time to mention everything that is important to mention!  I can't imagine how many hours it would take to read, prepare, and produce a live hangout that gets most of the contributions mentioned.  The MOOC hangout would be like C-SPAN ;-)

Another difficult thing is figuring out who wants to be mentioned and who does not. This is a problem with a regular course of 20 as well. If you have a public podcast about the course, even if it's only "public" to 20 students, some students don't want to be named because it makes them uncomfortable.  For my own course podcasts I go back and forth between mentioning names and just mentioning the ideas and topics brought up, acknowledging the contributions of students that way. The people who wrote about what I mention in the podcast would know that it was their contribution, and they would (hopefully) feel some sense of acknowledgement, and thus get a sense of instructor presence in the class this way.

From the Videos

In the videos section of #edcmooc this week we had Avatar Days, a film that I had seen before. One of the things I was reminded of was that I really liked the integration of avatars into real-life environments. It is a bit weird to see a World of Warcraft character on the metro, or walking down the street, but it's pretty interesting visually.

I do like playing computer games, but I am not much of an MMORPG person. I like playing video games with established characters, like Desmond (Assassin's Creed), Sam Fisher (Splinter Cell), Snake (Metal Gear), and Master Chief (Halo). I play for the action, and to move the story forward.  For me these games are part puzzle, part history, part interactive novel.  I only play one MMORPG, and that is Star Trek Online. The reason I got sucked into it is that I like Star Trek, and this is a way to further explore the lore of that universe. I have 3 characters in Star Trek Online (one for each faction of the game), and while the game gives me a spot to create a background story for them, it seems like too much work. I really don't see my characters (one of whom, Vice Admiral Tolya Kotrolya, you can see above) as extensions of myself.

Watching Avatar Days again had me thinking: are avatars an escapist mechanism, a way of getting away from the mundane and everyday? Are they extensions of real life, or someone you would like to be, or whose qualities you'd like to possess? How can we, in education, tap into those desired traits that people see in their avatars to help them move forward and accomplish those things in real life?  For instance, suppose I didn't play by myself most of the time, was really active in my Fleets (the equivalent of a Guild in WoW), and wanted to be the negotiator or ambassador to other fleets; I would guess that I would need some skills in order to qualify for that position. Let's say I continue to hone my skills in that role. Now, being an ambassador in real life is pretty hard (supply/demand and political connections), but can you use those skills elsewhere?  This is an interesting topic to ponder. I wonder how others think of their avatars.

True Skin (above) was also quite an interesting short film. The eye enhancements reminded me a little of Geordi La Forge, the blind engineer on Star Trek: The Next Generation.  These enhancements made me think a bit of corrective lenses for people with myopia or presbyopia. In a sense, people who need eyeglasses or contacts to see could be considered more than human if we really thought about it from an augmentation perspective. The portrayal in this video just seems foreign, and thus potentially uncomfortable, because eye augmentation to see in the dark, or to overlay information on what you see, is out of the norm for us at the time being. Another interesting thing to consider is memory aids.  We use memory aids a lot in our current lives.  Our phones have phone books in them, calendars, to-do lists.  If we don't remember the film that some actress was in, we look it up on IMDB.  I remember about ten or so years ago I had a friend who vehemently opposed any sort of PDA (remember those? ;-) ) because he prided himself on remembering his credit card numbers, phone numbers, and other important information.  Sure, some information is important to remember without needing to look it up; however, when you have memory aids for potentially less important information, such as who portrayed character-x in the 1999 remake of movie y, it frees your mind to remember, and work on, other more important things.  This way you are offloading (a computer term co-opted to describe a biological process) less important information to an external device, leaving the main computer (the brain) free to do other things.

The virtual displays on someone's arm reminded me a lot of the biotic enhancements seen in the Mass Effect series of games (speaking of Mass Effect, the background music in Robbie reminded me of Mass Effect). The thing that really struck me in this video was the quote: "no one wants to be entirely organic."  This is an interesting sentiment, but what happens to the non-organic components when the organic dies? Supposedly the non-organic components cannot function on their own, so where does the id reside, and is it transferable to a data backup, to be downloaded to another body upon the organic components' inevitable death?   The last question about this video is: when will it become a series on TV? ;-)

A quick comment on the Gumdrop video: I loved it!  It reminded me of a BBC series called Creature Comforts, in which they recorded humans and matched them with animal personas (I guess), so the claymation animal was saying what the human had spoken.  Gumdrop could very well be a human voiced by a robot.

Finally, a quick note about the Robbie video. This video was a bit of a rough watch. The first thing that surprised me was that the space station was still in orbit after all those years; I would have assumed that eventual drift would cause it to fall into Earth's gravity and crash. While watching the video I was wondering when it was taking place. I kept thinking "how old would I be in 2032?" and I made the calculation.  Then "how old would I be in 2045?" and I made the calculation, and then Robbie mentions that he (she? it?) has been waiting for 4,000 years. At that point I stopped counting, knowing that I would be long dead when Robbie's batteries died. When the robot mentioned that he lost contact with Earth, the first thing that came to mind was a scene from Planet of the Apes; specifically the end, where the main character says "Oh my God. You finally really did it, you maniacs, you blew it up." I am not sure what that says about me, but I would surely hope that they weren't responding to this robot because things went sideways on the surface of the planet.

From the Readings

Finally there were some interesting things that I wanted to point out from the various articles that we had for this final week. In Transhuman Declaration there was this belief or stance (italics my own):
Policy making ought to be guided by responsible and inclusive moral vision, taking seriously both opportunities and risks, respecting autonomy and individual rights, and showing solidarity with and concern for the interests and dignity of all people around the globe. We must also consider our moral responsibilities towards generations that will exist in the future.
The thing that stood out to me was the invocation of morality.  I haven't really thought about the nature of morality in quite some time - or rather, I haven't had to debate it. That said, I am curious whether morality and moral behavior is a standard, or an expected standard, amongst human beings, or whether it falls under the category of "common sense" - which, as we know, isn't all that common, but rather is made up of the cultural and lived experiences of the person who holds these things as common sense.  Is morality something that is malleable, or is it a constant?  If it's malleable, what does that say about the expectation to act morally? If you harm or injure someone or something while trying to act morally, does that negate or minimize the fact that you have actually harmed them or stepped all over their rights?

The final article, for me anyway, was Humanism & Post-humanism. Here is something that got the mental gears working:
In addition to human exceptionalism, humanism entails other values and assumptions. First, humanism privileges authenticity above all else. There is a "true inner you" that ought to be allowed free expression. Second, humanism privileges ideals of wholeness and unity: the "self" is undivided, consistent with itself, an organic whole that ought not be fractured. Related to the first two is the third value, that of immediacy: mediation in any form--in representation, in communication, etc.--makes authenticity and wholeness more difficult to maintain. Copies are bad, originals are good. This leads some humanisms to commit what philosophers call the "naturalistic fallacy"--i.e., to mistakenly assume that what is "natural" (whole, authentic, unmediated, original) is preferable to what is "artificial" (partial, mediated, derivative, etc.).
What really got me about this is that in humanism there seems to be no space for the fact that, while we do process things differently, we aren't really 100% unique as individuals.  The old adage of standing on the shoulders of giants goes beyond academic writing.  We are the sum (or sometimes more than the sum) of our experiences, which encompass human relations, education, personal experiences, environmental factors, and many, many more things.  We can be clever, ingenious, and visionary, but we weren't born with all of what we need; we acquired it along the way, and it shaped us into who we are.  We can be authentic, but we can't be authentic without other people around. Others both shape us and allow us to show our individuality and elements of authenticity. Thus, while we may not copy verbatim, we do copy in some way, shape, or form, and we remix that into something that makes it "new" and not a copy of something.

Furthermore, this whole notion of wholeness is where Carr's Is Google Making Us Stupid? article came in for me. One of its laments (I won't go into everything in the article) seems to be that people skim these days, that they don't engage deeply because the medium of the web has trained us (or derailed us, as the reading might imply), since there are way too many flashy things on the screen vying for our attention.  I completely disagree.  Even when people had just plain old, non-hypertext books, things kept vying for our attention. If we are not interested in what we are reading, it is more than easy to pick up that comic book, sing along to that song on the radio (or the MP3 player), or call your friends and see if they want to hang out.  Even when you're engrossed in traditional, non-hypertext materials, footnotes or endnotes that give you a lot of supplemental information take you out of the flow of your reading.  Deep reading isn't a technology-related issue, but rather a more complicated (in my opinion) endeavor that has to do with reader motivation, the text's relevance to the reader, text formatting and typesetting (ease of reading), and the mechanics and grammar of the text - i.e., the more clunky or "rocky" the text, the more inclined the reader will be to skim it or avoid it altogether.  There are more critiques I have of Carr's Atlantic article, but I'll limit myself to this one.  Now, back to Humanism & Post-humanism. Another interesting quote (italics my own) is as follows:
most of the common critiques of technology are basically humanist ones. For example, concerns about alienation are humanist ones; posthumanism doesn't find alienation problematic, so critical posthumanisms aren't worried by, for example, the shift from IRL (in-real-life) communication to computer-assisted communication--at least they're not bothered by its potential alienation. Critical posthumanisms don't uniformly or immediately endorse all technology as such; rather, it might critique technology on different bases--e.g., a critical posthumanist might argue that technological advancement is problematic if it is possible only through exploitative labor practices, environmental damage, etc.
This is a pretty interesting thought, that most common critiques of technology are humanist ones.  It reminds me a lot of my Change Management course when I was an MBA student and the book Who Moved My Cheese? Well, I saw it as a children's book, but it may not be; it's probably a tale that can be dissected and critiqued quite a lot from a variety of stances. The thing that stood out for me is the worry that technology has the potential to alienate by not having people communicate with one another in established ways - but what about people who already don't communicate well in established ways, yet can use ICT to assist with communication?  The usual example of this is students in classes who are generally more timid or laid back.  In a face-to-face classroom, which has space and time limits imposed by its very nature, the students who are more outgoing and outspoken might monopolize the course time. This won't give learners who are not as outspoken an opportunity to chime in, or to share their point of view or understanding once they have processed the readings for the course - things that could move the conversation and the learning forward in interesting and unforeseen ways.

In an online or blended course, however, learners have affordances that are not there in a strictly face-to-face course.  They have time to chime in, so the conversation can go on longer and more things can be teased out of a discussion topic.  Furthermore, students who aren't as outgoing in the face-to-face classroom have an opportunity to take the microphone (so to speak) and share their thoughts on the subject matter being discussed.  Instead of vying for the limited air time of a face-to-face classroom, ICT has the potential to democratize classroom discussion by providing opportunities for all to contribute.  Technology by itself won't be the panacea that makes this happen, let's not kid ourselves; there are many underlying foundations that need to be in place for students to use the affordances of ICT effectively.  That said, this is a case where ICT has the potential to bring together, not alienate, fellow interlocutors and travelers on the path of learning.

So, what are your thoughts? :) How does Human 2.0 sound?

Saturday, November 23, 2013

#edcmooc - almost human

Man, it's been a crazy week.  I've been jotting down notes for this post from the various viewings, readings, fellow blogger posts, and discussion forums.  This was meant to be several posts over the week, but it all wrapped up into one big thing. Oh well.  Such is life ;-) This week I'm creating some category headers to make things easier to read.

From the week 2 synchronous session

The synchronous Google+ session was pretty interesting, and from it came a few points to ponder.  One of the participants of EDCMOOC asked whether it is necessary for everything to be a game.  The question was probably geared toward questioning gamification, with the implication that learning shouldn't need to entice learners to partake in and engage with it.  I personally disagree.  Everyone finds a reason to participate, or not participate, in a learning venture.  For some people learning is a thrill ride, so even when they are down in the dumps and struggling with material they are having fun.  For others, struggling may feel like being publicly flogged - not a nice feeling to have.  By incorporating game mechanics into learning you aren't just trying to make something more enjoyable; rather, I would argue, you are trying to provide additional appropriate supports, like learning scaffolds, and appropriate rewards for meeting certain crucial checkpoints. Focusing only on the badge or the fun aspect really does gamification a disservice. A related comment had to do with the notion that if we view education as a game, then we will find ways to "beat" it or maybe "cheat" our way through.  Personally, I think students already do that, even though we're not treating education in a game-like manner.  Students always haggle for one more point on that exam, or for ways of getting extra credit, or they figure out the professor's preferences and just regurgitate what they think the professor wants to hear. In these cases there may not be any actual learning happening, but rather a way to "cheat" the system. We, as humans, are problem-solving animals.  Gamification or not, we'll try to beat the system.

A follow-up comment questioned the necessity of viewing everything through a competitive lens. The implication is that learners work in a solitary manner, in a zero-sum environment, where my win means your loss.  So, as the commenter asked, why not work together? Education doesn't have to be zero-sum.  We can, and in fact do, work together. But it all comes down to each person's individual goals and motivations for being part of an educational venture. How you as a learner traverse the path of the course from start to end will depend on a lot of things, including your own motivations.

Finally, there was an interesting discussion on essays which came about from the readings on automated or peer grading of essays. There were two distinct points that came out:

  • Why write it if no one else will read it and interact with it? (Jen?)
  • Isn't one of the points of essays to have that conversation with yourself? Taking a conversation and internalizing your conversational partner (Hamish) 

For what it's worth, I think that writing, as a process, has both internal and external motivators.  While many of us would like to engage with others through our writing, that's not the only motivator for it.  There are many blogs out there with few readers, this blog included. I don't necessarily write to have people read my posts and engage intellectually with me; writing is a way of discussing various views with yourself, and by doing it openly you have the opportunity to involve others in the process, or to share your understanding with them. In one case (discussion with one's self) you are pushing yourself to become more knowledgeable, while in the other (discussion and engagement with others) you are potentially engaging in a Vygotskian dialogue where you are the More Knowledgeable Other in some cases and your peers are in others, thus expanding through common dialogue the overall knowledge and (hopefully) understanding in the network. When it comes to automated essay scoring, I already mentioned in a previous post that the emphasis is placed on the grading, not the process.  Based on this hangout recording, I would also add that essay scoring is potentially detrimental to the discussion with one's own self, because you are no longer writing to engage with yourself or others but rather to beat the automated algorithm that scores your paper.

From the Week 3 viewings

There were a few interesting videos this week to poke around the old "meat brain" and make it do some work. They're Made Out of Meat is pretty hilarious, and the World Builder video was quite touching. It reminded me of the Animus in Assassin's Creed and moving around within it, both the reconstructed environments and the blockier "getting your bearings" environments (the gameplay introduction). World Builder specifically made me think a bit about what it means to be disabled: if our bodies are incapacitated but the spirit (the ghost in the shell, if you will) is there and ready to participate, what does that mean for the human ways in which we can interact?  If there were a separate, but connected, matrix-like reality that connected the minds of people in comas, what would that do to our definition of human communication?

This brings us to the difficulty of defining what is being human and Fuller in defining humanity.  In thinking of this TEDx video it reminded me of some really interesting characters that I've come across over the years in Science Fiction shows that are either androids, cyborgs, or some  other type of artificial intelligence.

Lt. Commander Data, Star Trek
One of the first characters that comes to mind is Lt. Commander Data on Star Trek: The Next Generation. Throughout the series, and the subsequent movies, the audience (and the cast of the show) explore what it means to be an individual and to be human.  Data refers to his maker (Dr. Soong) as his father, and Dr. Soong's significant other as his mother.  Data keeps up his pursuit of becoming more human by experimenting with art, having a pet (Spot the cat), having romantic relations with a crew member, and, in the series, creating an android of his own as a way of procreating. When Starfleet engineering wanted to take him apart to learn what makes him tick (and risk damaging him), the whole issue of what it means to be an individual came up again.  Finally, at the end of the last TNG movie, Data sacrifices himself to save his friends and the crew.  As a fan this annoyed me, but in the comic leading up to the 2009 Star Trek film we see Data back from the dead after a copy of his memories and experiences was transplanted into B-4, a more primitive android of the same make. Eventually B-4's primitive positronic brain adapts to accommodate all of Data's personality.  So, what does that mean in terms of who is human? How does it connect to World Builder?

Finally, with Data, what I found interesting was that a Vulcan (arguably another kind of human) told Data in one of the episodes that Vulcans strive all their lives to reach what Data has had since birth, but on the other hand Data would gladly give it up to be more human.

Rommie, Andromeda
Next we have another Roddenberry creation: Andromeda. In Andromeda the spaceships have avatars that help the ship's AI communicate with the crew and captain.  The relationship between ships' avatars and their captains seems to take on more intimate dimensions from time to time, since it appears that these artificial intelligences aren't cold, calculating war machines, but rather symbiotic beings capable of fondness, caring, or even love.  As Data, from Star Trek, would say: "As I experience certain sensory input patterns, my mental pathways become accustomed to them. The inputs eventually are anticipated and even missed when absent."  The interesting thing about Andromeda (or "Rommie") is that she exists in at least three places simultaneously: an android body, a holographic projection, and on a computer screen.  There have been many times where all three personifications of Andromeda have conversations to work out problems and figure out courses of action.  This reminds me a lot of what Hamish mentioned in the Hangout about writing in order to have a conversation with one's self and work out issues.  I am currently re-watching Andromeda - I'm partway through season two - since I missed a lot of it when it was originally on television.  Who knows what else comes up in terms of being human as the series progresses.

Cameron, Terminator

The next character that comes up is Cameron from Terminator: The Sarah Connor Chronicles. In the movies the terminators are portrayed as cold, heartless machines that do what they are programmed to do, which usually involves lots of killing of humans. However, when John Connor (from the future) captures and reprograms one of them (Arnold Schwarzenegger), that cyborg is turned to protecting John's younger self in the past.  The television series took a different approach from the movies: not all of the terminators were portrayed as single-minded assassins sent to the past.  Cameron, the protector cyborg, shows us a glimpse of what might be happening in that metal head of hers.  She doesn't just adapt by using new slang and stances and acting "more normal" to fit in.  It seems to me that the writers of the show tried to show us her interest in the arts when she practiced ballet on her own in one of the episodes, even after the mission that required knowledge and skills of ballet was over.  She kept on experimenting with it.  She also showed an emotional component, a connection to John's younger self.  This may have been a preservation mechanism; we don't know. The series was killed off after two seasons, but it would have been interesting to explore more of what constitutes being human and what sort of human elements these killer cyborgs can show.  Another interesting character is the T-1001 terminator, Catherine Weaver, and her "son" John Henry - too much to go into at this moment, though.  This was a good series ;-)
Dorian, Almost Human

Finally, I was thinking of the Cylons in Battlestar Galactica, both "chromejobs" (robots) and "skinjobs" (humanoids, made of flesh and blood), and the duality of what it means to be non-human.  Both chromejobs and skinjobs were Cylon, the "enemy" of Adama's ragtag crew. The chromejobs came before the skinjobs, but were they really seen as equals?  I don't remember a lot from the series, since it's been a while since I've seen it, so I will focus instead on my last case: Dorian, the android from the new series Almost Human. At this point there have only been two episodes, so it's not easy to say a lot about these characters, but the portrayal of androids in this series, and of Dorian in particular, is pretty interesting.

Dorian was decommissioned and replaced with more sterile, tactical police androids because the DRN (Dorian) model's emotion engine made them "unpredictable." I guess by "unpredictable" the scriptwriters meant to imply that they were quite human and acted "illogically," as a Vulcan would put it. It seems to me that in high-risk situations where police need an android partner, that unpredictability would be a benefit, not a hindrance. From the two episodes we've seen Dorian in, it would appear that he has feelings toward the humans he cares for, but also for fellow androids.  When an android was put down (through no fault of her own, it should be added), Dorian stayed in the lab to ease her passing, in a way. Is this a human trait?  Do humans actually do this? I would argue that this - empathy - isn't a universal human trait.  I'm quite curious to see what the writers have for us as the season continues.
Finally, back to the TEDx talk: it was interesting to think about the "elevation" or raising of all humans to a certain level as coming in with Christianity - a concern about the poor that wasn't there before. Since I don't have much background in this arena I'll go with it, but it makes me think about the current rhetoric about "democratizing" education with MOOCs by serving underrepresented student populations or the less advantaged in developing nations.

From the week 3 Readings

Finally, there were some interesting articles this week, although I must admit that I didn't find them as interesting as those of the past couple of weeks.  There were, however, some interesting points made in the Human Element article. One of them keeps coming up over and over again in one of the courses I teach. The point is as follows (quotes from IHE):
But Hersh believes there is another major factor driving the gap between retention rates in face-to-face programs and those in the rapidly growing world of distance education: the lack of a human touch.
One of the misconceptions that students have coming into their first online course is that they expect that online courses will be a straight replication of the processes and procedures that exist in face-to-face courses.  This, of course, is not possible, and having such an expectation will lead to an inevitable sense of disappointment.  Recording a play and posting it on YouTube for viewing will not give you the same feeling and engagement you have when you go to the theater. The audience will engage differently in the theater than they will on YouTube. Thus, if you are presenting and attempting to engage in a new medium using another medium's rules and expectations for action and reaction you will not be very successful at your end goals. Distance education doesn't have a lack of human touch, it's just a different human touch than people are expecting.

The other thing about this article is that I don't understand what his "Human Presence Learning Environment," based on the Moodle LMS, has that is so different from other learning management systems. Just incorporating video doesn't seem like such a great leap forward, and these days you can do this with many external providers, including Google+. It seems to me that they just wanted something new in terms of a name or an acronym to get their 15 minutes of fame.  A fellow academic, a couple of decades older than I, told me recently that when he was first starting out he thought new acronyms that viewed something existing from a slightly different angle were silly, but he found out that this was the way to get funding for research.  Reviewing, validating, and reframing the existing just isn't sexy enough to get you attention.  Too bad for our profession.

Finally, I think that in MOOCs the "human connection," whatever that might be, can help make MOOC "completion" rates higher, however you define completion (I personally am still a skeptic on this front). But I am wondering how one can increase communication when you are in the virtual equivalent of a stadium, with many unknown peers, and facilitators moving around in the crowd in their bright yellow shirts, handing out participation stickers and handing the microphone to someone with a bright idea.  There is an idea that has been brewing in this arena since last spring; more on it as I hash it out.

Next was the article Human Touch. This article reminded me of a classmate I had once, who never allowed his two kids to use a computer, watch TV, or play video games. I think that this quote from the article perfectly summarizes his position:
A computer can inundate a child with mountains of information. However, all of this learning takes place the same way: through abstract symbols, decontextualized and cast on a two-dimensional screen. Contrast that with the way children come to know a tree–by peeling its bark, climbing its branches, sitting under its shade, jumping into its piled-up leaves. Just as important, these firsthand experiences are enveloped by feelings and associations–muscles being used, sun warming the skin, blossoms scenting the air. The computer cannot even approximate any of this.
Having grown up in what is really a village, with lots of land around me, and dirt, and some farm animals, I think that this is an important part of growing up: the great outdoors, the fresh smells, the dirt (and the subsequent washing up).  However, I wouldn't exchange computing for this, nor this for computing.  I think that these days there are ways of thinking that need to be nurtured, rather than just treating the computer as a tool to be learned in your final year.  There are measures of creativity that can be accomplished with games and computing that cannot be accomplished in real life, and there is also a lot of real life that cannot be accomplished virtually. This is something I saw as a computer trainer in a previous job.  Many students had learned the tool mechanically, so when an update came and things moved around, they had difficulty being critical and finding the right information in that giant mountain of information.  The skill they had picked up was following a predetermined critical path, not finding their own critical path through a mountain of information.  This, to me, is much more important than having someone learn how to use a computer as a tool in their final year.
Of course, computers can simulate experience. However, one of the byproducts of these simulations is the replacement of values inherent in real experience with a different set of abstract values that are compatible with the technological ideology. For example, “Oregon Trail,” a computer game that helps children simulate the exploration of the American frontier, teaches students that the pioneers’ success in crossing the Great Plains depended most decisively on managing their resources. This is the message implicit in the game’s structure, which asks students, in order to survive, to make a series of rational, calculated decisions based on precise measurements of their resources. In other words, good pioneers were good accountants.
I remember playing Oregon Trail when I was in high school on an Apple IIGS.  Of course, by that time it was old, but I didn't care because I had not grown up with this technology.  I approached the game as a game, not as a way to learn history.  I think that games will always fall short of the goals we want them to reach; there is just no way, with today's technology, to reach the levels of sophistication portrayed in Star Trek's holodeck.  I think games are a good start to get a hook into student learning. We can then take that interest and expand upon it with additional information that would benefit learners in the long run.  You could even tie in the great outdoors!  Give the learners the materials that frontier settlers had, and tell them that they need to solve a problem with these seemingly unconnected materials.  Make them junior MacGyvers and help them learn a lot of different skills, not just names, dates, and resource management.

Finally, a funny xkcd comic shared by a fellow participant in week 2.

Simple Answers to Technology

What do you all think of these things?

Wednesday, November 20, 2013

SPOCs are illogical

Angry Spock (Star Trek reboot)
OK, OK... the title was easy pickings, but this article is quite serious.  I've chosen to ignore, for the most part, the whole idiocy of the term SPOC (small private online course).  SPOCs are really just "regular" online courses, as I've written in my one other post about SPOCs. It bothers me that there is so much revisionist history around the topic of "traditional" online education, with articles such as these where organizations like Colorado State University claim to be "pioneers" in SPOCs because they've been doing online education for the past five years.  A whole five years? Our fully online Applied Linguistics MA has been around for eight years, and our overall organization, UMassOnline, has been around for about ten years doing "SPOCs." Maybe we are pioneers too, who knows, but it's really difficult to critically discuss MOOCs, traditional online education, and flipped classrooms when people muddy the waters with SPOCs, another useless acronym that overlaps with existing terms.

So, I was pretty content to just ignore SPOCs, but then a blog post came across my Twitter feed (I think courtesy of EDCMOOC) that I couldn't ignore from a philosophical perspective. It was this article, along with a colleague of mine mentioning the term (which made me almost gag), that really was the impetus for this post. In the article, SPOC is succinctly defined as:

The term “SPOC” has been used to describe both experimental MOOC courses where enrollment was limited as well as packaging options whereby academic institutions can license MOOC content for professors to use as components in their traditional courses.
This is a good place to point out that an "experimental MOOC" is redundant.  ALL MOOCs at this point are experimental.  We haven't cracked this nut, so we're experimenting with large-scale online courses and various evaluation mechanisms in an environment where we're not as worried about accreditation and academic honesty.  Sure, we pay lip service to academic honesty by clicking the little "I am in compliance with the honor code" button, but at the end of the day no one is risking their reputation as far as academic honesty, retention, and measurable outcomes go.

Beyond the whole experimental thing, I should point out - and will elaborate later in this post - that MOOCs and licensing are antithetical to one another.  Part of MOOC is Open.  We can argue all day about what "Open" really means, but at the end of the day the Open in MOOC was intended to mean free to use, free to remix, free to repurpose, and free to feed forward. But for now, let's focus on the limited enrollment:
One of the most successful limited-enrollment MOOC/SPOC classes was CopyrightX from Harvard that only allowed 500 who made it through an application process to enroll. The course was still free, but students who took part were expected to be full participants (not auditors or dabblers), and the combination of limited enrollment and a decent-sized teaching staff meant that students could be given research and writing assignments that would be graded by teachers vs. peers.
Last summer I was having a chat with a respected friend and colleague, over beers, after the end of Campus Technology 2013. My colleague works for an entity that deals in MOOCs, and the organization does cap courses for one reason or another.  When I discovered this, I shot off the first volley and proclaimed that those courses weren't MOOCs if they prevented more than X amount of people from signing up. An interesting discussion ensued whereby I was able to work out and better articulate (and understand) my own positions on MOOCs and course caps.  At the end of a very interesting discussion, this is what I came up with: it's perfectly fine to have an enrollment cap in a MOOC if it's for one of two reasons. (1) You are unsure of the various technology pieces, and thus you want to hold some variables constant while you stress-test your system; after all, you don't want a repeat of the Georgia Tech MOOC #Fail. Or (2), the other acceptable reason, for me anyway, to cap the course is to experiment with some sort of new pedagogy, design, or technique where you want to make sure you aren't juggling too many things; having fewer students is thus preferable for research purposes.

That said, even with lower course caps, this doesn't make it any less of a MOOC.  After all, as I have argued in previous posts, Massive is relative. Some courses will garner 100,000 students because the barriers to entry are lower, and others will only get 100 because the barrier to entry, such as discipline-specific prior knowledge, is pretty high.  Furthermore, the CopyrightX course isn't really a MOOC in my book - not because of the course cap, but because of the way they approached the course.  They expected each and every student to participate based on their own rules, and they treated the course like a web version of a large, auditorium-delivered course. This came part and parcel with the assistants they had helping out in the course. This wasn't a MOOC. Perhaps it was more along the lines of a traditional online course, but calling it anything other than a free traditional online course is disingenuous and shows that there is no understanding of past precedents.  Next we have the whole sticky issue of licensing.
 The licensing of edX content to San Francisco State College that caused such a ruckus earlier this year represents the other phenomenon being commonly referred to under the name SPOC.  In that case, the same material you or I would see if we enrolled in a MOOC class (such as the lecture videos, quizzes and assignments associated with Michael Sandel’s HarvardX Justice course) would be given to professors who would be free to pick it apart and put it back together in order to customize their own classes in a way that represented their preferred combination of their own teaching resources and third-party materials.
I have two problems with the notion of licensing MOOC content, both of them philosophical. First, as I said above, we've established that MOOCs have an Open component for use, reuse, remixing, and redistribution.  This also happens to be among the tenets of Open Educational Resources.  Sure, with OER you are technically providing materials under an open license, but the language used in the discussion over licensing MOOC content is much more commercial in nature.  It's seen as a way of making money for venture-capital-funded MOOC LMS platforms like Coursera and Udacity.  In addition to the philosophical issue of what constitutes open, I have an issue with the crazy amounts of money pumped into VC-funded ventures, which might well end up raising tuition and fees for students who are paying to get their accredited degrees. So, in addition to signing contracts with these companies, giving them the right to redistribute the content, and handing over a considerable chunk of change to design or run these courses, we have content locked up in a closed system. This is a far cry from the Open we envisioned before edX, Coursera, and Udacity came onto the scene.

This reminds me of parallels in the academic journal publishing industry.  Authors do the work for free.  For the most part, editors also do the work for free.  Journals, however, cost, and they cost our libraries a pretty penny for access to content written by those same authors (and their students).  If you are designing MOOC content with the intent of making a profit by reselling it to classroom flippers, then you're not making a MOOC; you're just developing content, like people have done in the past.  If MOOC content is available freely for use in other courses - small, large, campus, online, flipped, blended, or whatever - you don't need to call it by a new name.  Just use the OER like we've used it before.

Your thoughts on the matter?

Saturday, November 16, 2013

#edcmooc - Where do you want to go today? Build that bridge to your utopia

So, we are at the end of Week 2 of #edcmooc, and we are wrapping up the unit on Utopias and Dystopias, and everything in between (because nothing is really that black and white). As with the week before, there were some videos to watch and think about. I think that the no-lecture-videos format works well.  I like to see what people do with certain conversation starters and where they go with them. As I said last week, even though this course is run through coursera, it's very much a cMOOC format to me.

One of the videos presented was the one below on bridging the future.  Honestly, this video seemed really cool, and a nice proof of concept of what could potentially be done with technology. Students, in this case, seem to be using junior versions of tools, like CAD, that professionals use to do their work. This seems useful for learning concepts, but also for beginning to learn the tools that are used in real life for these types of tasks.  The one concerning thing that I saw was the lack of books.  Don't get me wrong, I do my fair share of reading on eBooks, but those tend to be non-academic.  If I need to have several books open at the same time, an eBook just doesn't cut it.  I don't have the money for five iPads to do what people did in Star Trek with PADDs. I am also wondering what the cost of these things is.  I know that the overall cost tends to go down over time, but I am also considering the cost of not equipping classrooms everywhere with this, thus expanding the gap between the haves and have-nots. While this future is cool, it's no utopia, and it's no dystopia.  As I said before, one man's utopia is another's dystopia. What's important is: what can we do with this setup that our current setup does not allow?


As a side note to the video discussion, the video "A digital tomorrow" (see below) was pretty funny.  It may seem dystopian at first, but I think it's probably indicative of what the future may look like.  There will be some pretty interesting technology, but it won't work as well as the advertisements say it does, or as people imagine the future to be: flawless, where everything works.  The visuals also reminded me of the jPod TV series.


On the article front, the articles were pretty interesting reads, but I'll only focus on two articles: Metaphors of the Internet and the article on Peer Reviews vs. Automated Scoring.

The metaphors article brought me back to my days as a linguistics student (a few years back) with its mention of emic and etic perspectives.  It also reminded me of schema activation from that same applied linguistics work.  It was pretty funny to me how Rheingold is painted as an internet critic and a critic of "other forms of electronic communication [who] often cite[s] commodification as a problematic, destructive force on the Internet," especially since the article was written in 2009, by which time Rheingold seemed to have become an internet "convert," advocating harnessing the internet and its social element to amplify our collective intelligence.  Is this just an honest oversight by the author? Or a case of selective bibliography or interpretation?

Metaphors are pretty good at getting people started with understanding a new thing.  They activate schemas in our existing knowledge that help connect what we are learning to what we already know. They are, however, only a beginning. Our understanding of the new should go beyond connecting it with the old; we should understand the nuanced differences between the old and the new.  This article reminded me that when the internet was young, and I was starting to learn about it, I didn't have any metaphors for it.  Computing was also new to me, my English was still improving since Greek is theoretically my native language, and existing metaphors like "highway" really meant nothing, because my notion of a "highway" (Εθνική Οδός) was essentially no different from a long stretch of 2.5 lanes.  I guess my notion of the internet was a place to find things. Maybe the best metaphor I could come up with is the bazaar.

In the other article, one of the things that really came to mind was that way too much emphasis was placed on the grading aspect of the essays (the raw score) and not enough on the commentary.  When someone grades a paper, or any assignment that is something other than formulas, there are two aspects: the raw score from a rubric and the comments on the essay. Even if someone gets a perfect score on their essay, that doesn't mean they've reached the apex of their performance.  They can still do better and improve, and this is where instructor comments come in.  You can get 100% on an assignment and, at the same time, still improve your work. You do this by reading the comments from your instructor (or more knowledgeable other) and applying them to your day-to-day work.  Mechanized grading, or peer grading, can give you the same raw scores for some very basic essays, but the commentary for improvement won't be there, not to mention that when essays stray from a prescribed format they will be marked wrong even when they are not.

Finally, in the forums there was also a lot of great activity. I went in and up-voted a few things that stood out to me, but it would take more than one, two, or three blog posts to discuss all the interesting sparks of imagination in the forums.  For the time being, I picked one thread that ties into my other MOOC thoughts.  This thread was: "Would you pay for a MOOC?" The question was:
Would you pay for a degree taught in MOOCs? More importantly, and a topic in and of itself, would businesses and industries hire people who have learned in this type of environment?
Jen Ross asked in this discussion:
Great post, Alan - maybe the question isn't so much 'would you' pay, but 'how much' is a MOOC worth? What is it that we pay for when we pay for education? 
I honestly think that, the way things are today, what we pay for is Accreditation.  Of course, that presupposes a valid pedagogical model and faculty contact time, however one may measure that. In the US, this seems to mean measuring "butts in seats" time in many instances. So having a subject expert teach for a certain amount of time, followed by some sort of summative examination, ties together to give us accreditation. This may seem like a really bleak view of education, but with many people going to school for employment purposes, that seems to me to be the main impetus for paying for educational services.

I personally wouldn't pay anything for a MOOC. A MOOC is open, and thus, for me, free. The certificate of completion that coursera, udacity, and EdX hand out at the end does not mean much in the real world at the moment. However, it is a nice memento of my time in the MOOC! Like others said in the discussion thread, I would probably donate the cost of a cup (or two) of starbucks coffee toward the MOOC if it helped support the infrastructure that makes the collaboration possible. But paying as a prerequisite to participate - no.  Hamish Macleod pointed out that he contributes to wikipedia every now and again because it is a valuable tool for his job. I think this is an apt analogy for MOOCs.  Furthermore, I do think of the open in MOOC as free.  Content usually isn't open as in OER open, so open must mean free.  Otherwise, what could open mean?  Open to enrollment?  So are college courses, and they have been for quite some time. So what? :)

I also liked this quote from Roberta Dunham:
MOOCs are great ways to share learning without having to deal with the organized higher education syndicate.
I think Roberta hits on an important point, and one of the intents of the original cMOOCs. I think we've come full circle, and if we haven't yet, we may be pretty close.  Keep thinking freely :)

Last thought (more of a don't-let-me-forget type of thing): the issue of accessibility came up this week in #edcmooc, accessibility of two types.  On the forums, accessibility was discussed from a health standpoint - people with disabilities and their access to MOOCs; on twitter, the issue was the digital divide (and I would add to that computer, information, and critical literacies) and access to MOOCs.  This is a major topic in my mind - but subject to a future post :)

Friday, November 15, 2013

Video Games and Learning MOOC - process thoughts

Over the past few weeks I've been dabbling with a course on coursera designed by two professors from the University of Wisconsin-Madison.  I didn't realize who they were (Squire and Steinkuehler) initially, but as the "course" progressed I realized that I had read some of their work before when I was reading about video games and learning.  An added benefit was that there were some guest appearances by Jim Gee, someone who is mentioned quite often in the department I work in (Applied Linguistics at UMass Boston) and whose books on video games and learning I've enjoyed in the past.

Since there really wasn't a lot for me to react to while the MOOC was in session I decided to hold off and do one summative post at the end of the MOOC, which just so happens to be this week.

So, the first thing that struck me was the insistence that the "M" in MOOC stands for "Massively."  This is just wrong. It's a Massive Open Online Course, not a Massively Open Online Course. I know it's a small, picky thing, but they do mean different things: Massive and Massively are two different words.  Massive means many people enroll, and the course is Open (whatever value you may ascribe to Open).  Massively Open means something else, like "hey, look at that gaping sinkhole! It's massively open wide!" I guess a course could still be Massively Open, as in there is no copyright, everything is downloadable, remixable, and redistributable, and it's all free, no questions about it. But this is not the value ascribed to the "mass" part, so let's just get it right: Massive, not Massively.

Now that I've ranted a bit, what was my motivation to join? I was interested in the topic, and I saw Jim Gee's name, someone I've heard about for a few years now through my own department's work, as I previously mentioned. I also found it motivating not to have to deal with "certificate of completion" tasks like silly little comprehension quizzes, and instead to work toward contributing to a research project. From the weekly introductions to the course, I gather that other participants felt they were guinea pigs for the instructors' research and wanted other, more "substantive" evaluations of their work.  Personally, I think that contributing to research was much more interesting, and completing these little assignments allowed participants who were new to this to see what is entailed in figuring out what people learn from video games, and how the conditions and results of that learning are tested in these environments.

One of the data-collecting assignments was the Week 1 discussions.  I think they were really good in that they allowed learners to post something other than text. The assignment was somewhat open-ended (with the exception of the time limit); it allowed people to experiment and post something that discussed the issues of week one without seeming stilted. Even though something other than text was posted, it still leads me to my previous conclusion that discussion boards are not that great in MOOC environments.  There were multiple threads per game, and at the end of the week it really was a bit unwieldy to go through more than 60 pages of discussions.  Perhaps grouping discussions by game might work better; or, if a topic (a game, in this case) has already been posted, you don't let people post on the same topic again - you encourage them to participate in existing threads about that topic.  This way you don't have zombie threads going around with one reply or none. A discussion forum is useless if no one discusses.  It's nice for data collection, but not for discussion.
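To make the "one thread per topic" idea concrete, here is a minimal, purely hypothetical sketch (not any real forum's API): new posts on an already-discussed game get routed into that game's existing thread rather than spawning a duplicate.

```python
# Hypothetical sketch of routing posts into one thread per topic.
# Class and method names are illustrative assumptions, not a real forum API.
from collections import defaultdict


class Forum:
    def __init__(self):
        # topic key (e.g., a normalized game title) -> list of posts
        self.threads = defaultdict(list)

    def post(self, topic, message):
        """Attach the post to the existing thread for `topic`; a new
        thread is created only if the topic has never been discussed."""
        key = topic.strip().lower()  # normalize so "Civilization " == "civilization"
        self.threads[key].append(message)
        return key


forum = Forum()
forum.post("Civilization", "Great for learning history and strategy.")
forum.post("civilization ", "Agreed - resource management too.")
print(len(forum.threads))  # both posts land in the same single thread
```

The normalization step is what prevents near-duplicate threads; a real forum would likely also do fuzzy title matching, but the basic routing idea is the same.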

Another nice thing was that the instructors got their hands dirty, as much as they could anyway. Even if they can't be right there in the forums, they do acknowledge the contributions of participants, especially those who help the community, in the introductory videos for each week.  It's also interesting that they got rid of the "down vote" in threads (or so they said) because people didn't like it.  I'd be interested in seeing research on this - specifically on down-voting, up-voting, and their effect on participation.

It was also nice to see the course creators encourage people to take their materials and use them in their own classes. Outside of cMOOCs, I don't know if I've come across anyone in an xMOOC saying that their material is OER and encouraging others to freely use it.  I know that I like the content, and some of the lectures are pertinent to courses that I'd like to propose for my own department, so it's nice to see the option there.  I didn't see an easy way to download the material for my own use, though, so I guess I'll keep looking.

One of the anecdotes shared about kids playing Civilization, I think in one of the lectures by Squire, reminded me of my high school experience with Bolo on networked Apple LC IIs over AppleTalk. This was done after school, and we regularly had at least four teams of five playing for a few hours, along with the head of computer science at the time. There was cooperation among teams, learning the lay of the land, and learning strategy.  For me it was also an opportunity to improve my English, since I had just returned from Greece and needed more practice with the language in a variety of areas.

Finally, what amazed me (and I guess I shouldn't be surprised) is that in the assignments people just didn't follow directions. For instance, there was an assignment to do a text analysis and, at the end, mention where you got the source text (what you analyzed), fill out the following info, and write a little about why you chose the text and what surprised you:
Text resource
The game the text is related to
K1 words: ____%
K2 words: ____%
AWL (academic) words: ___ %
Fry reading level: ___ grade
Many people I saw on the forum just posted their percentages and reading levels, without any other information.  It's like there was no thought process in this at all.  Too bad, because if the data in these fora are used for research, they may not be that useful.  Speaking of research, I really did like that Squire and Steinkuehler invited people to participate in their research as data crunchers, co-authors, editors, and so on.  They haven't figured out how this will work yet, but it's nice to see that such an offer was made.  I think there is something to this collaborative research, as evidenced by the work of my colleagues in the MobiMOOC Research Team, so I may be contacting them to see how I can help.
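For readers unfamiliar with vocabulary profiling, the K1/K2/AWL percentages in that template are just the share of a text's tokens that fall into each frequency band. Here is a minimal sketch of the computation, using tiny toy word lists in place of the real K1/K2/AWL lists (the Fry grade is omitted, since it additionally requires syllable and sentence counts mapped onto the Fry graph):

```python
# Minimal vocabulary-profile sketch. The band sets below are toy stand-ins
# for the real lists (K1 = first 1000 word families, K2 = second 1000,
# AWL = Academic Word List); a real profiler would load full lists from files.
import re


def profile(text, bands):
    """Return the percentage of tokens falling into each band.

    `bands` maps band name -> set of words; each token is counted
    toward the first band (in dict order) that contains it, and
    anything unmatched is counted as "off-list".
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = {name: 0 for name in bands}
    counts["off-list"] = 0
    for tok in tokens:
        for name, words in bands.items():
            if tok in words:
                counts[name] += 1
                break
        else:
            counts["off-list"] += 1
    total = len(tokens) or 1  # avoid dividing by zero on empty text
    return {name: 100.0 * n / total for name, n in counts.items()}


bands = {
    "K1": {"the", "a", "of", "games", "play", "people", "learn"},  # toy lists
    "K2": {"strategy", "team"},
    "AWL": {"analysis", "data"},
}
print(profile("People play games and learn strategy from data analysis", bands))
```

Even a toy run like this shows why posting bare percentages is uninformative: the numbers only mean something alongside the source text and a rationale for choosing it, which is exactly what the assignment asked for.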

If you participated in this MOOC, what did you think?

Wednesday, November 13, 2013

Some Mid-Week #edcmooc thoughts & reactions

Take one blog, mix with others, add own thought
and see what happens
Over the past couple of days I've been reading what fellow participants have contributed to the blogosphere on #edcmooc.  I've watched the week 2 videos (more on that in a post this weekend), and I am slowly reading (or re-reading) some of the food-for-thought articles posted for week two.

To keep things manageable, I decided to devote only 3 days to fellow participants' blog posts, so I can move forward with the other materials as well.  In this blog post I wanted to react to a couple of things I read from fellow participants in the last few days.  First off, we have the Sage on the Stage (SoS). I was reading a short post on why the Sage is likely here to stay.  Interesting post, and it's got a couple of comments; I highly encourage you to read and think about it as well.  The gist of the post, as I read it, is that the Sage on the Stage instructor is here to stay because (1) people want to be instructed and (2) there needs to be teacher presence (see the community of inquiry model).

Now, it's true that in educational or instructional settings, be they higher education, professional education, or athletics, people seek more knowledgeable others in order to gain from their experience.  We tend to associate this with the title "professor" or "instructor," and of course, what these individuals do is "instruct." The fallacy, in my mind, is that we seem to associate instruction with a didactic, "sit down, shut up, and listen" approach.  This isn't the only way to instruct someone. Furthermore, teaching presence means different things to different people.  I take my "traditional" online course as an example.  Usually this course enrolls 15-20 students per semester. There will be students in this course who need more frequent interaction with me as the instructor, and there will be students who won't need as much.  In a regular (15-20 student) environment it's easier to figure this variable out and to address those student needs. In a MOOC, well... not so much.  Just because something doesn't conform to one way of interpreting teaching presence doesn't mean that there isn't teaching presence. So, sure, the Sage isn't going anywhere, but the way we use SoS approaches will vary depending on where we need to apply the technique.

On a related note, I think courses are a good opportunity for students to come out of their comfort zone and expand their horizons a bit.  For example, if you have a student who always strives to get attention, they need to be able to work and learn successfully in environments where they won't get it in the quantities they want.  If students don't like working with peers, they need to stretch to be able to work with peers when necessary.  No one loves group work, because it puts us at a disadvantage: we lose the flexibility of home-court advantage, and we need to communicate with others and negotiate the outcomes. It adds overhead, for sure, but it also adds value to the learning experience by exposing students to a variety of views. This benefits learners in the end, if they are open to such differing views.

The other post that got my brain going this morning was Agata's post on her reactions to Noble.

4. In general, to me, this article misses the point. It gives example of students opposing online education - especially when purchased as a part of obligatory fee. In fact, that is the reason why in 1998 online education could not have had such an impact on learning at it does now. People did not have such easy access to different technological channels of information. Since they has to pay for everyhing - they were against. Since parents had to pay - they were against. It is surprising how invalid the arguments of the article are now - in the times when access to the internet is ubiquitous (thanks to wifi) and most people have twitter and facebook in their phones.
Perhaps the validity of this paper can be kept with reference to the developing countries? I wonder how they relate to the accessibility of online education in Europe and the USA.
I actually brought myself back to 1998, when I was an undergraduate studying computer science.  Back then I did have a really fast 56k modem (wow!), but I didn't get online often from home because it cost a lot and it was slow.  I did most of my browsing and downloading on campus, where we had access to really fast internet.  In those days an LMS really wasn't useful, because we didn't have the ubiquity of access that we tend to have today.  That said, even back then there were technology fees that I paid to my school every semester (or was it every year?) regardless of access to an LMS for my courses.  This technology fee was used to keep the computer labs up to date, to provide for classroom infrastructure, classroom technology, and so on.  Having to reallocate funds from one spending account to another doesn't seem like such a big deal if the technology fee doesn't increase.

The one thing we should keep in mind about developing countries is that they are not what our "developed" countries were 10-20 years ago. One of the things that seems to be big in developing countries is mobile access.  So, if access to mobile broadband is cheap and ubiquitous, that's a really important variable that differs between them now and us then, and it can be used to provide meaningful pedagogical tools enabled by today's technologies. Back in 1998 we only had CSD (circuit switched data) at 9.6 kbps on some mobile phones - not much you could do with that. But if developing nations have access to cheap 2G or 3G mobile networks, technology can play a part in meaningful, technology-enriched pedagogy.

Your thoughts?