#edcmooc Human 2.0? Human+=Human?

Vice Admiral Tolya Kotrolya
Well, here we are! The final(ish) week of #edcmooc.  As I wrote in my tweet earlier this week, I think #edcmooc, like #ds106, is probably more of a way of life for some than an actual course.  Sure, there won't be additional prescribed readings or viewings after this week, so the course is technically over; however, the hallmark of any good course, as far as I am concerned, is that the learners keep thinking about the material long after the course is done.  They keep engaging with the provided materials, and with any new material that they encounter, in critical ways.  In other words, once the course is done we don't shelve and compartmentalize the knowledge gained and leave it to gather dust like an old book on the shelf.

In any case, like previous weeks, here are some thoughts on the hangout, readings, and viewings from this week. To be honest, I don't particularly care about a certificate of completion, but I am interested in designing an artefact as a challenge to myself.  I am predominantly text-based, so doing something non-text-based (or artistic) would push me a bit.  That said, I am not sure if I will be able to do this in one week's time. What do others think about the artefact?  Challenge accepted?

From the Week 3 Hangout

Last week (Week 2 Hangout) the group mentioned a quote, which was repeated (and corrected) this week: "We have a moral obligation to disturb students intellectually" - Grant Wiggins.
This made an impression on me, not because of the content of the quote, but rather because of its succinctness.  The content is something that I, luckily, have been exposed to in my career as a graduate student.  In my first graduate course, an MBA course, Professor Joan Tonn loved to ask us "can you give me a quote?" whenever we claimed something to be true based (supposedly) on what we had read.  This was her way not only to get us to back up our claims with the research we read, but also to get us out of our comfort zone so that we could expand our knowledge and understanding.  I remember my fellow classmates and me scurrying those first few weeks in class to find the page numbers in our books and articles where our supporting points were mentioned, in order to be able to respond to can you give me a quote?

Another example comes from the end of my graduate studies, while I was finishing up my last MA degree.  It seemed that Professor Donaldo Macedo's favorite phrase was can I push you a little? In sociolinguistics he "pushed" us to think a bit more deeply about the readings, to deconstruct what the readings were, and to deconstruct why we held certain beliefs. This isn't always comfortable, thus the initial question can I push you a little (you could always say no, by the way).  In psycholinguistics Professor Corinne Etienne kept us on task; as with Joan Tonn, we needed to quote something and make sure that what we were quoting supported our arguments and answered the posed questions.  This meant that, unlike certain politicians, we couldn't side-step the original question and answer the question that we hoped we would be asked.  These are all examples of disturbing the sleeping giant that is the mind, waking it up to do some work and expand its horizons rather than just responding with some canned, comfortable response.

In my own teaching, I have adopted a devil's advocate persona. I try to disturb the calm waters of the course discussion forums by posing questions that may go a bit against the grain, or questions that probe deeper into students' motivations for answering a certain way. I prefer the title of devil's advocate because, unlike can I push you a little, it doesn't necessarily mean that I hold the beliefs I am asking probing questions about, but rather that I am interested in looking at the other sides of the matter, even if I don't personally agree. I don't participate in many faculty workshops or things like pedagogy chat time, so I often wonder how my peers intellectually disturb students in their own courses in order to get them to expand their thinking.  If any teachers, instructors, or professors are reading this, feel free to share in the comments :)

Another interesting point, mentioned by Christine Sinclair, is that it is difficult to acknowledge the contributions of 22,000 participants in a MOOC.  I have to say that it's difficult to mention all of the contributions of a regular 20-student course in a one-hour time frame! In the courses that I teach I try to do a one-hour recap podcast every week or every other week (depending on how much content is available, and how conducive the content is to an audio podcast), and I have a hard time finding the time to mention everything that is important to mention!  I can't imagine how many hours it would take to read, prepare, and produce a live hangout that covered most of the contributions.  The MOOC Hangout would be like C-SPAN ;-)

Another difficult thing is figuring out who wants to be mentioned and who does not. This is a problem with a regular course of 20 as well. If you have a public podcast about the course, even if it's only "public" to the 20 students enrolled, some students don't want to be named because it makes them uncomfortable.  For my own course podcasts I go back and forth between mentioning names and just mentioning the ideas and topics brought up, acknowledging the contributions of students that way. The people who wrote what I mention in the podcast would know that it was their contribution, and they would (hopefully) feel some sense of acknowledgement, and thus get a sense of instructor presence in the class this way.

From the Videos

In the videos section of #edcmooc this week we had Avatar Days, a film that I had seen before. One of the things that I was reminded of was how much I liked the integration of avatars into real-life environments. It is a bit weird to see a World of Warcraft character on the metro, or walking down the street, but it's pretty interesting visually.



I do like playing computer games, but I am not much of an MMORPG person. I like playing video games with established characters, like Desmond (Assassin's Creed), Sam Fisher (Splinter Cell), Snake (Metal Gear), and Master Chief (Halo). I play for the action, and to move the story forward.  For me these games are part puzzle, part history, part interactive novel.  The only MMORPG I play is Star Trek Online. The reason I got sucked into it was that I like Star Trek, and this is a way to further explore the lore of that universe. I have 3 characters in Star Trek Online (one for each faction of the game), and while the game gives me a spot to create a background story for them, it seems like too much work. I really don't see my characters, one of whom you can see above (Vice Admiral Tolya Kotrolya), as extensions of myself.

Watching Avatar Days again had me thinking: Are avatars an escapist mechanism? A way of getting away from the mundane and everyday? Are they extensions of real life, or someone you would like to be, or whose qualities you'd like to possess? How can we, in education, tap into the desired traits that people see in their avatars to help them move forward and accomplish those things in real life?  For instance, suppose I didn't play by myself most of the time, was really active in my Fleet (the equivalent of a Guild in WoW), and wanted to be the negotiator or ambassador to other fleets; I would guess that I would need some skills in order to qualify for that position. Let's say I continue to hone my skills in that role. Now, being an ambassador in real life is pretty hard (supply/demand and political connections), but can you use those skills elsewhere?  This is an interesting topic to ponder. I wonder how others think of their avatars.



True Skin (above) was also quite an interesting short film. The eye enhancements reminded me a little of Geordi La Forge, the blind engineer on Star Trek: The Next Generation.  These enhancements also made me think a bit of corrective lenses for people with myopia or presbyopia. In a sense, people who need eyeglasses or contacts to see could be considered more than human if we really thought about it from an augmentation perspective. The portrayal in this video just seems foreign, and thus potentially uncomfortable, because eye augmentation that lets you see in the dark, or overlays information on what you see, is out of the norm for us at the time being. Another interesting thing to consider is memory aids.  We use memory aids a lot in our current lives.  Our phones have phone books in them, calendars, to-do lists.  If we don't remember the film that some actress was in, we look it up on IMDB.  I remember about ten or so years ago I had a friend who vehemently opposed any sort of PDA (remember those? ;-) ) because he prided himself on remembering his credit card numbers, phone numbers, and other important information.  Sure, some information is important to remember without needing to look it up; however, when you have memory aids for potentially less important information, such as who portrayed character X in the 1999 remake of movie Y, it frees your mind to remember, and work on, other more important things.  This way you are offloading (a computer term co-opted to describe a biological process) less important information to an external device, leaving the main computer (the brain) free to do other things.

The virtual displays on someone's arm reminded me a lot of the biotic enhancements that can be seen in the Mass Effect series of games (speaking of Mass Effect, the background music in Robbie reminded me of Mass Effect). The thing that really struck me in this video was the quote: "no one wants to be entirely organic."  This is an interesting sentiment, but what happens to the non-organic components when the organic dies? Supposedly the non-organic components cannot function on their own, so where does the id reside, and is it transferable to a data backup, to be downloaded into another body upon the organic components' inevitable death?  The last question about this video is: when will it become a series on TV? ;-)

A quick comment on the Gumdrop video: I loved it!  It reminded me of a BBC series called Creature Comforts (sample video). In that series they recorded humans and matched them with their animal personas (I guess), so the claymation animal was saying what the human had spoken.  Gumdrop could very well be a human voiced by a robot.

Finally, a quick note about the Robbie video. This video was a bit of a rough watch. The first thing that surprised me was that the space station was still in orbit after all those years. I would have assumed that eventual orbital decay would cause it to fall back to Earth and crash. While watching the video I was wondering when it was taking place. I kept thinking "how old would I be in 2032?" and I made the calculation.  Then "how old would I be in 2045?" and I made the calculation, and then Robbie mentions that he (she? it?) has been waiting for 4,000 years. At that point I stopped counting, knowing that I would be long dead when Robbie's batteries died. When the robot mentioned that he had lost contact with Earth, the first thing that came to mind was a scene from Planet of the Apes; specifically the end, where the main character says "Oh my God. You finally really did it, you maniacs, you blew it up." I am not sure what that says about me, but I would surely hope that the reason they weren't responding to this robot wasn't that things had gone sideways on the surface of the planet.

From the Readings

Finally, there were some interesting things that I wanted to point out from the various articles that we had for this final week. In the Transhumanist Declaration there was this belief or stance (italics my own):
Policy making ought to be guided by responsible and inclusive moral vision, taking seriously both opportunities and risks, respecting autonomy and individual rights, and showing solidarity with and concern for the interests and dignity of all people around the globe. We must also consider our moral responsibilities towards generations that will exist in the future.
The thing that stood out to me was the invocation of morality.  I haven't really thought about the nature of morality in quite some time, or rather I haven't had to debate it. That said, I am curious as to whether morality and moral behavior are a standard, or an expected standard, amongst human beings, or whether they fall under the category of "common sense," which, as we know, isn't all that common but is rather made up of the cultural and lived experiences of the person who holds these things as common sense.  Is morality something that is malleable? Or is it a constant?  If it's malleable, what does that say about the expectation to act morally? If you harm or injure someone or something while trying to act morally, does that negate or minimize the fact that you have actually harmed them or stepped all over their rights?

The final article, for me anyway, was Humanism & Post-humanism. Here is something that got the mental gears working:
In addition to human exceptionalism, humanism entails other values and assumptions. First, humanism privileges authenticity above all else. There is a "true inner you" that ought to be allowed free expression. Second, humanism privileges ideals of wholeness and unity: the "self" is undivided, consistent with itself, an organic whole that ought not be fractured. Related to the first two is the third value, that of immediacy: mediation in any form--in representation, in communication, etc.--makes authenticity and wholeness more difficult to maintain. Copies are bad, originals are good. This leads some humanisms to commit what philosophers call the "naturalistic fallacy"--i.e., to mistakenly assume that what is "natural" (whole, authentic, unmediated, original) is preferable to what is "artificial" (partial, mediated, derivative, etc.).
What really got me about this is that in humanism there seems to be no space for the fact that, while we do process things differently, we aren't really 100% unique as individuals.  The old adage of standing on the shoulders of giants goes beyond academic writing.  We are the sum (or sometimes more than the sum) of our experiences, which encompass human relations, education, personal experiences, environmental factors, and many, many more things.  We can be clever, ingenious, and visionary, but we weren't born with all of what we need; we acquired it along the way and it shaped us into who we are.  We can be authentic, but we can't be authentic without other people around. Others both shape us and allow us to show our individuality and elements of authenticity. Thus, while we may not copy verbatim, we do copy in some way, shape, or form, and we remix it into something that makes it "new" rather than a copy.

Furthermore, this whole notion of wholeness is where I saw a connection to Carr's Is Google Making Us Stupid? article. One of the laments (I won't go into everything in this article) seems to be that people skim these days, that they don't engage deeply, because the medium of the web has trained us (or derailed us, as the reading might imply), since there are way too many flashy things on the screen vying for our attention.  I completely disagree.  Even when people had just plain old, non-hypertext books, things kept vying for our attention. If we are not interested in what we are reading, it is more than easy to pick up that comic book, listen and sing along to that song on the radio (or the MP3 player), or call our friends and see if they want to hang out.  Even when you're engrossed in reading traditional, non-hypertext materials, footnotes or endnotes that give you a lot of supplemental information take you out of the flow of your reading.  Deep reading isn't a technology-related issue, but rather a more complicated (in my opinion) endeavor that has to do with reader motivation, the text's relevance to the reader, its formatting and typesetting (ease of reading), and the mechanics and grammar of the text; i.e., the clunkier or "rockier" the text, the more inclined the reader will be to skim it or avoid it altogether.  There are more critiques that I have of the Atlantic article by Carr, but I'll limit it to this one.  Now, back to Humanism & Post-humanism. Another interesting quote (italics my own) is as follows:
most of the common critiques of technology are basically humanist ones. For example, concerns about alienation are humanist ones; posthumanism doesn't find alienation problematic, so critical posthumanisms aren't worried by, for example, the shift from IRL (in-real-life) communication to computer-assisted communication...at least they're not bothered by its potential alienation. Critical posthumanisms don't uniformly or immediately endorse all technology as such; rather, it might critique technology on different bases--e.g., a critical posthumanist might argue that technological advancement is problematic if it is possible only through exploitative labor practices, environmental damage, etc.
This is a pretty interesting thought, that most common critiques of technology are humanist ones.  It reminds me a lot of my Change Management course when I was an MBA student and the children's book Who Moved My Cheese? Well, I saw it as a children's book, but it may not be; it's probably a tale that can be dissected and critiqued quite a lot from a variety of stances. The thing that stood out for me is the worry that technology has the potential to alienate by not having people communicate with one another in established ways. But what about people who are already not communicating well in established ways, and who can use ICT to help assist with communication?  The usual example is students who are generally more timid or laid back in class.  In a face-to-face classroom, which has space and time limits imposed by its very nature, the students who are more outgoing and outspoken might monopolize the course time. This doesn't give learners who are not as outspoken an opportunity to chime in, share their point of view, or share their understanding once they have processed the readings for the course, things that could move the conversation and learning forward in interesting and unforeseen ways.

In an online or blended course, however, learners have affordances that are not there in a strictly face-to-face course.  They have time to chime in, so the conversation can go on longer and more things can be teased out of a discussion topic.  Furthermore, students who aren't as outgoing in the face-to-face classroom have an opportunity to take the microphone (so to speak) and share their thoughts on the subject matter being discussed.  Instead of everyone vying for the limited air time of a face-to-face classroom, ICT has the potential to democratize classroom discussion by providing opportunities for all to contribute.  Technology by itself won't be the panacea that makes this happen, let's not kid ourselves; there are many underlying foundations that need to be in place for students to use the affordances of ICT effectively.  That said, this is a case where ICT has the potential to bring together, not alienate, fellow interlocutors and travelers on the path of learning.

So, what are your thoughts? :) How does Human 2.0 sound?
