Thursday, November 15, 2018

Self-Control still difficult!

Attempt at witty title probably failed :-)

I guess I am a little rusty  with creating meaningful blog titles since I have not been blogging frequently recently.  Oh well. I will get back into the swing of things once I finish my EdD...or not... ;-)

In any case, I am catching up with #el30, more specifically last week's guest Ben Werdmuller (see recording here). Interesting fun fact - Ben is the creator of Elgg, which is the platform that Athabasca University's "Landing" runs on.

There were quite a few interesting things that came out of the conversation, but two really stuck out to me.  The first is that there was a strand of the conversation that dealt with taking back control of your online identity from the various platform providers, such as facebook, google, yahoo/verizon, twitter, and so on.  A lot of what we do, this blog inclusive, rests on someone else's platform.  If the platform decides to cease operation you lose not just your data, but also the connections that are based upon that data.  Take this blog for instance: if google decided to shut down blogger I could lose all of my posts going back to 2008, when I started doing education-related blogging.  I would also lose the connections that I've made through this blog (other people linking to, or reacting to, my writing).  The same is true for things like twitter and facebook.  In some instances services allow you to download your data, but in my experience that's been quite messy in the past.  In most cases what I've gotten is a JSON-formatted file (or set of files), and good luck importing that somewhere it's usable. If you're lucky you might get an offline viewer for your data.  For blogs I've had luck importing from Wordpress into Blogger, and I assume that the converse is true (if google decided to shut down blogger).
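That "good luck importing it" step is usually just a matter of mapping one schema onto another. Here is a minimal sketch, assuming a hypothetical export format — every platform uses its own field names, so the keys below are placeholders, not any real service's schema:

```python
import json

# Hypothetical export structure: many platform exports look roughly like
# this, but every service uses its own schema -- adjust the field names
# to match the actual file you download.
sample_export = '''
[
  {"title": "First post", "published": "2008-03-01", "content": "Hello world"},
  {"title": "Second post", "published": "2008-03-08", "content": "More thoughts"}
]
'''

def export_to_markdown(raw_json):
    """Turn a JSON list of posts into simple Markdown documents."""
    posts = json.loads(raw_json)
    docs = []
    for post in posts:
        docs.append(f"# {post['title']}\n*{post['published']}*\n\n{post['content']}\n")
    return docs

for doc in export_to_markdown(sample_export):
    print(doc)
```

The mapping itself is trivial; the hassle is that each service buries posts, comments, and connections in different (and often undocumented) structures, so a converter like this has to be rewritten per platform.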

I did chuckle a bit at Ben's comment that cPanel looks like something out of the 90s.  I do have a website that I maintain, and the design of it is done in RapidWeaver (a macOS application); I export the HTML and upload it via FTP to the server.  The website is designed to pull data from a variety of sources, including Blogger.  When I have to go into cPanel I cringe a bit.  If I had a little more time on my hands I'd love to set up a Wordpress instance on my site, but I know that I don't have enough time to really dive into it and migrate everything I have into something I control by myself (hence the title of this post: self-control still difficult).  There were other interesting ideas that came up, such as asymmetrical bandwidth issues, the ability to have access to domain-name registration, and even hosting.  So many threads to pull apart and dissect...and so little time.

The second strand that piqued my interest has to do with prototyping.  The discussion about designing prototypes, getting some user feedback, doing some more prototyping, getting some more user feedback, and then coding something really brought me back to my senior year of undergrad, when I was taking a course in designing user interfaces (CS615).   There is a lot of discussion these days (it seems) about getting your hands dirty and getting something done.  But without prototyping something to get a sense of how your initial ideas and concepts work, you could end up trying to solve coding problems that you don't need to bother with at all, because the prototyping stage might indicate that you don't even need to go down that particular path. This also connected well with another comment made (paraphrased): there is no need to start with the universe (aka all the bells and whistles); start with the minimum viable solution.  This was, I feel, an important comment (and sentiment expressed) not just about software development, but about work in general.  I suppose a related sentiment I've heard in the past is: the perfect is the enemy of done. I've seen, over the years, lots of projects fail to even get started because people object to the fact that the new solution isn't at one-to-one parity with the old solution, or that it's just not perfect.  Many potentially interesting paths are never taken because the lack of perfection prevents people from even trying.

Anyway - those are my take-aways from last week.  Looking forward to viewing this week's recording with Maha, and reading some more unboundeq stuff, which I've seen on twitter over the past few months, but I have not had time to dive into it :)

Friday, November 2, 2018

Post-it found! the low-tech side of eLearning 3.0 ;-)

Greetings fellow three-point-oh'ers
(or is it just fellow eLearners?)

This past week in eLearning 3.0 was Week 2, aka 'the cloud'. This week's guest was Tony Hirst, and what was discussed was the cloud, and specifically Docker.  Before I get into my (riveting) thoughts on the cloud, let me go back to Week 0 (two weeks ago) and reflect a little on the thoughts I jotted down on my retrieved post-it note.

So, in the live session a couple of weeks ago (it's recorded if you want to go back and see it), Siemens said something along the lines of "what information abundance consumes is attention". This really struck me as both a big "aha!" as well as a "well, d'uh! why hadn't it occurred to me already? D'oh!". There has been a lot said over the past few years about how people don't read anymore (they skim), and how bad that is.  This ties into "what learners want" (a phrase I've heard countless times on-campus and off), and that tends to be bite-sized info, which leads us to the micro-learning craze.  While micro-learning, or bite-sized learning, has its place, it can't be the end-all-be-all of approaches to learning. When the RSS feed is bursting with around 1700 unread posts (my average day if I don't check it), the effort to really give 100% attention to each item is too much; and part of it is that full articles no longer come over RSS - it's just the title and perhaps the first 250 characters of the article if you're lucky, so the 'click to go to article' is a necessity if you want to read the full thing. Back in the day (ca. 2005) I could actually read most things because my unread count wasn't all that big.  So, as the abundance of data has become a reality, attention deficit seems like a natural connection to that.

Another thing that Siemens said was that before, the messiness of learning was viewed as a distraction from learning, whereas now the making-sense part is the learning (paraphrased). This got me thinking about messiness and not-yet-ness. I agree that messy learning is what college (BA all the way to PhD) should be about, but how does that square with the mandates for learning outcomes and the measurability of those outcomes?  This is particularly pointed at the moment, as this year one department I am affiliated with went through their 'academic quality' review, and my home department is going through ours in early 2019.  Messy works, but how do you sell it to the upper-level admins? Also, how do you sell it to learners who have been enculturated into a transactional model of education?  I don't have the answers, but these are interesting points to ponder and discuss.

Now, on a more geeky or technical side: Docker and the cloud.  As Stephen and Tony were discussing the cloud, it made me think of tinkering as learning, authentic learning, and the aforementioned messiness in learning.   We now have the technology that allows us to spin up fresh instances of a virtual machine with specific configurations.  I've been able to do this with Virtual PC (back before microsoft bought them) on my mac for ages.  It was actually a lot of fun to find old versions of Windows, OS/2, NEXTSTEP, and other operating systems and play around with them on my Mac.  It was a great learning opportunity.  But it wasn't scalable. As a tinkerer I could do this on my own machines, but I couldn't easily distribute what I built.  Now, if I were teaching a course on (insert software), I could conceivably create the 'perfect' environment and have students spin up instances of it to try things out without the need to install anything locally (not sure what licensing looks like in this field, but let's assume it's 'easy' to deal with). Whereas in prior eLearning (elearning 2.0?) the best we could do was limited simulations with Articulate, we can now actually afford to let the learners loose on a real, live, running instance of what they are learning.  When they are done, they can just scrap the instance.  Even if you needed to run the instance for an entire semester non-stop (15 weeks), that would still only cost the learner around $80.  Not bad!  The best thing about this?  You can freely mess around, and if you break something (irreparably), start from scratch!
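For the curious, here's the back-of-the-envelope math behind that ~$80 figure. The hourly rate is my assumption — small cloud instances commonly run somewhere in the $0.01–$0.05/hour range depending on the provider — so treat this as a sketch, not a quote:

```python
# Back-of-the-envelope cost of running one small cloud instance non-stop
# for a 15-week semester. The hourly rate is an assumption -- actual
# pricing varies by provider and instance size.
HOURLY_RATE = 0.032          # assumed USD per hour
HOURS = 15 * 7 * 24          # 15 weeks, running non-stop

cost = HOURLY_RATE * HOURS
print(f"{HOURS} hours x ${HOURLY_RATE}/hr = ${cost:.2f}")  # 2520 hours -> ~$80.64
```

And of course most learners wouldn't run the instance 24/7, so the realistic cost would be a fraction of even that.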

Anyway, those are my thoughts on this week on eLearning 3.0 - what are your AHA moments?

Friday, October 26, 2018

eLearning 3.0: How do I show my expertise?

With my dissertation proposal in the hands of my committee and off for review, I thought I'd participate in a MOOC while I wait to hear back.  Yes, I do have some articles that have piled up (which may be of use to my dissertation), but I thought I'd be a little more social (lurk a little, post a little).  The funny thing is that as soon as I lamented the lack of cMOOCs...there it was, eLearning 3.0 popped up on my twitter feed...and a few Greek colleagues invited me to one in a Moodle. I guess the universe provided for me.

Anyway - I had listened to both the intro video (week 0?) as well as the Downes & Siemens chat (Week 1 & 2), and I had jotted down a few things that piqued my interest...but of course I left them in the office. I guess I'll be blogging about those next week.  The freshest thing in my mind is the chat about xAPI and the LRS (Learning Record Store). In all honesty this went a little over my head. I think I need to read a little more about xAPI and this whole ecosystem, but the LRS is described as enabling "modern tracking of a wide variety of learning experiences, which might include capturing real world activities, actions completed in mobile apps or even job performance. Data from these experiences is stored in the LRS and can be shared with other systems that offer advanced reporting or support adaptive learning experiences".
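From what I've read so far, the unit the LRS stores is the xAPI "statement", a JSON object in actor–verb–object form. Here's a minimal sketch of one; the verb URI follows the spec's convention, but the names and activity URL are placeholders I made up, not a real LRS record:

```python
import json

# A minimal xAPI statement: actor-verb-object. The verb ID follows the
# spec's URI convention; the learner name, email, and activity URL are
# placeholders for illustration only.
statement = {
    "actor": {
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/workshops/social-media-strategy",
        "definition": {"name": {"en-US": "Social Media Strategy Workshop"}},
    },
}

# An LRS would receive statements like this as JSON over HTTP and store
# them for later reporting or adaptive-learning use.
print(json.dumps(statement, indent=2))
```

The appeal is that "completed" could just as easily be "attended", "read", or "presented" — which is what lets an LRS capture experiences an LMS gradebook never sees.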

This got me thinking about the onus (read: hassle) of tracking down your learning experiences as a learner. I also credit a tweet about credentialing by Donna Lanclos, which I read this morning and which really connects well with this. As a learner I don't really care about tracking my own learning experiences. I participate in a learning experience, be it a workshop, a webinar, a course of study, doing research for a paper to be published or presented, or even sustained interaction on a common topic across my PLN.  I enter the learning experience because there is something I want to learn. It can be a simple thing (e.g., how to unscrew the case of my PC tower to install more RAM), or something more complicated (e.g., preparing a social media strategy for your organization). Few people enter a learning experience just to get a credential†. However, it's the credential that opens doors, be they doors to a promotion, to a new job, or even to an opportunity to be part of an exciting new project. So it seems necessary that we, as learners and professionals, document all this in some way.  The problem is that it's a hassle. There are two big issues here:
(1) What to track (i.e., what's relevant)
(2) Where to track it?

Both issues, very predictably, are answered with "it depends".  What to track depends on the context. You can track everything, but not everything tracked is used in every potential instance where credentialing information is needed. For example, the most commonly tracked things are your college degrees.  These are fairly easy to track because most of us have a small, countable number of them (1-3, I'd estimate). However, this doesn't necessarily show growth and increasing expertise as a professional.  So we delve deeper.  Just taking myself as an example, here are some learning opportunities that I have been part of over the past few years (some offer certificates or badges, some do not):  MOOCs, week-long workshops, day-long workshops, conferences, professional development webinars, self-paced elearning, required workshops on campus (e.g., campus compliance, purchasing, etc.), masters and doctoral degree programs, virtually connecting sessions, and so on. Each format is different.  Some have assessments, some do not. Some are mandatory, some are not. They all contribute to my knowledge of my field.

Tracking is another issue.  Where do I track things?  There are many places.  I have a resume - which is out of date, and I can't even find the Word document any longer... I have a CV in Word format which I created this year for work purposes; there is LinkedIn; there is ORCID; and there are document repository networks like Mendeley, ResearchGate, Academia.edu, Scribd, and SlideShare; in addition to places where you can help folks with their questions, like Quora. There is goodreads to track what you read. There are places to track your digital badges, like the Open Badge backpack. I had once actually joined a free service, whose name escapes me at the moment, that was so granular that it could track articles you read - you tagged them with specifics (e.g., elearning, instructional design, online learning), and the service would add 'credit' to your profile for those things★.

So as not to belabor the point: over the years I've come across a variety of learning situations where I've had learning experiences.  Some came with a nice shiny certificate at the end, others with just warm fuzzy feelings of accomplishment. How do we automate this multiple-in, multiple-out process so that we can actually track things with more precision, but also have the ability to spit out as many customizable reports as we need for credentialing purposes?  I don't know about you, but I find myself not having enough time to document everything, and I certainly don't keep things like CVs, resumes, and my LinkedIn profile updated frequently.  I think this will be one key challenge in eLearning 3.0.
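The multiple-in, multiple-out idea can be sketched in a few lines: one store of records, many filtered views of it. Everything here is hypothetical — the field names and the two sample records are made up for illustration:

```python
from dataclasses import dataclass, field

# A hypothetical sketch of "multiple-in, multiple-out": record every
# learning experience once, then generate different views of the same
# data for different credentialing audiences.
@dataclass
class LearningExperience:
    title: str
    kind: str                      # e.g. "MOOC", "workshop", "webinar"
    year: int
    tags: list = field(default_factory=list)
    credential: str = None         # certificate/badge earned, if any

def report(experiences, tag=None, credentialed_only=False):
    """One customizable 'out': filter the single record store by audience needs."""
    out = experiences
    if tag:
        out = [e for e in out if tag in e.tags]
    if credentialed_only:
        out = [e for e in out if e.credential]
    return [f"{e.year}: {e.title} ({e.kind})" for e in out]

# Two made-up entries standing in for years of workshops, MOOCs, etc.
log = [
    LearningExperience("eLearning 3.0", "MOOC", 2018, ["elearning"], None),
    LearningExperience("Compliance Training", "workshop", 2018, ["campus"], "certificate"),
]
print(report(log, tag="elearning"))
```

The data model is the easy part; the hard part (as the post-it-losing evidence above suggests) is getting the records in without manual effort — which is exactly the gap something like xAPI is supposed to fill.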

Thoughts?



Marginalia:
† well, it's my hypothesis that most people enter a learning experience for the learning and not just the certificate/diploma/badge that comes at the end. I do know that there are people like that around, but I think they are not the majority.
★ Tracking every Chronicle and IHE article I read got tiring pretty quickly - I read too many articles in a day for manual input to be feasible. I dis-enrolled from that social service within a few days ;-)

Monday, October 22, 2018

Bat-signal for an External Committee Member!

Well, my proposal (basically half my dissertation) is off to the internal members of my committee. Many thanks go to my doctoral supervisors, who've asked a lot of questions of my previous drafts and helped me refine my writing :-)

Now the next step (assuming the committee likes my submitted draft) is to both find an external reviewer for this, and also defend it so that I can move onto the next phase: data collection and analysis.

Where do you come in? I need recommendations for an external member to my committee :-) If we've worked together in the past 5 years you would not be eligible to be on the committee, but if you know people who might be good, let me know :-)



Requirements for external committee member

Retrieved from: http://fgs.athabascau.ca/handbook/doctoral/candidacy.php
Also committee member criteria: http://fgs.athabascau.ca/handbook/doctoral/supervisors_and_committee_members.php

  • At least one of the new members must be at arm's length from the student and the proposal development (this is the external member)
  • be active in the general area of the student's research
  • have a tenured (or tenure track) faculty appointment
    • If no tenure track person is identified, there is an 'other' category that the Faculty of Graduate Studies could approve. See the link for details. 
  • hold a degree equivalent to or higher than that for which the student is a candidate
  • demonstrate continuing scholarly or creative activity of an original nature as defined in item 3.7.3.b. of the AUFA Collective Agreement.
  • The proposed examination committee members must meet the eligibility criteria, and must not be in a position of conflict of interest (direct link to AU policy: http://ous.athabascau.ca/policy/humanresources/150_002.pdf)

My proposal details

(to better inform any recommendations you might have :) )

Title: Factors influencing the initiation and sustained engagement in collaboratives working outside MOOC parameters: an exploratory mixed methods case study

Abstract: This dissertation research will explore the factors for which individuals in an open educational environment choose to create, or join, collaboratives that produce certain mutually agreed-upon deliverables, and the factors that sustain individuals through this collaborative endeavor. As such, some of these factors may deal with characteristics and experiences that define such collaboratives, and with what members of these collaboratives perceive as a gain from their involvement in such collaborative endeavors. The approach to researching this topic will be an explanatory parallel mixed methods case study design that will initially explore quantitative results from the Community of Inquiry instrument as well as qualitative results gathered from an open-ended survey.  Survey participants will be invited to participate in subsequent interviews in order to explore the question in more depth. A better understanding of why such collaboratives form, and what sustains them, might provide clues as to how such collaborative formations may be encouraged, or nurtured, in online learning.



Thank you in advance for your help in identifying potential externals :-)



Saturday, July 28, 2018

Community of Inquiry: TeachING not teachER presence

Hey there blogger audience! Well, I assume someone is still there despite not having blogged in a great while. It's hard to believe that July is almost over, and there is only one more month of summer left (😢). Things have been fairly busy, between teaching INSDSG 684, doing a much (much) deeper dive into the CoI, and rewriting my intro chapter for the dissertation proposal†, there has been little time to blog.  Or rather, I guess I could have blogged, but due to my disconnect from my regular communities of practice, nothing really seemed worthwhile writing about.  Until now!

So, back when I was initially contemplating my dissertation topic I thought I'd do a mixed methods research study, possibly with the CoI instrument as the quantitative component.  I nixed that idea early on because I honestly thought that I would get someone who's a stickler for the notion that quantitative must equal generalizability, and I know that from my sample (even if everyone participates), generalizability isn't attainable. Good description is, but not generalizability.  So I switched to qualitative-only.  After a good discussion with one of my co-supervisors (where my fears were put to rest 😊) the issue of the CoI came up again (not by me).  This was the third time CoI was brought up (first by me, then by one co-supervisor in 2017, then another in 2018). I figured that it would be worthwhile to pursue mixed methods again‡.

The next big decision was which elements of CoI to measure for our rhizocollabs.  Social (✓), Cognitive (✓), Teaching (?). How about some of the proposed extensions (?). Social and Cognitive seemed like a no-brainer.  TeachING presence, defined as "the design, facilitation, and direction of cognitive and social processes for the purpose of realizing personally meaningful and educationally worthwhile learning outcomes"♠, seems important as a coordinating function, and it emphasizes the doing, not the doer, so measuring some aspect of coordination in these collaboratives (be they "swarms" or not) seems important.  Garrison and others also point out (many, many, times) that it's teachING presence, not teachER presence, and that students can exhibit such teachING presence as well in a CoI.  But when one looks at the CoI survey instrument, all questions regarding teachING presence focus on the instructor. Hmmmmm😖. When doing transcript analysis I can see being able to identify instances of teaching presence amongst non-instructor members of a CoI, but the instrument seems to focus a ton on the instructor.  I've decided to try to measure teaching presence in our collaborations, but I'll be tweaking the CoI instrument questions in this category to be more group-oriented rather than teacher- and instructional-design-related.

Between the notes from the articles I read, and the notes from books on CoI, I've got around 40 pages of notes.  Over the next week or so I'll go over them and write a draft of the CoI section for my literature review.  Once I get the all-clear from my co-supervisors for my intro chapter it's full steam ahead to tweak the literature review, which is gargantuan.

Onwards and upwards!




Marginalia:
† I managed to trim five whole pages from the intro chapter while adding, what I hope is, much more detail about what I'd like to do.
‡ Hence the intro re-write, and the much deeper dive into CoI. I am actually glad this happened because through reading more of Garrison's work there is a connection between collaboration & CoI.
♠ See here for more on TP.