Thursday, November 15, 2018

Self-Control still difficult!

Attempt at witty title probably failed :-)

I guess I am a little rusty with creating meaningful blog titles since I have not been blogging much recently. Oh well. I will get back into the swing of things once I finish my EdD...or not... ;-)

In any case, I am catching up with #el30, more specifically last week's guest Ben Werdmuller (see recording here). Interesting fun fact - Ben is the creator of Elgg, which is the platform that Athabasca University's "Landing" runs on.

There were quite a few interesting things that came out of the conversation, but two really stuck out to me. The first is that a strand of the conversation dealt with taking back control of your online identity from the various platform providers, such as facebook, google, yahoo/verizon, twitter, and so on. A lot of what we do, this blog included, rests on someone else's platform. If the platform decides to cease operation you lose not just your data, but also the connections that are based upon that data. Take this blog for instance: if google decided to shut down blogger I could lose all of my posts going back to 2008, when I started doing education-related blogging. I would also lose the connections that I've made through this blog (other people linking to, or reacting to, my writing). The same is true for things like twitter and facebook. In some instances services allow you to download your data, but in my experience that's been quite messy in the past. In most cases what I've gotten is a JSON-formatted file (or set of files), and good luck importing that into somewhere it's usable. If you're lucky you might get an offline viewer for your data. For blogs I've had luck importing from Wordpress into Blogger, and I assume the converse is true (if google decided to shut down blogger).
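To make the "good luck importing that" point a bit more concrete, here is a minimal sketch of what I end up doing with those exports: flattening a JSON dump into plain-text files that at least remain readable without the platform. The file path and the field names ("title", "published", "content") are placeholders I've made up for illustration; real exports from Blogger, Twitter, or Facebook each have their own schema, so you'd adjust after inspecting the actual file.

```python
import json
from pathlib import Path

# Hypothetical example: flatten a JSON export into readable text files.
# Field names below are placeholders; real platform exports differ.
export_file = Path("takeout/posts.json")   # placeholder path
out_dir = Path("readable_posts")
out_dir.mkdir(exist_ok=True)

posts = json.loads(export_file.read_text(encoding="utf-8"))

for i, post in enumerate(posts):
    title = post.get("title", f"untitled-{i}")
    date = post.get("published", "unknown-date")
    body = post.get("content", "")
    # One plain-text file per post, so the writing survives even if the
    # platform (and its offline viewer) goes away.
    out_path = out_dir / f"{date}-{i:04d}.txt"
    out_path.write_text(f"{title}\n{date}\n\n{body}\n", encoding="utf-8")

print(f"Wrote {len(posts)} posts to {out_dir}/")
```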

I did chuckle a bit at Ben's comment that cPanel looks like something out of the 90s. I do have a website that I maintain; the design is done in RapidWeaver (a MacOS application), then I export the HTML and upload it via FTP to the server. The website is designed to pull data from a variety of sources, including Blogger. When I have to go into cPanel I cringe a bit. If I had a little more time on my hands I'd love to set up a Wordpress instance on my site, but I know that I don't have enough time to really dive into it and migrate everything I have into something I control by myself (hence the title of this post: self-control still difficult). There were other interesting ideas that came up, such as asymmetrical bandwidth issues, the ability to have access to domain-name registration, and even hosting. So many threads to pull apart and dissect...and so little time.
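For the curious, that export-then-FTP step can even be scripted so cPanel stays out of the picture entirely. Here is a small sketch using Python's standard ftplib; the host, credentials, and folder names are placeholders, and I'm assuming a flat export folder (no nested directories) just to keep it short.

```python
from ftplib import FTP
from pathlib import Path

# Sketch of the export-then-upload workflow described above.
# Host, credentials, and paths are placeholders, not real values.
LOCAL_EXPORT = Path("rapidweaver_export")   # hypothetical local export folder
REMOTE_DIR = "public_html"                  # typical cPanel web root

with FTP("ftp.example.com") as ftp:                 # placeholder host
    ftp.login(user="username", passwd="password")   # placeholder credentials
    ftp.cwd(REMOTE_DIR)
    for file in LOCAL_EXPORT.iterdir():
        if file.is_file():
            with file.open("rb") as fh:
                # STOR uploads the file to the server under the same name.
                ftp.storbinary(f"STOR {file.name}", fh)
            print(f"uploaded {file.name}")
```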

The second strand that piqued my interest has to do with prototyping. The discussion about designing prototypes, getting some user feedback, prototyping some more, getting more feedback, and only then coding something really brought me back to my senior year as an undergraduate, when I was taking a course in designing user interfaces (CS615). There is a lot of discussion (it seems) these days about getting your hands dirty and getting something done, but without prototyping something to get a sense of how your initial ideas and concepts work, you could end up trying to solve coding problems you never needed to bother with, because the prototyping stage might show that you don't even need to go down that particular path. This also connected well with another comment made (paraphrased): there is no need to start with the universe (aka all the bells and whistles); start with the minimal viable solution. This was, I feel, an important comment (and sentiment) not just about software development, but about work in general. A related sentiment I've heard in the past: the perfect is the enemy of done. I've seen, over the years, lots of projects fail to even get started because people object to the fact that the new solution isn't at one-to-one parity with the old solution, or that it's just not perfect. Many potentially interesting paths are never taken because the lack of perfection prevents people from even trying.

Anyway - those are my take-aways from last week. Looking forward to viewing this week's recording with Maha, and to reading some more unboundeq stuff, which I've seen on twitter over the past few months but have not had time to dive into :)

Friday, November 2, 2018

Post-it found! the low-tech side of eLearning 3.0 ;-)

Greetings fellow three-point-oh'ers
(or is it just fellow eLearners?)

This past week in eLearning 3.0 (Week 2, aka 'the cloud'), the guest was Tony Hirst, and the discussion was about the cloud, and specifically Docker. Before I get into my (riveting) thoughts on the cloud, let me go back to Week 0 (two weeks ago) and reflect a little on the thoughts I jotted down on my retrieved post-it note.

So, in the live session a couple of weeks ago (it's recorded if you want to go back and see it), Siemens said something along the lines of "what information abundance consumes is attention". This really struck me as both a big "aha!" and a "well, d'uh! why hadn't it occurred to me already? D'oh!". There has been a lot said over the past few years about how people don't read anymore (they skim), and how bad that is. This ties into "what learners want" (a phrase I've heard countless times on-campus and off), and that tends to be bite-sized info, which leads us to the micro-learning craze. While micro-learning, or bite-sized learning, has its place, it can't be the end-all-be-all of approaches to learning. When the RSS feed is bursting with around 1700 unread posts (my average day if I don't check it), the effort to really give 100% attention to each item is too much. Part of it is that full articles no longer come over RSS - it's just the title and perhaps the first 250 characters of the article if you're lucky, so clicking through to the article is a necessity if you want to read the full thing. Back in the day (ca. 2005) I could actually read most things because my unread count wasn't all that big. So, as the abundance of information has become a reality, a deficit of attention seems like its natural companion.
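Just to illustrate the "title plus a snippet" problem, here is a tiny sketch using the third-party feedparser library: all you really get per item is a headline and a short summary, which is exactly the skim-first experience described above. The feed URL is a placeholder, and the summary text may well contain HTML markup in practice.

```python
import feedparser  # third-party: pip install feedparser

# Print each feed item as a title plus a ~250-character snippet -
# roughly what a truncated feed gives you to decide on anyway.
FEED_URL = "https://example.com/feed.xml"  # placeholder feed
SNIPPET_LEN = 250

feed = feedparser.parse(FEED_URL)

for entry in feed.entries:
    title = entry.get("title", "(no title)")
    summary = entry.get("summary", "")
    # Enough to decide whether to click through, not enough to read.
    snippet = summary[:SNIPPET_LEN]
    print(f"- {title}\n  {snippet}...\n")
```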

Another thing that Siemens said (paraphrased) was that before, the messiness of learning was viewed as a distraction from learning, whereas now the making-sense part is the learning. This got me thinking about messiness and not-yet-ness. I agree that messy learning is what college (BA all the way to PhD) should be about, but how does that square with the mandates for learning outcomes and the measurability of those outcomes? This is particularly pointed at the moment, as this year one department I am affiliated with went through their 'academic quality' review, and my home department is going through ours in early 2019. Messy works, but how do you sell it to the upper-level admins? Also, how do you sell it to learners who have been enculturated into a transactional model of education? I don't have the answers, but these are interesting points to ponder and discuss.

Now, on a more geeky or technical side: Docker and the cloud. As Stephen and Tony were discussing the cloud, it made me think of tinkering as learning, authentic learning, and the aforementioned messiness in learning. We now have the technology that allows us to spin up fresh instances of a virtual machine with specific configurations. I've been able to do this with Virtual PC (back before microsoft bought them) on my mac for ages. It was actually a lot of fun to find old versions of Windows, OS/2, NEXTSTEP, and other operating systems and play around with them on my Mac. It was a great learning opportunity. But it wasn't scalable. As a tinkerer I could do this on my own machines, but I couldn't distribute it easily. Now, if I were teaching a course on (insert software), I could conceivably create the 'perfect' environment and have students spin up instances of it to try things out without the need to install anything locally; I'm not sure what licensing looks like in this field, but let's assume it's 'easy' to deal with. Whereas in prior eLearning (elearning 2.0?) the best we could do was limited simulations with Articulate, now we can actually afford to let learners loose on a real, live, running instance of what they are learning. When they are done, they can just scrap the instance. Even if you needed to run the instance for an entire semester non-stop (15 weeks), that would still only cost the learner around $80. Not bad! The best thing about this? You can freely mess around, and if you break something (irreparably), start from scratch!
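For what it's worth, the math on that $80 roughly holds: 15 weeks non-stop is about 2,520 hours, so at something like $0.03/hour for a small cloud instance (a rate I'm assuming, not quoting from the session) you land around $75. And just to make the "spin it up, break it, scrap it" idea concrete, here is a minimal sketch using the Docker SDK for Python; the image name, container name, and command are placeholders rather than anything from the course, since a real course image would be prepared by the instructor with the software pre-configured.

```python
import docker  # third-party: pip install docker (Docker SDK for Python)

# Sketch of a disposable course environment: spin up a fresh container,
# let the learner break it, then throw it away and start clean.
client = docker.from_env()

# Start a throwaway container from a stand-in image.
container = client.containers.run(
    "python:3.11",                # placeholder for a prepared course image
    command="sleep infinity",     # keep it alive for the learner to poke at
    detach=True,
    name="student-sandbox-01",    # placeholder name
)

print(f"Sandbox running: {container.short_id}")

# When the learner is done (or has broken things irreparably),
# scrap it; the next run starts from the same clean image.
container.remove(force=True)
```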

Anyway, those are my thoughts on this week on eLearning 3.0 - what are your AHA moments?