Friday, November 28, 2014

DALMOOC episode 8: Bureau of pre-learning

I see a lot of WTF behavior from learners. This is bad... or is it?
Oh hey!  It's week 6 in DALMOOC and I am actually "on time" this time!  Even if I weren't, it's perfectly OK, since there are cohorts starting all throughout the duration of the MOOC (or so I suspect), so whoever is reading this: Hello!

This week the topic of DALMOOC is behavior detectors (a type of prediction model).  Behavior detection means inferring models from the data collected in the system, or set of systems, that we discussed in previous weeks (like the LMS, for example).  Some of the behaviors detected are off-task behaviors, such as playing Candy Crush during class or doodling when you're supposed to be solving for x. Others are gaming the system, disengaged behaviors, careless errors, and WTF behaviors (without thinking fastidiously?  or...work time fun? you decide ;-) ). WTF behavior is working within the system but not on the task specified.  As I was listening to the videos this week and thinking about gaming behaviors‡, it struck me that not all gaming behavior is bad.  If I am stuck in a system, I'm more apt to game it so that I can move on, and try to salvage any learning, rather than just stay stuck and say eff-it-all.  I wonder what others think about this.

Some related problems to behavior detectors are sensor-free affect detection of boredom, fun, frustration, or delight.  Even with sensors, I'd say that I'd have problems identifying delight. Maybe my brain looks a certain way in an MRI machine when I get a sense of delight, but as a human this is a concept that I would find hard to pin down.

Anyway - another thing discussed this week is Ground Truth. The idea is that all data is going to be noisy, so there won't be one "truth", but there is "ground truth". I guess the idea here is that there is no one answer to life, the Universe and everything, so we look at our data to determine an approximation of what might be going on.   Where do you get data for this? Self-reports from learners, field observations§, text analysis, and video coding. The thing I was considering (and I think this was mentioned) is that self-reporting isn't that great for these behaviors; after all, most of us don't want to admit that we are gaming the system or doing something to subvert it. Some people might just do it because they don't care, or because they think that your exercise is stupid and they will let you know, but most, I think, would care what others think, and might have some reverence for the instructor, which prevents them from accurately self-reporting.

One of the things that made me laugh a bit was an example given of a text log file where the system told the learner that he was wrong, but in a cryptic way. This reminds me of my early MS-DOS days, when I was visiting relatives who had Windows 3.1 (for Workgroups!) and I was dumped from the GUI to a full-screen DOS environment.  I didn't know any commands, so I tried natural language commands...and I got the dreaded "error, retry, abort", and typing any of those three words (or any combination of them) did not work. Frustration! I thought I had broken the computer and no one was home!

Another thing that came to mind with these data collection methods is the golden triangle (time, quality, cost).  Not every data collection method is equal to the others. For instance, video coding is the slowest, but it is replicable and precise.

Moving along, we talked a bit about Feature Engineering (aka rational modeling, aka cognitive modeling), which is the art of creating predictor variables. This is an art because it involves lore more than well-defined principles. It is also an iterative process.  Personally I was ready to write this off, but the art and iteration aspect is something that appeals to me rather than just cold hard black boxes. The idea is that you go for quantity at first, not quality, and then you iterate forward, further refining your variables.  Just like in other projects and research you can build off the ideas of others; there are many papers out there on what has worked and what hasn't (seems like advice I was also given at my EDDE 801 seminar this past summer).  Software you can use for this process includes Excel (pivot tables, for example) and OpenRefine (previously Google Refine). A good thing to remember is that feature engineering can over-fit, so we're going back to last week where we said that everything over-fits to some extent.
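To make "creating predictor variables" a bit more concrete for myself, here is a minimal sketch in Python (with pandas) of rolling a hypothetical tutor log up into per-student features. The file name, column names, and chosen features are all my own assumptions for illustration, not anything prescribed in the course.

    # Minimal feature-engineering sketch over a hypothetical log file with
    # one row per student action: student_id, timestamp, correct, hints_used.
    import pandas as pd

    log = pd.read_csv("tutor_log.csv", parse_dates=["timestamp"])

    # Roll raw actions up into per-student predictor variables (features).
    features = log.groupby("student_id").agg(
        n_actions=("timestamp", "size"),     # how much the student did
        pct_correct=("correct", "mean"),     # overall accuracy
        total_hints=("hints_used", "sum"),   # a possible gaming-the-system signal
    )

    # Time between consecutive actions, a common ingredient in off-task detectors.
    log = log.sort_values(["student_id", "timestamp"])
    log["gap_sec"] = log.groupby("student_id")["timestamp"].diff().dt.total_seconds()
    features["median_gap_sec"] = log.groupby("student_id")["gap_sec"].median()

    print(features.head())

The point of the sketch is the iteration: you throw a bunch of these features together first (quantity), then keep refining and pruning them (quality) as the model gets built.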

Finally we have diagnostic metrics. My eyes started glazing over a bit with this.  I think part of it was that I didn't have my own examples to work with, so it was all a bit abstract (which is fine). I am looking forward to the spring 2015 Big Data in Education MOOC to go a bit more in depth with this.  So what are the diagnostic metrics mentioned? (might need a more detailed cheat-sheet for these)
  • ROC -- Receiver Operating Characteristic curve, good for a two-value prediction (on/off, true/false, etc.)
  • A' -- related to ROC - the probability that, if the model is given one example from each of two categories, it can identify which is which.  A' is more difficult to compute than kappa and only works with two categories, but it is easy to interpret statistically.
  • Precision -- probability that a data point classified as true is really true
  • Recall -- probability that a data point that is actually true gets classified as true (there's a small code sketch after this list of how these get computed)
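Since I learn these better by poking at them, here is a tiny sketch of computing precision, recall, and the area under the ROC curve (closely related to A') with scikit-learn; the labels and probabilities are made up purely for illustration.

    # Toy classifier-metric sketch with made-up labels and predictions.
    from sklearn.metrics import precision_score, recall_score, roc_auc_score

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]                   # ground-truth labels
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]                   # hard 0/1 predictions
    y_prob = [0.9, 0.2, 0.8, 0.4, 0.3, 0.6, 0.7, 0.1]   # predicted probabilities

    print("precision:", precision_score(y_true, y_pred))  # P(really true | classified true)
    print("recall:   ", recall_score(y_true, y_pred))     # P(classified true | really true)
    print("ROC AUC:  ", roc_auc_score(y_true, y_prob))    # uses the probabilities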

We also covered Regressors such as:
  • Linear Correlation -- if X's values change, do Y's values change as well?  Correlation is vulnerable to outliers.
  • R-squared -- correlation squared. also a measure of what percentage of variance in dependent measure is explained by a model.  Its usage depends on which community has really adopted it.
  • Mean Absolute Error (MAE) -- tells you the average amount by which the predictions deviate from the actual values
  • Root Mean Squared Error (RMSE) -- does the same but penalizes large deviations more heavily (see the companion sketch below)
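And a matching sketch for the regressor metrics, again on made-up numbers, just to see all the pieces side by side:

    # Toy regressor-metric sketch with made-up actual/predicted values.
    import numpy as np
    from scipy.stats import pearsonr

    actual    = np.array([70.0, 85.0, 60.0, 90.0, 75.0])
    predicted = np.array([72.0, 80.0, 65.0, 88.0, 70.0])

    r, _ = pearsonr(actual, predicted)
    print("linear correlation r:", r)
    print("r-squared:           ", r ** 2)
    print("MAE: ", np.mean(np.abs(predicted - actual)))
    print("RMSE:", np.sqrt(np.mean((predicted - actual) ** 2)))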

Finally, there are different types of validity (this brings me back to my days in my first research methods course):
  • Construct validity -- Does your model measure what it says it measures?
  • Predictive validity -- Does your model predict the future as well as the present?
  • Substantive validity -- Do the results matter? (or as Pat Fahy would say "so what?" )
  • Content Validity -- Does the test cover the full domain it's meant to cover?
  •  Conclusion validity -- Are conclusions justified based on the results?

So, that was week 6 in a nutshell.  What stood out for you all?


SIDENOTE:
† Image from movie Minority Report (department of precrime)
‡ Granted, I need to go and read more articles on gaming behaviors to know all the details, this was just an initial reaction.
§ There is a free android app for Field Observations that they've developed

Tuesday, November 25, 2014

DALMOOC episode 7: Look into your crystal ball

Whooooa! What is all this?


Alright, we're in week six of DALMOOC, but as usual I am posting a week behind.  In previous weeks I was having a ton of fun playing with Gephi and Tableau. Even though the source material wasn't that meaningful to me, I was having fun exploring the potential of these tools for analytics. This week we got our hands on RapidMiner, a free(mium) piece of software that provides an environment for machine learning, data mining and predictive analysis.

Sounds pretty cool, doesn't it?  I do have to say that the drag-and-drop aspect of the application does make it ridiculously easy to quickly put together some blocks to analyze a chunk of data. The caveat is that you need to know what the heck you are doing (and obviously I didn't ;-) ).  I was having loads of issues navigating the application: I somehow managed to lose some windows that I needed in order to input information, and I couldn't find where the functions I needed lived...  Luckily one of my colleagues, who is actually working on machine learning, was visiting and was able to give me a quick primer on RapidMiner - crisis averted.  I did end up attempting the assignment on my own, but I wasn't getting the right answer.  With other things to do, I gave up on the optional assignment ;-)

With that software experience this past week, what is the use of prediction modeling in education? Well (if you can get your software working ;--)  ), the goal is to develop (and presumably use) a model which can infer something (a predicted variable) from some combination of other aspects of data that you have on hand (a.k.a. predictor variables).  Sometimes this is used to predict the future, and sometimes it is used to make inferences about the here and now. An example of this might be using a learner's previous grades in courses as predictors for future success.  To some extent this is what SATs and GREs are (and I've got my own issues with these types of tests - perhaps something for another post).  The key thing here is that there are so many variables in predicting future success. It is not just about past grades, so take that one with a grain of salt.

Something that goes along with modeling is Regression: you use this when there is something you want to predict and it is numerical in nature. Examples of this might be the number of student help requests, how long it takes to answer questions, how much of an article was read by a learner, prediction of test scores, etc. A regressor is a number that predicts another number.  Training a model means using data for which you already know the answers to build the model, so the algorithm has something to learn from.

There are different types of regressions.  A linear regression is flexible (surprisingly so, according to the video), and it's a speedster.  It's often more accurate than more complex models (especially once you cross-validate). It's also feasible to understand your model (with some caveats).
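For my own benefit, here's what "training" a simple linear regressor might look like in code, using scikit-learn on invented numbers (number of help requests predicting a quiz score); the data and variable names are mine, not the course's.

    # Minimal linear-regression training sketch with invented data.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Predictor variable: help requests; predicted variable: quiz score.
    help_requests = np.array([[0], [1], [2], [3], [5], [8]])
    quiz_scores   = np.array([95, 90, 85, 80, 70, 55])

    model = LinearRegression().fit(help_requests, quiz_scores)  # the "training" step
    print("predicted score for 4 help requests:", model.predict([[4]]))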

In watching the videos last week, some examples of regression algorithms I got conceptually, from a logic perspective, but some just seemed to go right over my head.  I guess I need a little more experience here to really "get it" (at least in an applied sense).

Another way to create a model is Classification: you use this when there is something you want to predict (a label) and that prediction is categorical; in other words, it is not a number but a category, such as right or wrong, or will drop out versus persevere through the course. Regardless of the model you create, you always need to cross-validate the model for the level you are using it at (e.g. new students? new schools? new demographics?), otherwise your model might not be giving you the information you think it's giving you.
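Here's a hedged sketch of what that cross-validation might look like: a decision-tree classifier whose folds are grouped by student, so the model is always tested on students it hasn't seen. The data and grouping variable are hypothetical; I'm just illustrating the "validate at the level you'll use it at" idea.

    # Sketch of student-level cross-validation for a classifier (made-up data).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score, GroupKFold

    X = np.random.rand(60, 4)               # 60 rows of made-up predictor variables
    y = np.random.randint(0, 2, size=60)    # made-up labels, e.g. drop vs. persevere
    students = np.repeat(np.arange(12), 5)  # 12 students, 5 rows each

    clf = DecisionTreeClassifier(max_depth=3)
    scores = cross_val_score(clf, X, y, groups=students, cv=GroupKFold(n_splits=4))
    print("accuracy per fold:", scores)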

This week, for me, was yet another reminder that I am not a maths person.  Don't get me wrong, I appreciate the elegance of mathematics, but I honestly don't care about optimizing my algorithms through maths.  I'd like to just know that these certain x-algorithms work for these y-scenarios, and I want easy ways to use them :)  Anything beyond that, for me, is overkill.  This is probably why I didn't like my undergraduate education as much as I've enjoyed my graduate education:  I wanted to build things, but my program was focusing on the nitty gritty and engine performance :)




SIDENOTES
  • Alternative episode title: Outlook hazy, try again later
  • Neural networks have not been successful methods (hmmm...no one has told this to sci-fi writers ;-) they sound cool, even though they are inconsistent in their results)

Monday, November 24, 2014

Designing in the Open (and in connected ways)

Wow, hard to believe, but we've reached the final module of Connected Courses (and boy is my brain tired!).  I found out last week that there may be a slim chance of me being able to teach Introduction to Instructional Design (INSDSG 601, a graduate course) at some point in the near future. This is something that was offered to me a couple of summers ago, but being away on vacation at the time (with questionable internet access), it didn't seem like a good idea to be teaching an online course.

I've been poking around the course shell, here and there, over the past couple of years (ever since teaching this course became a remote possibility) to get ideas about how to teach the course.  The previous instructor, who had been teaching this course for the past 10 years but recently refocused on other things, did a good job with the visual design of the course. It's easy to know what you are supposed to do each week.  Then again, from the design of the course I can see that the focus of each week seems to center around the instructor (each week has lectures in addition to chapter readings), and we saw in the cited literature in #dalmooc that this isn't pedagogically effective.  This is something I've been wanting to change.  The other thing that I don't like is the reliance on the Dick & Carey textbook. Granted, this textbook seems to be a seminal book in the field, but it is not the easiest thing to read for a novice learner (who is also figuring other things out about the ID field), and in my experience most learners read it but don't really get the fine-grained elements. This book, in my opinion, is a good reference book, but not necessarily a good instruction book†. The thing that really convinced me to scrap this course and start from scratch with a new design is the assignment structure: the assignments (50% of the final grade) all build on top of one another, culminating in a final project (the other 50% of the final grade), and they all take place in the forums.  The project-based aspect I like, and I also like the peer review aspect.  However, I don't like this double-counting of points, or the closed nature of the course (everything happening in an LMS). So, here we go with a re-design (if I know I am teaching the course)!

The learning objectives (that I can't really mess with) are as follows:
  • State the reason for using an Instructional Design Model. 
  • Identify and describe the purpose of each component of the Dick and Carey Model of Instructional Design. 
  • Develop instructional (performance) objectives that include behavior, condition and criteria.
  • Develop an assessment strategy for an instructional event. 
  • Develop assessment items that map to instructional objectives. 
  • Develop an instructional strategy that maps to learner needs and performance objectives. 
  • Plan a formative evaluation strategy to assess instructional materials. 
  • Compare the Dick & Carey ISD model with other models
Since this is an intro course, my own additional objectives for this course are to (1) set up learners to be able to find and retrieve sources from our academic library, and (2) have them begin creating their own repository (aka "toolbox") of resources that they can refer to not only as they progress through the program, but also as they become working professionals.

I have some ideas for assignments to reach these goals, however I am a bit stuck.  I want my course design to be 100% (or at least 90% if I can't reach 100%) open access materials.  Students would be free to go and find and retrieve textbooks, articles, and resources from pay-walled sources, but the materials I provide need to be 100% open access. This means I need a new textbook (or an un-textbook).  What would you recommend for resources for an introductory course in instructional design as far as open resources go?  Dick & Carey are having me do some mental gymnastics (ADDIE seems to have more free/open resources on the web than D&C).

As far as lectures go, I am thinking that lectures in the course are automatically out.  The current lectures all start with "Hello everyone, I am Dr. so-and-so". Since I am not Dr. so-and-so, this is an unnecessary cognitive barrier for learners, and in all honesty I don't want to sit down and do 13 weeks' worth of lectures. I think there are much more fun ways to spend my time, and help my learners navigate the subject, than 30-45 minute lectures each week.  If I had enough buy-in I'd love to get onto a Google Hangout and have recorded discussions with some of the great minds, and leaders, in instructional design to discuss topics of ID including mobile learning, distance education, corporate training, and so on - you know, things that will get the learners thinking about how to structure the remainder of their studies, pick areas to focus on, and what they might want to be lifelong learners in.

So, initial brainstorming post - open resources!  What do you think kind reader?

In subsequent posts (if this goes forward) I think I am going to focus on activities, other materials, and flow of the course.  If you want me to write about other subjects as well leave a comment :)


SIDENOTES:
† Other faculty of instructional design, please feel free to chime in! I want to know what you think about Dick & Carey.

Thursday, November 20, 2014

Attack of the untext - my own stumbling blocks

It's been a while since Rhizo14 ended, but the community is going strong! Facebook may not be as active (or maybe facebook is  hiding most Rhizo posts from my timeline...that could be it...anyway), but we are still chugging along with the collaborative *graphy. I can't call it an ethnography, or autoethnography because variables have changed.  Some of us decided to get together and write an article for Hybrid Pedagogy on why the Collaborative *graphy article is taking so long (a meta-article if you will) but we got stuck there too (or it seems as though we are stuck).  I think others have written about their own personal views on this on their own blogs, so I've been working out what my own stumbling blocks are with this project. I think I have a way to explain things now!

So, in previous collaborative work situations, your final product feels unified.  The main analogy that I can give is the main root of one plant, which looks like this:

In this example (going with the rhizome metaphor) you have one main path, and all the side paths are footnotes, citations and references, and end-note commentary.  The coloring is all the same because regardless of whether you have one author or many authors, the final product sounds like a unified voice.  Many ideas have come into, and gone out of, this main line (see the expansion roots in the model), but at the end of the day those side roots don't overtake the main idea or argument.

The original project started as a collaborative autoethnography (CollabAE).  This eventually became an issue because some people stepped back from the project, and thus it was no longer an autoethnography for the entire MOOC, but rather a multi-author ethnography (MAE) of the MOOC. We could use other people's anonymized data, assuming that we had their permission. At that point it wasn't introspective (auto-ethnography) but rather analytic - but this seemed to lack the rhizomatic aspect (to some extent anyway) that made the CAE unique, and there were issues of silencing voices (or inadvertently silencing voices, since some people didn't want to be authors, or weren't comfortable with their views being part of this analysis). Things got so busy with school, work, and other pursuits that I lost track of the CAE.

The CAE, at least the way we collected data, looks like the image above.  Each color represents a different author, and each author has (probably) a main point and certain supporting literature, tangents, side-points and so on that they made in their original write-up. Some authors connect to other authors' writings, and this is visualized above as roots crossing through other roots' paths.  As chaotic as this may look, it does make sense. I think the closest analogy for this would be George Veletsianos's Student Experiences of MOOCs eBook. To some extent (being a bunch of experimental academics ;-) ) we may have over-thought this CAE.  In hindsight, I would say that this should be a multi-step process.  Perhaps the first step in the process, with a deliverable, would be an eBook, similar to Veletsianos's style, of our Rhizo experiences.  Here people can write anonymously or eponymously.  Submitted chapters could go through peer review, but not the traditional academic peer review - a peer review that aims to disambiguate and seeks to grow those side-roots a bit in case eventual readers want to go down the paths.  There could be a foreword (Dave Cormier perhaps?) but the reader would be left to read, process, and make sense of each individual story.  As such this could be not a collaborative AE but a cooperative AE (CoopAE): people working concurrently, but not necessarily together, to get this done.  One big, overall document, but each chapter can stand on its own.

So since the CollabAE wasn't going far, a couple of people thought we could sit down and write an article about what's up with this process.  Why are things taking so long?   The visual for this untext looks something like this (according to me anyway).

Whereas the CollabAE has separate, but distinct, stories where others commented on, but didn't necessarily write over, the text, in our meta-analysis I am seeing original (concurrent) threads emerging (two or more people writing at the same time but not about the same message). This is represented by different-color main roots.  Then I am also seeing people expanding on those main roots (different-color sub-roots) by either adding onto the document or having side conversations.  I have to admit that this is fascinating as a brainstorming piece, and it could be considered by some as a performance piece or something alternative like #remixthediss.

That said, however, the problem is that we don't have an audience.  A document as chaotic as this one is helpful to us as authors, to help us better understand our own positions on things and to better analyze our own lived experiences in the MOOC.  However, I am not convinced that this is geared toward a reading audience. It's not necessarily something that they expect, and I am not sure how a reader will process this chaos.  For me, at the end of the day, I go back to my goal.  What is the goal of the *graphy project (decided to change its name since CollabAE and CoopAE seem to not describe it)?  What is the goal of the untext about the *graphy project? Is the goal something for the internal constituents? Something for the public?  If it's for both, what's the overlap space where the final published product would be useful (and comprehensible) to both?  Good questions.  I've got my own answers, but as a group...I don't know :)

As a side note, this seems like an interesting application of co-learning (see connected courses for more details)




Monday, November 17, 2014

DALMOOC episode 6: Armchair Analyst

Week 6 CCK11 blog connections
I was trying for a smarter title for this episode of #dalmooc thoughts, but I guess I have to go with Armchair Analyst since I ended up not spending a ton of time with either Gephi or Tableau last week. So, the reflection for week 4 is mostly on theoretical grounds; things I've been thinking about (with regard to learning analytics) and "a ha" moments from the videos posted.

I think week 3 and week 4 blend together for me.  For example, in looking at analytics the advice, or recommendation, given is that an exploration of a chunk of data should be question driven rather than data-driven.  Just because you have the data it doesn't necessarily mean that you'll get something out of it.  I agree with this in principle, and many times I think that this is true.  For instance, looking back at one of our previous weeks, we saw the analytics cycle.  We see that questions we want to ask (and hopefully answer) inform what sort of data we collect and potentially how we go about collecting it.  Just having data doesn't mean that you have the data that you need in order to answer specific questions.

On the other hand, I do think that there are perfectly good use cases where you might be given a data-dump and not have any questions.  Granted, this makes analysis a bit hard, like it did for me the last couple of weeks.  This data (anonymized CCK11 data, and sample evaluation data for Tableau) didn't really mean much to me, so it was hard to come up with questions to ask.  On another level I've been disconnected from the data, so it's not personally meaningful as a learner (the CCK11 data was anonymized), and since I didn't have a hand in organizing, offering, and running CCK11 it's not as useful for me as a designer.  However, as a researcher, I could use this data dump to get some initial hypotheses going.  Why do things look the way they look?  What sort of additional, or different, data do I need to go out and test my hypothesis?  How might I analyze this new data?  As such, a data-driven approach might not be useful for answering specific questions, however it might be a spark to catalyze subsequent inquiry into something we think might be happening, thus helping us formulate questions and go out and collect what we need to collect to do our work.

So, for example, I have just started my EdD program at Athabasca University.  I have a lot of ideas running through my head at the moment as to what I can research for a dissertation in 3 years†. As I keep reading, I keep changing and modifying my thoughts as to what do to.  I may be able to employ learning analytics as a tool in a case study research approach.  For instance, I teach the same course each spring and fall semester, an online graduate instructional design course on the design and instruction of online courses (very meta, I know). The current method of teaching is quite scaffolded, and as Dragan was describing last week (or this week?) I tend to be the central node in the first few weeks, but my aim is to just blend in as another node as the semester progresses. This process is facilitated through the use of the Conrad & Donaldson Phases of Engagement Model (PDF).

So, one semester I can use this model to teach the course, and another semester I might create an Open Online Course based on the principles of connectivism and run the course like that. I'd have to make some changes to ensure that most of the course content is Open Access, so that I would be eliminating some variables, but let's assume I've done this and I'm just testing Connectivism vs "regular" graduate teaching (whatever that is).  I can then use SNA, as one of my tools, to see what's happening in these two course iterations. I can see how people are clustering together, forming communities (or not), how they are engaging with one another, and so on. This analysis could be an element of the study of the efficacy of connectivism as employed in a graduate course‡.

On the other side of things, if I were to just stick with my traditional online course, I could still use SNA to improve my course.  One of the things that I notice is that some groups tend to form early on in the semester and stay together.  These seem to be informal groups (person X commenting on person Y's posts throughout the semester more than they do for person Z). Since the semester is 13 weeks long, a JIT dashboard of course connections would be useful both to encourage people to find study groups, and to encourage them to engage more with people that they don't normally engage with.  People who usually post late in the forums (at least in my experience) don't often get many responses to their posts, which is a real pity since they often bring some interesting thoughts to the discussion.

A good example of this is the image above, the CCK11 blogs from week 6.  I see a number of disconnected blogs.  Were these blogs never read (as measured by the click-through rate on the gRSShopper Daily)? Were they never commented on by anyone? Some of the blogs may not speak to anyone in the course, but in a course of 1131 participants (citation), assuming an 80% drop-off by week 6, that's still around 200 people active in the MOOC. Why is no one connecting with these posts, and can we do anything to spur participation?  Maybe an adopt-a-blog-post campaign?  This is also where the quantitative aspects of SNA mesh with the qualitative aspects of research. Here we could also do an analysis of what gets picked up (those connected nodes) versus what doesn't get picked up, and do an analysis of the text. This might help us see patterns that we can't see with SNA alone.

That's it for week 4.  And now I am all caught up.  Welcome to week 5!  Your thoughts on Week 4?


SIDENOTES:
† The more I think about this, the more I am leaning toward a pragmatic dissertation rather than a "blow your mind" dissertation. I see it more as an exercise that will add some knowledge to this world, but given that doctoral dissertations are rarely cited, I am less interested in going all out, and more interested in demonstrating the pragmatics of research through a topic of interest. Thoughts on this? I definitely don't want to stick around in dissertation purgatory.
‡ I'm pretty sure that someone (or quite a few) have written about this, especially with regard to CCK08, but let's just roll with this example.

Thursday, November 13, 2014

DALMOOC episode 5: Fun with Gephi

CCK11 Tweet visualization
Alright, after a few days of being sidelined with a seasonal cold, I'm back on #dalmooc.  Still catching up, but I have a feeling I am getting closer to being at the same pace as the rest of the MOOC ;-)  In any case, this is a reflection on week 3, where we started messing around with social network analysis (SNA).  This is cool because it's something that I had started doing in another MOOC on Coursera, with Gephi, so it was an opportunity to get back to messing with the tool.

So, what is SNA?  SNA is the use of network theory to analyze social networks.  Each person in this network is represented by a node (also called a vertex), and nodes can be connected to other nodes with edges. These connections can indicate a variety of things (depending on what you are examining), however for my usage in educational contexts I am thinking of edges as indicators of message flow: who sends messages to whom in a network, and also who refers to whom in a network. I think this latter one is interesting from an academic citation point of view as well.
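To ground the terminology for myself, here's a tiny sketch of building that kind of "who sends messages to whom" network in Python with NetworkX; the names are made up, and each directed edge represents a message sent.

    # Tiny directed "who messages whom" network sketch (made-up names).
    import networkx as nx

    G = nx.DiGraph()
    G.add_edges_from([
        ("Anna", "Ben"), ("Ben", "Anna"),   # a reciprocal exchange
        ("Anna", "Chris"), ("Dana", "Anna"),
        ("Chris", "Ben"),
    ])

    print("nodes (people):", list(G.nodes()))
    print("messages received:", dict(G.in_degree()))
    print("messages sent:    ", dict(G.out_degree()))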

As was pointed out in week 3, SNA can help discover patterns of interaction in online learning environments. I think that it can also help us discover patterns in physical environments, however this is harder because we don't have big brother watching the physical environments the way we can collect data about patterns of participation in virtual environments. It's much more labor intensive to keep accurate track in a physical environment.

An interesting application of SNA is its use in understanding learning design (Lockyer et al - in video). We can use SNA to analyze patterns of interaction in courses that we design and implement, thus we can (to some extent) see how our designs are affecting the learners' patterns of participation.  While this is feasible, I think that it's also hard to keep the variables pinned down so that there are no confounding factors. If you've designed an online course (that is NOT self-paced), you can see the same course design taught different ways if you put different faculty in the driver's seat.  As such, I think that in studies using SNA to analyze course design (and/or teaching methods) it's important to account for all variables.

Other interesting things from Week 3:

An instructor-centered network is one where the instructor is the central node in the network. These are recognized in the literature as leading only to lower levels of knowledge construction (see Bloom's taxonomy). Related to this type of network is having one learner take a dominant role in a course, so that the instructor is replaced (or shares the spotlight) with a dominant learner.  This is also not desirable from a pedagogical point of view. One can start with an instructor-centered environment and facilitate the change to P2P interaction. Students will need scaffolding in order to reach that P2P network.

Sense of community is a predictor of success in educational endeavors. A common way of collecting this type of data is questionnaires, and I think that in education this can happen both in-class, as part of a mid-term temperature check in the course, and in the final course evaluation.  I am wondering, however, how accurate this self-reporting is. Is this just an affective measure? Or can learners feel like they are lacking a sense of community when in reality they have it, just not as much as they feel they need?

Network brokers are nodes that connect two or more communities in a network and have a high degree of centrality.  These network brokers can see information across many different communities, and as such can have access to many different ideas flow through them. Network brokers are associated with high levels of achievement and creativity. So, in an educational setting it's good to be a network broker.
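A quick sketch of how you might spot a broker, using NetworkX and a made-up two-community network; betweenness centrality is one common way to quantify the "sits between communities" idea (my choice of measure for illustration, not necessarily the exact one from the videos).

    # Sketch: spotting a potential network broker via betweenness centrality.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a1", "a3")])   # community A
    G.add_edges_from([("b1", "b2"), ("b2", "b3"), ("b1", "b3")])   # community B
    G.add_edges_from([("a3", "broker"), ("broker", "b1")])         # the bridge

    centrality = nx.betweenness_centrality(G)
    print(max(centrality, key=centrality.get))  # -> 'broker'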

Cross-class networks are latent ties formed by attending the same events, so even though I am not connected with many people in #dalmooc (at least on Twitter I don't retweet or write to many others in the MOOC - maybe I should...), I am connected to other people through the course hashtag and by attending the same event. In a traditional learning setting this could be likened to participating in a learner community such as UMassID.com (our instructional design community) or the Athabasca University Landing network.

CCK11 Blogs, week 6
Next up, the Gephi portion of this post.  I've been messing around with Gephi data from CCK11. I was quite excited to get my hands on the CCK11 data to mess around with in Gephi until I remembered that I didn't tweet all that much in CCK11...D'oh! I was curious to see where I was  in the network of connections.  Even if I were active I don't think I'd be able to see myself there because the data appears to be anonymized (and rightfully so).

I did run some analysis of the blog connections in CCK11 using Gephi again (part of the data dump available in #dalmooc) and here was a place where I expected to see myself and see who I was connecting to, however, again, the data was anonymized. My question entering into this analysis was more about analyzing my own patterns of interaction.  I was new to MOOCs back in 2011 and CCK11 was the MOOC where I really started learning about connecting with others for academic purposes. Thus, I wanted to see what my initial and developing connected literacies pointed to. Alas, this was not to be :-)


As Dragan mentioned in one of the videos of this week, analytics should be question-driven, not data-driven. Just because you have data, it doesn't mean that you should use it, or that you will find anything meaningful in it.  This was the case with me and this data. There were some interesting visualizations, but I wanted to have a look at the people involved, who connected to whom, and look more at the qualitative connections: what posts, what ideas, what types of know-how got distributed throughout the network and by whom. It's a little hard to do this with anonymized data, so you really need to abstract and think at higher levels when working with this.  If we had data from other MOOCs, this type of anonymized data could be useful to compare patterns of participation of one MOOC to another.

Thus concludes my week 3 experiences.  What were your thoughts?

Wednesday, November 12, 2014

Questions about Co-Learning

What do you get when you mix connected courses, thinking about academia, and cold medicine?  The answer is a blog post (which I hope makes sense) :-)

As I was jotting down my initial thoughts on co-learning in the previous post I completely forgot to address some of the initial thinking questions for this module.  Here are some initial thoughts on co-learning and how I would address these questions:

What is co-learning and why employ it?
For me, co-learning is when two or more people are working together to solve a problem and learn something new.  As I wrote in my previous post, the individuals in this community do not all need to start from the same point. There can, and will, be learners that are more advanced in certain areas as compared to others.  This is perfectly fine, and it's realistic to expect this.  This can be a community of practice, it can be a broad network of learning, or a loosely connected network of learning that centers around a hashtag.  The reason to co-learn is, for me, three-fold.  First, you have a variety of learners in the classroom whose lived experiences and previous knowledge can be beneficial in the learning experience. Second, by having learners co-learn (and in my mind co-teach), they are not just learning the material but deconstructing it so that they can explain it to others. This act of deconstruction allows a closer analysis of the subject matter and, hopefully, a more critical view of it.  Finally, this is something that came to mind when engaging in #dalmooc this week: when looking at social network graphs of courses, in some cases we see the instructor as a central node, which is a quite privileged position. However, this isn't good for learning, so a course where there is a high degree of connection among many nodes, and where the instructor becomes just another node in the network, spells good things for learning (or so research says - don't ask me to cite anything, I wasn't taking detailed notes when I was viewing Dragan's presentations).


How can teachers empower students as co-learners?
This, for me, has been the most difficult thing. I teach a course that is an upper-level graduate course, which means that students come to my course late in their studies and thus their habits are formed.  Most expect weekly asynchronous discussions with the familiar 1-post, 2-reply scheme.  Many students seem to go beyond this (anecdotal evidence from teaching this course over the last 3 years), however some do not, and there are many reasons for that.  Having co-learning occur means that learners need to be more present, and to some extent their schedule isn't fully their own.  They need to see what their peers are doing so that they can bounce off those messages, riff off them, respond to them, and, when necessary, perturb them (in educational ways).  I think teachers can empower students to be co-learners by slowly stepping back and scaffolding students to take on that role.  How quickly or slowly you step back depends on the group of learners that are in the classroom.  I don't think that there is a magic formula here, however we are all beholden to the academic calendar, so I would say that it happens somewhere between weeks 1 and 6 (for a 13-week semester). Even as instructors step back, it's important to maintain a noticeable teaching presence, and a social presence.  Nothing annoys learners more (I find) than having an instructor that's not there.


How does this pedagogy differ from traditional methods of teaching and learning?  How does the instructor support a co-learning environment? What obstacles might educators encounter in this paradigm shift?  What obstacles might students encounter in this paradigm shift?
I guess here it depends on how one defines "traditional". If traditional means lecture, then this approach of co-learning is like night and day compared to lectures.  However, if we encompass Vygotsky's social constructivism, or concepts like Wenger's Communities of Practice, as "traditional", then I don't think that co-learning varies a ton from these.  I think that co-learning is a natural extension of constructivism, connectivism, and communities of practice.

I think the key thing, as I wrote above, for support is that sense of social and teacher presence, going back to the community of inquiry model. The idea here is that an instructor is just a node in this learning network.  Sure, the instructor, by virtue of being older and having had more learning experiences (and time to read and digest more), is a more knowledgeable other in this respect. However, their knowledge and voice isn't what drowns out the voices of the learners.  The instructor is there to help people navigate the network, wayfind, provide appropriate scaffolds, advise, and, when necessary, promote certain content. I don't think we can get away from content and certain "core" knowledge, so the instructor as an MKO in this area has a responsibility to share what they know with others, without being overbearing.

The trick here is having that sense of when to share something and when to let learners struggle a bit. Again research points to the fact that when learners struggle a bit they tend to learn better. I think this is also an area where the instructor might potentially face some obstacles by the learners themselves or their own superiors.  If the learners want content (or *gasp* lectures), then there might be a push from the learners to ask the instructor for nicely packaged answers to their questions. I have seen this in exit evaluations at my own department.  Since we are a department of applied linguistics we don't deal with classroom management (our students are, for the most part, teaching professionals or they go into teaching). We provide the applied linguistics theory, and a space to think about it, criticize it, deconstruct it, and utilize it. However our faculty don't provide cookie-cutter solutions to language learning problems because the answer (as usual) is "it depends".  However learners, in their previous learning experiences, are used to getting nicely packaged data bits, such as "World War I started on ____" or "The first president of the United States was _____" and so on.

This obstacle is something that also affects learners because they need to discover ways in which to not only take the knowledge that they gain in their courses now, but to be able to continuously go out, read the updated literature in the field, deconstruct it, analyze it, and put it back together in meaningful ways to solve their own problems.  The classroom environment provides a nice laboratory where co-learning can be practiced, however once students graduate they need to discover networks in which they can continue to actively co-learn.  This is a literacy that we, as educators, need to help our learners cultivate.

I think that's it for co-learning for now.  Thoughts?






Monday, November 10, 2014

Active Co-Learning

I took a small hiatus from Connected Courses in the last module because everything sort of piled on at the same time and  I had little space to breathe.  Yes, I've been dalmoocing, so I guess everything is a choice ;-).  I guess that was my jump-out week of connected courses, and now I am dipping in again. I love the language of cMOOCs ;-)  The truth is that I've felt a little fatigued with #ccourses.  I am not sure if it's the length, or the time I've been engaged with it (7 weeks if you consider the pre-course and that's before we got to Diversity, Equity, and Access), so I guess I needed a little mental break.  I don't think this is an issue unique to MOOCs because I've been feeling a mild case of senioritis in my first EdD course. Luckily I've done all of my deliverables, submitted them, and have gotten feedback, so now I am participating with my peers and engaging in the participation aspect of the course.

Anyway, these next two weeks are about Co-Learning in #ccourses and worlds have collided!  Connected courses has collided with my EdD course to produce a thinking storm (in my head). I am not going to talk a lot about the resources shared this week (oddly enough I have shared some of these with my own class in the past!), but I wanted to talk a bit about my little connected moment.

So, as we are discussing LMS mining and learning analytics in EDDE 801 one of my classmates mentions that he sees learning as something social. I don't know if he is also on #ccourses or if this is a happy coincidence, but this got me thinking.  I think that learning can be social, and many types of successful learning can be social, but learning is not exclusively social.  For instance, I can sit down with a book, or some MOOC videos, and read or view them.  If I am paying attention and the material is at my level then chances are that I will learn something.  That said, I don't think that all learning works this way.  I do think that in many cases learning is social.  The construct that comes to mind is Vygotsky's More Knowledgeable Other. 

If we are all in a group, let's say in #ccourses, and we are all tackling the topic of this module (co-learning), I would say that we don't all come to the learning environment with the same background, know-how, and knowledge.  We may have some similar experience and background, but the specifics matter.  Thus, as we are learning together I may be able to teach someone a small nugget of knowledge (or know-how) or vice versa. The teaching aspect may not be reciprocal between any two given interlocutors, but it doesn't have to be.  This is where the community comes in. If we are all members of a community and we get each other's daily posts, tweets, and delicious links (that relate to this course), then we are partly learning from others' contributions, even if they don't directly learn something from ours.  Thus, the act of co-learning is also an act of teaching, at least as defined by Wiley (in the TEDx video this week) when he defines education as a relationship of sharing. A successful educator, according to Wiley, is someone who shares fully with their students.  In a co-learning environment we are all learners and we are all educators.

 So, here is a question that popped up while I was pondering this: what is the difference between an "aha" moment when you are by yourself (reading a book, or watching a MOOC video) and "learning" in a social environment?


SIDENOTES:
  • Even though I sat out the module on Diversity, Equity, and Access, I think that the videos on Feminism, Technology and Race; and wikistorming, are interesting to watch and think about. If you haven't watched them, I encourage you to do so :)
  • This week Alec Couros asked "what endures" when thinking about technologies.  The answer was that technologies come and go, but it is the social connections that endure (thus, I would paraphrase this as reach out and talk to someone in your social network, don't just consume).  This is quite true.  Remind me one of these days to expand upon this and Elliniko Kafeneio ;-)

Sunday, November 9, 2014

Teachers on Wheels

An interesting documentary shared by one of my EdD classmates.


Wednesday, November 5, 2014

MOOCs in a nutshell (assignment for class)

One of the things that has been keeping me busy this semester has been my inaugural semester as a doctoral student at Athabasca University's Center for Distance Education.  The semester isn't over yet, but I am slowly working at hammering out some assignments for the course.  I've tried to be pro-active and get the foundational reading done early in the semester so I can focus on reading some additional articles on MOOCs that have been on my to-read pile for a while.  I ended up getting all the readings done (the ones assigned by the faculty anyway), but I've been side-tracked reading interesting things that my classmates post :-)

In any case, for the third assignment for my inaugural class I looked at MOOCs (no surprises there), and I discussed very briefly the historical overview of MOOCs (keep an eye out in December for the special issue of the CIEE journal, some good articles coming out on the topic of MOOCs), I discussed a bit some work I am doing with a colleague in Greece about issues with MOOCs, and I critiqued a project, currently in the pilot phase, which aims to offer Greek MOOCs (see OpenCourses.gr).

The paper and the presentation are on SlideShare. I thought that I uploaded the document on Scribd, but I guess I goofed.  Feedback is welcomed :)



 




Tuesday, November 4, 2014

DALMOOC, Episode 4: policy, planning, deployment and fun with analytics

Continuing with my exploration of DALMOOC, we've reached the end of Week 2 (only a few days late ;-)  ).  I've been playing with Tableau, which I can describe as Pivot Tables on steroids.  I briefly explored the idea of getting some IPEDS data to mess around with, however that proved to be a bit more challenging than I had anticipated. So, I ended up using the sample data of course evaluations to figure out how to work Tableau.  The following are some interesting visualizations of the data that I had:




The one thing I realized, as I was playing around with the data, is that it's really important to know what your data means.  I thought I knew what the categories meant, because I thought that institutions of higher education used similar lingo.  The more I played with the data, the more I realized that some things weren't what I was expecting them to be.  Thus, in order to know what is being described and portrayed through the visualizations, one needs to know the underlying data categories really well.  The other thing that came to mind is that you can't just produce a visualization and call it a day.  A picture may be worth a thousand words, however succinct textual explanations and analyses of the visuals will go a long way toward cluing people into what's happening.

Another aspect of week 2 revolved around policy, planning, and deployment of analytics.  This actually came up in my EDDE 801 course as well, as we are discussing an article† around learning analytics. The issue that has come up is around the ethics of analytics.  A classmate of ours has posted the OU's policy on the Ethical Use of Student Data for Learning Analytics. I have not read this yet (it's short, but it was posted to the course forums as I was writing this post) but it's certainly on my list of things to read.  This trepidation around learning analytics on the part of some learners may be due to a perceived big-brother aspect of the institution.  Who are these people looking at my digital footprints, and for what reasons? I think that if any institution is interested in setting up a learning analytics initiative, it would be important to establish protocols at the institutional level for what types of data will be collected, from which sources, for what purposes, and (quite important) who's got access to this data. These policies should keep an eye on laws, such as FERPA in the USA, to make sure that data collection and data utilization policies are in compliance with those laws. I know that institutional research collects data about various aspects of the university, so coming up with appropriate policies might not be a major issue.

As far as planning and deployment go, I think that the crucial thing will be front-end tools (like Tableau, for instance) as well as training for those who use these tools.  Just going in and creating nice graphics isn't enough.  There will need to be a firm understanding of what the underlying data is, how it's collected, and what limitations might exist with this data. I've met a number of people in my professional career who seem to have stats in mind without really acknowledging what the stats mean. "We've had fewer enrollments in x-program this year." OK, so what? I might answer. Enrollments are just one metric; what else is happening that might influence those enrollments? What role do departments, the physical plant, faculty and other students play in attracting and retaining students to any given program? We can't just look at the raw numbers for student enrollments and think that we are coming to meaningful conclusions.  The same is true about our learning analytics data.







SIDENOTES
  • The assignment bank is an interesting concept, something that I came across in DS106 a few years ago.  The only issue I have with the assignment bank is that I stumbled upon it by accident (here is the link if anyone else is interested).  I've submitted one of my blogs for one of the assignments.  I only realized today, though, that I was addressing the wrong assignment - #facepalm :)
  • ProSolo is interesting, however there is one thing that I stumbled upon last week, that I didn't bookmark, and now I can't find it again: the calendar of published materials.  There is some sort of "daily-like" notification (see the CCK11 Daily as an example) that is part of ProSolo (or it seemed like it).  Quite useful if you want to check up on what's occurred in DALMOOC in the previous 24 hours.  Where the heck did I find it though?
  • I wonder why weeks 3 through 8 showed up all at once when previous weeks were done one at a  time...
  • † Macfadyen, L. P., & Dawson, S. (2010). Mining LMS data to develop an "early warning system" for educators: A proof of concept. Computers & Education, 54(2), 588-599.

Monday, November 3, 2014

DALMOOC episode 3: Screenchomping the analytics cycle description

I've had this app on my iPad, by TechSmith, for the past few years, but I've never really used it.  The app is called ScreenChomp and it gives you a digital whiteboard that you can use to write on and narrate over.  I thought that a plain-text description of the learning analytics cycle (still catching up on week 2 of DALMOOC) would probably be confusing, and using PowerPoint and Adobe Presenter would be too static.  So, I applied the learning analytics cycle to a course I teach, and I decided to hand-write everything. Heck, I attempted to draw as well, but my lack of artistic talent shows ;-)

Direct link to the screenchomp (if the embed doesn't work):  http://www.screenchomp.com/t/qE1lplho



DALMOOC Week 2, Description of the Data Analytics Cycle from Apostolos K. on Vimeo.


How does this cycle apply to your courses?

Saturday, November 1, 2014

DALMOOC, episode 2: Of tools and definitions

My Twitter Analytics, 10/2014
Another day, another #dalmooc post :)  Don't worry, I won't spam my blog with DALMOOC posts (even if you want me to), I don't have that much time.  I think over the next few days I'll be posting more than usual in order to catch up a bit.   This post reflects a bit on week 1's (last week's) course content and prodding questions. I am still exploring ProSolo, so no news there (except that I was surprised that my twitter feed comes into ProSolo.  I hope others don't mind seeing non-DALMOOC posts on my ProSolo profile).

Week 1 seemed to be all about on-boarding, of tools and definitions.  So what is learning analytics?  According to the SoLAR definition, "Learning Analytics is the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs." It's a nice, succinct definition - which I had honestly forgotten about since I was in LAK11.

Analytics has interesting potential for assisting in learning and teaching. Data collected from social interactions in the various learning spaces (the LMS comes to mind as the main one, but that's not necessarily the only one, external-to-the-LMS and internal-to-the-LMS spaces can also count as learning spaces in their own right), learning content (learner-content interaction for instance), and the data on the effects of various interventions and course changes can potentially yield useful insights.

These insights might be about the learning that might be happening, participants' patterns of interaction, their feelings and attitudes toward people and non-animate resources, how people learn, and what they might be doing next based on what they've already done (predictive analytics?).  My main issue with learning analytics is that those with whom I've interacted about analytics seem to feel like this is a magic bullet, that analytics will be some sort of panacea that will help us teach better and help our learners learn.  A similar thing to what we've seen with the MOOC hype, mind you.

The truth is that certain things cannot be quantified yet, and things that are quantified can't always tell us what's going on.  As an example, I had a conversation with a colleague recently who came to me because of my background in applied linguistics and educational technology. The query was about text response length (presumably in discussion forums?) and student achievement; were there any studies around this topic?  The answer (at least according to my knowledge of the field) was no, there aren't studies like that (that I know of).  That said, even if someone wanted to do a study around this, I think that the study is flawed if you only look at textual comments in a discussion forum from a quantitative perspective.  Length doesn't really tell you much about the quality and relevance of the posted text; other dimensions, qualitative ones, need to be examined in order to come to better conclusions (good ol' Grice comes to mind as another possible analysis dimension). Don't get me wrong, I think there probably is some positive correlation between achievement and text length within a goldilocks zone for response length, but response length isn't the end-all-be-all determinant of student achievement. If the only rubric for me getting an "A" is an essay of 4000 words, I'll just give you Lorem Ipsum text :-)

Another thing pointed out in week 1 was that there are ethical implications and privacy issues around the use of analytics.  I think that this is a much larger topic.  If it comes up in a future week I'll write about it (or if you really want me to write about my thoughts on this earlier, just leave a note).

So, those were the definitions. Now for some tools! There were a number of tools discussed such as
NodeXL (free, Social Network Analysis tool), Pentaho (30 day trial, integrated suite), IBM Analytics suite (integrated suite, definitely not free), SAS (integrated suite - also not free), R Language (free), Weka (free, java based).  R is something that we use in Corpus Linguistics analysis.  I haven't delved too much into that field, but I am considering it since there are analytics related corpus projects that might be of interest.  One of my colleagues might be teaching this course in the spring semester so I'll see if I can sit in (if I have time.  Not sure how much time EDDE 802 will take). SNAPP (free) was another tool mentioned, and this is something I remember from LAK11.  I've tried to get this installed on our Blackboard server over the last few years, but I've been unsuccessful at convincing the powers that be.  I'd love to run SNAPP in my courses to see how connections are formed and maintained amongst the learners in my classes.  This is one of the issues when you don't run your own servers, you're waiting for someone else to approve the installation of a Bb extension.  Oh well... Maybe in 2015. 

Anyway, those are all the tools that we won't be using directly in DALMOOC.  These are the tools that we will be using: Tableau (paid, but free for us until January 2015), Gephi (free), RapidMiner (has a free version) and LightSide (free).  Gephi I already downloaded and installed because I was auditing the Coursera Social Network Analysis course that they are currently running.  I'll be going back to those videos in January (or next summer, it all depends on EDDE 802) and messing around more with it then. I know we'll be using it here, but I am not sure to what extent.  Tableau I already downloaded and installed last week on my work machine.  I'll be messing around with the week 2 data when I get back in the office on Monday.  This looks pretty interesting!

Finally (for this post anyway), DALMOOC has a bazaar assignment each week. Here is the description:
In this collaborative activity, we will reflect on what you have learned about the field of learning analytics. We would like you to do this portion of the assignment online with a partner student we will assign to you. You will use the Bazaar Collaborative Chat tool. To access the chat tool, click on the link below. You will log in using your EdX ID. When you log in, you will enter a lobby program that will assign you to a partner. If it turns out that a partner student is not available, after 5 minutes it will suggest that you try again later.  

For experimentation purposes I know I should give this a try, but I probably won't do these bazaar assignments. I have an affinity for asynchronous learning (as Maha Bali put it in one of her posts) :)



SIDENOTES:
1) Great to put a face to a name.  Realized that Carolyn Rose and Matt Crosslin are part of this MOOC. Carolyn is writing a piece for the upcoming special issue of CIEE (this used to be the Great Big MOOC Book), and Matt is co-authoring a piece for the special issue of the CIEE journal for summer 2015 on the Instructional Design of MOOCs.

2) Carolyn mentioned in one of the videos that statistics are pretty cool.  I've been lukewarm on them since I was a college undergraduate, mostly because I mess up the math and my numbers don't make sense ;-)