Sunday, December 21, 2014

DALMOOC Episode 10: Is that binary for 2? We've reached recursion!

Hey!  We've made it! It's the final blog post about #dalmooc... well... the final blog post with regard to the paced course on edX, anyway :)  Since we're now in vacation territory, I've decided to combine Weeks 9 and 10 of DALMOOC into one post.   These last two weeks have been a little light on the DALMOOC side, at least for me.  Work, and other work-related pursuits, made my experimentation with LightSIDE a little light (no pun intended).  I did go through the videos for these two weeks and picked out some interesting things to keep in mind as I move through this field.

First, the challenges with this sort of endeavor. Top of the list is data preparation: you can't just dump data from a database into programs like LightSIDE. Data needs some massaging before we can do anything with it.  I think this was covered in a previous week, but it bears repeating since there is no magic involved, just hard work!

The other challenge mentioned this week was labelling the data. Sometimes you get the labels from the provider of the data, as was the case with the poll example used in one of the videos for week 9. The rule of thumb, at least according to dalmooc, is that you need at least 1,000 instances of labelled data to do some machine learning; more or less labelled data would be needed depending on individual circumstances.  For those of you keeping track at home, Carolyn recommends the following breakdown (a rough sketch in code follows the list):
  • 200 pieces of labelled data for development
  • 700 pieces of labelled data for cross-validation
  • 100 pieces of labelled data for final testing
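Out of curiosity, here's what that 200/700/100 breakdown might look like in code. This is just my own sketch (scikit-learn's train_test_split, with invented names), not anything prescribed by the course:

```python
# A rough sketch of the suggested 200/700/100 split, assuming you
# have ~1000 labelled (instance, label) pairs. Names are my own.
from sklearn.model_selection import train_test_split

def split_labelled_data(instances, labels):
    # Hold out 100 instances for final testing.
    rest_x, test_x, rest_y, test_y = train_test_split(
        instances, labels, test_size=100, random_state=42)
    # Of the remaining 900: 200 for development, 700 for cross-validation.
    dev_x, cv_x, dev_y, cv_y = train_test_split(
        rest_x, rest_y, train_size=200, random_state=42)
    return (dev_x, dev_y), (cv_x, cv_y), (test_x, test_y)
```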

Another thing to keep in mind, and I think I've mentioned this in previous weeks, is that machine learning won't do the analysis for you (silly human ;-) ).  You need to be prepared to do some work, some interpretation, and of course, to have a sense of what your data is. If you don't know what your data is, and if you don't have a frame through which you are viewing it, you are not going to get results that are useful. I guess the old saying, garbage in, garbage out, is something we need to be reminded of.

So, DALMOOC is over; where do we go from here?  Well, my curiosity is a bit more piqued. I've been thinking about what to do a dissertation on (I'm entering my second semester as a doctoral student) and I have all next summer to do some work on the literature review.  I am still thinking about something MOOC related, but some of my initial topics already seem to be topics of current inquiry and recent publications, so I am not sure where my niche will be.  The other fly in the ointment is that the course I regularly teach seems to have fewer students in it, so design-based research on that course (that course as a MOOC, I should say) may not be an option in a couple of years. Thus, the need for Plan B: I am actually thinking of going back to my roots (in a sense) and looking at interactions in a MOOC environment.  The MRT and I have written a little about this, looking at tweets and discussion forums, so why not do something a little more encompassing?  I guess I'll wait until the end of EDDE 802 to start settling on a topic.

What will you use your newly found DALMOOC skills on?





Monday, December 15, 2014

First semester done!

Hurray!

The first semester of my doctoral studies is done!  Well, it was done last week, but as I wrote in the previous post (on #dalmooc) it's been one crazy semester.  I had hoped that I would blog once a week on the topic of EDDE 801, sharing some interesting nuggets of information each week, but between MOOCs like #ccourses, work, and regular EDDE 801 work, no such luck.  I felt I was putting enough time into EDDE 801, and everything I wrote went into the closed system that is Moodle rather than onto the blog.  So, here's one blog post to try to recapture some thoughts I had while the semester was in progress.

Early on, one of the things I really dreaded was the synchronous sessions, every Tuesday at 8PM (my time).  My previous experience with synchronous sessions was not a good one, which colored my expectations for this course. Most of that experience, from my Masters programs, was one-way communication webinars (yaaaawn) or mandatory synchronous sessions for student presentations. The problem there was that no one provided any scaffolding for my fellow students on what constituted good online presentation skills, so students would often drone on and on (not really checking in with the audience), and they would often use up their allotted time, and then some. I don't blame my former classmates, just the system that got them into that situation.  So, here I was, getting ready for a snooze-fest.

I am glad to say that it wasn't like this. Most seminars were actual discussions, and Pat did prod and poke us to get the discussion going. Most of the guest speakers were lively and engaged with the audience in some fashion, and my classmates were good presenters.  If I yawned, it was due to the time of day rather than boredom. So, the final verdict is that the synchronous sessions were done well, compared to my previous experience. Am I a synchronous conferencing convert? Not yet.  Like Maha Bali, I still have an affinity for asynchronous.

The one thing that gave me pause, with EDDE 801, was the discussion-board assignments.  In my previous experience, with no required weekly synchronous sessions, the bread-and-butter of a course was the weekly discussion forums (sometimes 1, sometimes 2, rarely 3).  In 801 we had to write two literature reviews and facilitate two discussions based on them.  We have 12 in our cohort, so that made for 24 discussions.  Initially I didn't think this would be "enough work" (yeah... I don't know what I was thinking), but as the semester progressed and people participated in the forums vigorously, near the end I got into a bit of a cognitive overload situation where I couldn't really read any more (sorry to the last 4 literature reviews posted, I couldn't focus on them as I did on the early ones).

Finally, one thing I wanted to do this semester, but really didn't get a chance to, was to make a sizable dent in the literature I've collected for a potential dissertation topic on MOOCs.  I did read some articles in order to do my presentation for the course, but it didn't end up being as big a dent as I had hoped.  I was initially thinking that I would do some reading over the break, but with the semester starting January 15, I'm thinking rest and relaxation now, and dissertation reading this summer.

All things considered, not a bad semester! 1/8 done with my doctorate lol ;-)




Friday, December 12, 2014

DALMOOC Episode 9: the one before 10

Hello to fellow #dalmooc participants, and to those who are interested in my own explorations of #dalmooc and learning analytics in general.  It's been a crazy week at work with many things coming due all at the same time: finishing advising, keeping an eye on student course registrations and new student matriculations, making sure that our December graduates are ready to take the comprehensive exam... and many, many more things. This past week I really needed a clone of myself to keep up ;-)  As such, I am a week behind on dalmooc (so for those keeping score at home, these are my musings for Week 7).

In week 7 we are tackling text mining, a combination of my two previous disciplines: computer science and linguistics (yay!). This module brought back some fond memories of the corpus linguistics exploration I had done a while back, while I was doing my MA in applied linguistics. This is something I want to get back to at some point - perhaps when I am done with my doctorate and I have some free time ;-).  In any case, to start off, I'd like to quote Carolyn Rose when she says that machine learning isn't magic ;-) Machine learning won't do the job for you, but it can be used as a tool to identify meaningful patterns. You need to think about the features you are pulling from the data before you start the machine learning process, otherwise you end up with output that doesn't make a ton of sense; the old computer science adage "garbage in, garbage out" is still quite true in this case.
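To make the "features before machine learning" point a bit more concrete, here's a tiny sketch of pulling unigram features out of raw text with scikit-learn's CountVectorizer. This is just a Python stand-in for what LightSIDE's feature extraction step does, with made-up example texts:

```python
# Extracting simple unigram features from raw text before any
# machine learning happens. Example texts are invented.
from sklearn.feature_extraction.text import CountVectorizer

posts = [
    "I think the answer is 42 because of the previous step",
    "lol no idea, can someone help?",
]
vectorizer = CountVectorizer(lowercase=True, ngram_range=(1, 1))
features = vectorizer.fit_transform(posts)   # sparse document-term matrix
print(vectorizer.get_feature_names_out())    # the features we extracted
```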

In examining some features of language, we were introduced to a study of low-level features of conversation in tutorial dialogue: turn length, conversation length, number of student questions, student initiative, student-to-tutor word ratios. The final analysis was that this is not where the action is. What needs to be examined in discourse situations in learning are the cognitive factors and underlying cognitive processes that are happening while we are learning. This reminds me of a situation this year where a colleague asked me if I knew of research indicating whether response length in an online discussion forum could be used, in a learning analytics environment, to predict learner success.  I sort of looked at my colleague as if they had two heads because, even though I didn't have the vocabulary to explain that these were low-level features, I was already thinking that they weren't as useful as looking at other factors.  So, to bring this back to dalmooc: shallow approaches to the analysis of discussion are limited in their ability to be generalized. What we should be looking at are theory-driven approaches, which have been demonstrated to be more effective at generalizing.

In the theoretical framework we look at a few things (borrowing from sociolinguistics, of course):  (1) power and social distance explain social processes in interactions; (2) social processes are reflected through patterns in language variation; (3) so our hope is that models that embody these structures will be able to predict social processes from interaction data.

One of the things mentioned this week was Transactivity (Berkowitz & Gibbs, 1983), which is a contribution that builds on an idea expressed earlier in a conversation, using a reasoning statement.  This work is based on the ideas of Piaget (1963) and cognitive conflict.  Kruger and Tomasello (1986) added power balance to the equation of Transactivity.  In 1993, Azmitia & Montgomery looked at friendship, Transactivity, and learning: in friend pairs there is higher transactivity and higher learning (not surprising, since the power level is about the same between the two people).



Finally, this week I messed around with LightSIDE, without reading the manual ;-).  According to Carolyn the manual is a must-read (D'oh ;-)  I hate reading manuals).  I did go through the mechanical steps that were provided on edX to get familiar with LightSIDE, but I was left with a "so what" feeling after.  The screenshots are from the work that I did.  I fed LightSIDE some data, pulled some virtual levers, pushed some virtual buttons, turned some virtual knobs, and I got some numbers back.  I think this falls in line with the simple text mining process of having raw data, then extracting some features, then modeling, and finally classifying (a rough sketch of that sequence below). Perhaps this is much more exciting for friends of mine who are more stats and math oriented, but I didn't get the satisfaction I was expecting - I was more satisfied with the previous tools we used. Maybe next week there will be more fun to be had with LightSIDE :-)
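For what it's worth, that raw data → features → model → classification sequence maps pretty directly onto code. Here's a hedged sketch in scikit-learn (not LightSIDE's actual workflow, and the texts and labels are invented):

```python
# The simple text mining process end to end: raw data -> extract
# features -> train a model -> classify new text. All data invented.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great explanation, thanks!", "this makes no sense",
         "totally agree with the above", "I am so lost right now"]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)                      # learn from labelled data
print(model.predict(["thanks, that helps"]))  # classify something new
```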

So, how did you fare with Week 7?  Any big take-aways?






Friday, November 28, 2014

DALMOOC episode 8: Bureau of pre-learning

I see a lot of WTF behavior from learners. This is bad... or is it?†
Oh hey!  It's week 6 in DALMOOC and I am actually "on time" this time!  Even if I weren't, it's perfectly OK, since there are cohorts starting all throughout the duration of the MOOC (or so I suspect), so whoever is reading this: Hello!

This week the topic of DALMOOC is behavior detectors (types of prediction models).  Behavior detection is a type of model (or types of models) that we can infer from the data collected in the system, or set of systems, that we discussed in previous weeks (like the LMS, for example).  Some of these are off-task behaviors, such as playing Candy Crush during class or doodling when you're supposed to be solving for x. Other behaviors are gaming the system, disengaged behaviors, careless errors, and WTF behaviors (without thinking fastidiously?  or... work time fun? you decide ;-) ). WTF behavior is working in the system but not on the task specified.  As I was listening to the videos this week and thinking about gaming behaviors‡, it occurred to me that not all gaming behavior is bad.  If I am stuck in a system, I'm more apt to game it so that I can move on, and try to salvage any learning, rather than just stay stuck and say eff-it-all.  I wonder what others think about this.

Some problems related to behavior detectors are sensor-free affect detection of boredom, fun, frustration, or delight.  Even with sensors, I'd say that I'd have problems identifying delight. Maybe my brain looks a certain way in an MRI machine when I get a sense of delight, but as a human this is a concept I would find hard to pin down.

Anyway - another thing discussed this week is ground truth. The idea is that all data is going to be noisy, so there won't be one "truth", but there is "ground truth". I guess the idea here is that there is no one answer to life, the universe, and everything, so we look at our data to determine an approximation of what might be going on.   Where do you get data for this? Self-reports from learners, field observations§, text analysis, and video coding. The thing I was considering (and I think this was mentioned) is that self-reporting isn't that great for these behaviors; after all, most of us don't want to admit that we are gaming the system or doing something to subvert it. Some people might just do it because they don't care, or because they think your exercise is stupid and they will let you know, but most, I think, would care what others think, and might have some reverence for the instructor, thus preventing them from accurately self-reporting.

One of the things that made me laugh a bit was an example of a text log file where the system told the learner that he was wrong, but in a cryptic way. This reminds me of my early MS-DOS days, when I was visiting relatives who had Windows 3.1 (for Workgroups!) and I was dumped from the GUI to a full-screen DOS environment.  I didn't know any commands, so I tried natural language commands... and I got the dreaded "error, retry, abort", and typing any of those three words (or combinations of them) did not work. Frustration! I thought I had broken the computer and no one was home!

Another thing that came to mind with these data collection methods is the golden triangle (time, quality, cost).  Not all data collection methods are equal. For instance, video coding is the slowest, but it is replicable and precise.
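Since those human-applied labels (from field observations or video coding) end up being treated as ground truth, it seems worth checking how well two coders actually agree before trusting the labels. A minimal sketch using Cohen's kappa from scikit-learn (coder labels invented):

```python
# Checking inter-rater agreement between two human coders before
# treating their behavior labels as ground truth. Labels invented.
from sklearn.metrics import cohen_kappa_score

coder_a = ["on-task", "gaming", "on-task", "off-task", "gaming"]
coder_b = ["on-task", "gaming", "off-task", "off-task", "gaming"]
print(cohen_kappa_score(coder_a, coder_b))  # 1.0 would be perfect agreement
```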

Moving along, we talked a bit about feature engineering (a.k.a. rational modeling, a.k.a. cognitive modeling), which is the art of creating predictor variables. This is an art because it involves lore more than well-defined principles, and it is an iterative process.  Personally I was ready to write this off, but the art and iteration aspect is something that appeals to me rather than just cold, hard black boxes. The idea is that you go for quantity at first, not quality, and then you iterate forward, further refining your variables.  Just like in other projects and research, you can build off the ideas of others; there are many papers out there on what has worked and what hasn't (seems like advice I was also given at my EDDE 801 seminar this past summer).  Software you can use for this process includes Excel (pivot tables, for example) and OpenRefine (previously Google Refine). A good thing to remember is that feature engineering can over-fit, which goes back to last week, where we said that everything over-fits to some extent.
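As a toy illustration of "creating predictor variables," here's the kind of quick-and-dirty derived feature you might build from an interaction log. The column names and the log itself are invented for the example (in practice you'd work from whatever your system actually records):

```python
# Toy feature engineering: deriving candidate predictor variables
# from a hypothetical interaction log. Columns are invented.
import pandas as pd

log = pd.DataFrame({
    "student": ["s1", "s1", "s2", "s2", "s2"],
    "action": ["attempt", "hint", "attempt", "attempt", "hint"],
    "seconds_since_last": [5, 2, 40, 35, 1],
})

features = log.groupby("student").agg(
    n_actions=("action", "size"),                          # activity volume
    hint_rate=("action", lambda a: (a == "hint").mean()),  # help-seeking
    mean_pause=("seconds_since_last", "mean"),             # pacing
)
print(features)  # one row of candidate predictors per student
```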

Finally, we have diagnostic metrics. My eyes started glazing over a bit with this.  I think part of it was that I didn't have my own examples to work with, so it was all a bit abstract (which is fine). I am looking forward to the spring 2015 Big Data in Education MOOC to go a bit more in depth with this.  So what are the diagnostic metrics mentioned? (I might need a more detailed cheat-sheet for these; a small code sketch follows the list.)
  • ROC -- Receiver Operating Characteristic curve; good for a two-value prediction (on/off, true/false, etc.)
  • A' -- related to ROC: the probability that, if the model is given one example from each of two categories, it can identify which came from which.  A' is more difficult to compute than kappa and only works with two categories, but it is easy to interpret statistically.
  • Precision -- the probability that a data point classified as true is really true
  • Recall -- the probability that a data point that is actually true gets classified as true
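Here's a quick hedged sketch of computing a few of these with scikit-learn (the ground truth, predictions, and confidence scores are all invented):

```python
# Precision, recall, and area under the ROC curve (closely related
# to A') for a two-value prediction. All numbers invented.
from sklearn.metrics import precision_score, recall_score, roc_auc_score

actual = [1, 0, 1, 1, 0, 0, 1, 0]                  # ground truth
predicted = [1, 0, 0, 1, 0, 1, 1, 0]               # hard labels
scores = [0.9, 0.2, 0.4, 0.8, 0.1, 0.6, 0.7, 0.3]  # model confidences

print(precision_score(actual, predicted))  # P(really true | classified true)
print(recall_score(actual, predicted))     # P(classified true | really true)
print(roc_auc_score(actual, scores))       # area under the ROC curve
```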

We also covered regressors, such as:
  • Linear correlation -- if X's values change, do Y's values change as well?  Correlation is vulnerable to outliers.
  • R-squared -- correlation squared; also a measure of what percentage of the variance in the dependent measure is explained by a model.  Whether you use it or correlation depends on which community has adopted it.
  • Mean Absolute Error (MAE) -- tells you the average amount by which the predictions deviate from the actual values
  • Root Mean Squared Error (RMSE) -- does the same but penalizes large deviations more heavily (quick sketch of both below)
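And the promised sketch of MAE versus RMSE, again with invented numbers; note how the one large miss inflates RMSE much more than MAE:

```python
# MAE vs. RMSE on invented predictions. The single big miss
# (predicting 50 when the actual value is 80) hits RMSE harder.
from sklearn.metrics import mean_absolute_error, mean_squared_error

actual = [70, 85, 90, 60, 80]
predicted = [72, 83, 88, 62, 50]

mae = mean_absolute_error(actual, predicted)
rmse = mean_squared_error(actual, predicted) ** 0.5
print(mae, rmse)  # RMSE > MAE because of the large deviation
```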

Finally, there are different types of validity (this brings me back to my days in my first research methods course):
  • Construct validity -- Does your model measure what it says it measures?
  • Predictive validity -- Does your model predict the future as well as the present?
  • Substantive validity -- Do the results matter? (or as Pat Fahy would say "so what?" )
  • Content validity -- Does the test cover the full domain it's meant to cover?
  • Conclusion validity -- Are the conclusions justified based on the results?

So, that was week 6 in a nutshell.  What stood out for you all?


SIDENOTE:
† Image from movie Minority Report (department of precrime)
‡ Granted, I need to go and read more articles on gaming behaviors to know all the details, this was just an initial reaction.
§ There is a free Android app for field observations that they've developed

Tuesday, November 25, 2014

DALMOOC episode 7: Look into your crystal ball

Whooooa! What is all this?


Alright, we're in week 6 of DALMOOC, but as usual I am posting a week behind.  In previous weeks I was having a ton of fun playing with Gephi and Tableau. Even though the source material wasn't that meaningful to me, I was having fun exploring the potential of these tools for analytics. This week we got our hands on RapidMiner, a free(mium) piece of software that provides an environment for machine learning, data mining, and predictive analysis.

Sounds pretty cool, doesn't it?  I do have to say that the drag-and-drop aspect of the application makes it ridiculously easy to quickly put together some blocks to analyze a chunk of data. The caveat is that you need to know what the heck you are doing (and obviously I didn't ;-) ).  I was having loads of issues navigating the application; I somehow managed to lose some windows that I needed in order to input information, and I couldn't find the functions that I needed...  Luckily, one of my colleagues, who is actually working on machine learning, was visiting and was able to give me a quick primer on RapidMiner - crisis averted.  I did end up attempting the assignment on my own, but I wasn't getting the right answer.  With other things to do, I gave up on the optional assignment ;-)

With that software experience this past week, what is the use of prediction modeling in education? Well (if you can get your software working ;-) ), the goal is to develop (and presumably use) a model which can infer something (a predicted variable) from some combination of other aspects of the data that you have on hand (a.k.a. predictor variables).  Sometimes this is used to predict the future, and sometimes it is used to make inferences about the here and now. An example of this might be using a learner's previous grades in courses as predictors of future success.  To some extent this is what the SATs and GREs are (and I've got my own issues with these types of tests - perhaps something for another post).  The key thing here is that there are many more variables in predicting future success than just past grades, so take that one with a grain of salt.

Something that goes along with modeling is regression: you use this when the thing you want to predict is numerical in nature. Examples might be the number of student help requests, how long it takes to answer questions, how much of an article was read by a learner, prediction of test scores, etc. A regressor predicts one number from other numbers.  Training a model means using data where you already know the answers to teach the algorithm.

There are different types of regression.  A linear regression is flexible (surprisingly so, according to the video), and it's a speedster.  It's often more accurate than more complex models (especially once you cross-validate), and it's feasible to understand your model (with some caveats).
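To make that a bit more concrete, here's a minimal sketch of fitting a linear regression to predict a numeric outcome (everything here, including the predictor names, is invented for illustration):

```python
# Minimal linear regression sketch: predicting a test score from a
# couple of invented predictor variables.
from sklearn.linear_model import LinearRegression

X = [[5, 2], [3, 8], [8, 1], [2, 9], [6, 4]]  # e.g. [hints used, problems solved]
y = [60.0, 85.0, 55.0, 90.0, 70.0]            # e.g. test scores

regressor = LinearRegression().fit(X, y)
print(regressor.coef_, regressor.intercept_)  # the fitted line
print(regressor.predict([[4, 5]]))            # prediction for a new student
```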

In watching the videos last week, some of the example regression algorithms I got conceptually, from a logic perspective, but some just seemed to go right over my head.  I guess I need a little more experience here to really "get it" (at least in an applied sense).

Another way to create a model is classification: you use this when the thing you want to predict (the label) is categorical; in other words, it is not a number but a category, such as right vs. wrong, or will drop vs. will persevere through the course. Regardless of the model you create, you always need to cross-validate it at the level you are going to use it at (e.g., new students? new schools? new demographics?), otherwise your model might not be giving you the information you think it's giving you.
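That "validate at the level you'll use it at" point lends itself to a sketch too: if the model will be applied to new students, each cross-validation fold should hold out whole students, not random rows. Something like this with scikit-learn's GroupKFold (data and groups invented):

```python
# Cross-validating at the student level: GroupKFold keeps all rows
# from a given student in the same fold, so every test fold is made
# of entirely unseen students. Data is invented.
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.linear_model import LogisticRegression

X = [[1, 0], [2, 1], [0, 3], [1, 4], [3, 0], [2, 2]]
y = [1, 1, 0, 0, 1, 0]                    # e.g. persevere vs. drop
students = ["s1", "s1", "s2", "s2", "s3", "s3"]

scores = cross_val_score(LogisticRegression(), X, y,
                         groups=students, cv=GroupKFold(n_splits=3))
print(scores)  # one score per held-out student
```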

This week, for me, was yet another reminder that I am not a maths person.  Don't get me wrong, I appreciate the elegance of mathematics, but I honestly don't care about optimizing my algorithms through maths.  I'd like to just know that these certain x-algorithms work for these y-scenarios, and I want easy ways to use them :)  Anything beyond that, for me, is overkill.  This is probably why I didn't like my undergraduate education as much as I've enjoyed my graduate education:  I wanted to build things, but my program was focusing on the nitty gritty and engine performance :)




SIDENOTES
  • Alternative episode title: Outlook hazy, try again later
  • Neural networks have not been successful methods (hmmm... no one has told this to sci-fi writers ;-) they sound cool, even though they are inconsistent in their results)