Tuesday, December 30, 2014

Connecting the dots...thoughts about working in academia

[warning: lengthier post than usual] Before I left for my December mini vacation I had a holiday-themed catch-up with a number of friends and colleagues on campus. With the semester winding down, and with the holidays as an excuse, it was a good opportunity for people to get together, share some news about what had transpired over the past semester, compare notes, swap best practices, and so on. One of my colleagues inquired how things are going in the office as far as admissions go. There seems to be some doom and gloom over falling admissions on campus, but that's a topic for another day. Things are going well in my department (knock on wood), so much so that we are not able to admit all qualified applicants since we don't have enough people to teach for us.

My colleague's solution (my colleague is a full-time instructional designer, for what it's worth) was that we need to "change the model": instead of relying on tenure stream professors to teach our courses, we could have subject matter experts design the online courses and hire an army of adjuncts to teach for us. The tenured professors would have final say on the content, and the adjuncts, who cost less, would teach to that content. This, after all, seems to be the model that other schools employ, especially those with online programs, so the message seemed to be that we need to get with the program and move away from an outdated model.  Now, tenure may have its issues, but I think that swinging the pendulum mostly in the other direction is the wrong solution. My bullshit alarm (for lack of a better term) starts to go off when I hear about some of these "new models," in the same way my BS alarm went off when I was hearing about sub-prime mortgages and derivatives as an MBA student (you remember those?).

I don't know how I found myself in higher education administration, but I did end up here. As a matter of fact, I am coming up on three years in my current job (closing in on those 10,000 hours that Malcolm Gladwell wrote about!). The thing that has become abundantly clear to me is that there is a compartmentalization of information, know-how, and, most importantly, understanding of what needs to happen in a large organization such as a university, so simplistic solutions, such as "changing the model," become the norm in thinking. This is quite detrimental, in my opinion, to the overall longevity of programs. These simplistic solutions may come from the best of intentions, but when one doesn't have complete information at their disposal it's easy to arrive at bad solutions.

First, there is an assumption that we don't have an overall curriculum, hence the suggestion of "master courses" that any ol' adjunct can teach. The fact is that we do have extensive program-level outcomes in our program, and a somewhat set curriculum.  At the broad level it is set, but at the day-to-day level there is flexibility for subject matter expertise.  I don't want to get into the issue of academic freedom; I find that this term gets abused to mean (almost) anything that faculty members want it to mean. However, in this case I do want to draw upon it to illustrate a point: at the day-to-day level of a class, so long as faculty are meeting the learning objectives of the course, the readings they choose as substitutes to the agreed-upon curriculum (especially if more than two people are in charge of teaching the same course) are not put under the microscope, and faculty aren't prevented from exercising their professional license.

Secondly, and most importantly, simplistic (and often cheap for the institution) solutions to expand capacity treat all adjuncts as the same and interchangeable. This is patently wrong on so many levels. The way I see it, there are two types of adjuncts (those of you who study higher education administration may have more - please feel free to comment). The first type are the people the adjunct system was "built" for.  Those are people like me: people who have a day-job somewhere, enjoy what they do, and share their practice with those who are training to enter our profession. Our day-jobs essentially pay our wages, and what we do we do as service to the profession and for the love of teaching. This way the (usually) small payment per course can really be seen as an honorarium rather than as payment for services rendered.  The second type of adjunct is the person who is doing it as their day-job and thus needs to teach many courses (perhaps at multiple institutions) to make ends meet.  This second type of adjunct is probably what is most prevalent in academia today, at least from what I read.  Regardless of whether they are of type 1 or type 2, adjuncts who teach, both for our institution and elsewhere, are professionals who have earned their PhDs, in many cases conduct research, and are active in their fields in one way or another; but most of all they are human beings. By coming to the table with the mentality that they are interchangeable - just give them a pre-made course shell and let them run with it - you are not only undermining their humanity but also their expertise in the field. After all, someone you wind up and let run doesn't necessarily have a voice to help your department improve its course offerings and its programs. You are shutting them out.

Now, as a case study, let's take my program.  I would estimate that, depending on the semester, anywhere from 75%-90% of the online courses are taught by adjuncts.  In the summers (optional semesters) the ratio is actually the inverse. By hiring more adjuncts, in order to matriculate more students, the tenure to non-tenure ratio gets more skewed. This, to me, is problematic.  A degree program isn't just about the 10 courses you take in order to complete your degree.  A degree program is about more than this, and tenure stream faculty (i.e. permanent faculty) are vital to the health of degree programs and to the success of learners in those programs. Adjuncts, as seasonal employees, are only hired to teach the courses that they are hired to teach, and nothing else. This represents a big issue for programs. Here is my list of six issues with over-reliance on adjunct labor:

Issue 1: Advising

I must admit my own experience with advising, throughout my entire experience as a learner, has been spotty at best.  Some students don't take advantage of advising; we think we know better and have all the answers.  Some advisors treat advising as a period to get students signed up for courses.  Both attitudes are wrong.  Advising is about relationships. It's about getting to know the students, their goals, their intents, and their weaknesses, and working with them to address those issues. At the end of a student's studies, the advising that occurred during the period of study should help them get to the next leg of where they are going, on their own.  Through this type of relationship building, advisors get to know their advisees and can even provide references for them if they decide to move on to the next level of study, or if they require a reference for work. Even if one compensated adjuncts for advising, how do you quantify the pay?  Do you do it in terms of hours? That's kind of hard to do.  Even if you arrived at fair and equitable pay for the work, adjunct hiring is subject to volatility: you don't make a long-term commitment to them, and they don't necessarily make one to you (see issue 3).  This is no way to build an advising relationship.

Issue 2: committee work

This second issue brings us back to those master courses that my colleague talked about.  These things are decided by committee in the grand scheme of things, since a curriculum needs to make sense - it's not a hodgepodge of a little-bit-of-this and a little-bit-of-that. Faculty are not hourly employees, but adjuncts are sort of treated as hourly employees if we decide to compensate them for this type of work. It may work, but it might require punching a card.  For people who are basically paid honoraria, do you really want to nickel and dime them? Sometimes committees meet for their usual x hours per month and things are done fairly quickly, and other times committees meet many hours in preparation for accreditation, just as an example. This, of course, assumes that adjunct faculty members can do committee work for some additional pay (which usually isn't a lot). What if they can't? What if they have other priorities? If this is the case, all of the work falls upon the few tenure-stream people in the department. This has the effect of both keeping adjuncts away from critical decisions and implementations made by the department, and dumping more on the full-time people in the department. Adding more adjuncts to the payroll would most likely serve to amplify this, and to add to the factory model of producing academic products.


Issue 3: department stability vis-à-vis perpetual hiring

When you hire a full-time staff member, chances are high that they will be around for a while if they are worth their salt. If you hire a faculty member on the tenure stream, chances are that this is a career move and that this person won't be leaving any time soon.  This provides the department with stability in many ways.  It provides a core group of people to shepherd the department, its curriculum, and, most importantly, the students.  With adjuncts, given their semester-to-semester nature (i.e. no long-term contract with the institution), it makes sense that these individuals will most likely be working elsewhere and have other commitments; or they might just be looking for a full-time gig, in which case your institution or department will come second.  This isn't good, and if adjunct instructors leave your department you need to look for replacements. This adds to the workload of the few full-time faculty who need to start a search, review CVs, and interview people.  This isn't a job for one person, but rather for a committee of at least three members to vet and verify what's on the CVs and conduct the interviews.

Once the hiring is complete there is some mentoring that goes on to make sure the new hires are successful, and even then you aren't guaranteed that they will work out. I'd say that you need at least two, if not three, semesters to get an accurate idea of how well these new hires teach, work, and fit in with your institutional culture. If things work out, great! Then you pray that they won't leave you in the lurch when something better comes along.  If things don't work out, not only do you have to start the search again (which is time- and energy-consuming), you may have issues with your learners; it may be the case that these new hires were awful and as such did a major disservice to your learners. This is something that needs mending, both from a content perspective and a human relations perspective.  Again, this takes time and effort.  Yes, I hear some of you say that this is also the case with tenure stream faculty.  This is true! It's true for all new hires. There is a period of trial-and-error, acclimation, and kicking the tires that happens, both on the new hire's side and the department's side. However, once a new hire passes their 4th-year review and they are reasonably certain of tenure, that's basically it; you don't generally need to worry that you are going to lose them and have to start your search all over again. Not so with adjuncts. Commitment is a two-way street.

Issue 4: quality of adjuncts

The issue of quality of adjuncts cuts a number of ways.  If you luck out and find someone good in your search, you'll know within a semester or two if they pass muster (and they will know if they are a good fit for your department). It is risky having any new hire, especially one with so much power over the learning of a group of students, as I mentioned above.  There are, however, other dimensions of quality. One of my considerations for quality is how current people are in their fields.  I generally do not like people who myopically focus on their own research as the cutting edge of what's out there in the field, but this is one of the legitimate ways of keeping current.

Many departments that I've been in contact with use one measurement for adjunct quality: course evaluations.  I am the first to say that I am not an expert in this arena since I have not studied it, but I think this is complete bunk.  As I like to say, you can have an instructor who is Mr. or Ms. Congeniality and basically bamboozles students into thinking that they have learned something relevant and worthwhile. Thus the students are more apt to give good reviews to bad instructors. Those people are then hired to continue teaching, to the detriment of future learners. As an aside, I just read a story on NPR on course evaluations. Pretty interesting read - course evaluations apparently are bad measurement instruments.

Finally, just to wrap this section up, another issue I've seen is course-creep.  Someone is hired specifically to teach one course, CRS 100 for example, and then, due to many and varying reasons, they are given courses CRS 150, 200, 350, 400, 420, and 450.  The person may not really be a subject expert in these areas, and may not even have enough time to catch up on the latest developments for their own sake and the sake of their learners, but due to inadequate quality measurement instruments those people get to teach more and more courses in their respective programs.  As a side note, it seems as though accreditors might be taking notice of the increased reliance on adjunct faculty.

Issue 5: disproportionate representation of faculty through teaching more courses, and issues of diversity

So, we've come to the point in our discussion (with my instructional design colleague) where the suggestion is to just create additional sections for the instructors who have proven themselves over the years.  First, this assumes that the people we hire can teach additional courses for us. This is generally not the case.  The people who teach for us have day-jobs. They are professors at their own institutions and they have responsibilities to their own home departments.  Adding more courses to their teaching roster simply isn't feasible from a logistics point of view.  Even if it were possible, departments don't grow by simply hiring more of the same.  The way organizations grow is through diversification. New faculty hires would surely be able to teach some intro-level courses in our program, but they would also bring in their own expertise.  This expertise would allow the department to create additional tracks of study, offer different electives, and provide seminar series for the diverse interests of current students and alumni.  The more-of-the-same approach may work short term, but it's not a great long-term strategy.

Still, some departments do expand someone's course-load to include more courses.  As we saw in issue #4, this is an issue of quality.  It is also an issue of lack of diversity and disproportionate representation of one faculty member.  I would feel very odd if I were teaching and students were doing 1/4, or 1/3, or 1/2 of their courses with me because it was compulsory.  If students really opted to take more courses with me, then more power to them; they've made an informed decision.  However, if courses are required and students only have one faculty member to choose from, then that is bad for them in the long run because they don't get a diversity of views, opinions, expertise, and know-how from the field (if the adjuncts are from a more practical background).

Issue 6: research of tenure stream faculty

Now, as I wrote above, I really don't like it when faculty drone on and on about their research and their research agenda, and look for ways to get out of teaching. Being a faculty member is often compared to being a three-legged stool: teaching, research, and service. You can't extend one leg, shorten another, and expect to have balance.  If you wish to be only a researcher then by all means, quit your academic job and go find a research-only job.  That said, research, and being up-to-date, is important.  For me it connects with a measurement of quality. Adjuncts are only hired, and paid, for teaching.  Since there is no research requirement in their jobs, research and continuous quality improvement may not be something that they undertake. This is bad not only for the students, but also for the department.  One of the ways that we are able to attract students to our respective programs is through name-brand recognition.  At a recent open house my department had on display books published by our faculty.  Several students commented on the fact that we had that Donaldo Macedo, who worked with Paulo Freire, in our department. Yes, we have that Charles Meyer, who's a pioneer in corpus linguistics. These are just two examples, but it gets people to pay attention to you.  Even with my own studies, one of the reasons I chose Athabasca was the fact that I had read work by Fahy, Anderson, Dron, Ally, and Siemens. I was familiar with the CoI framework and the work done on it, and I am a reader of IRRODL.  The fact that AU is the place where all these things are happening was a catalyst for me to apply and attend. All of this comes directly from the research work and public outreach of the full-time faculty of that institution. Adding more adjuncts to the payroll doesn't get you this in the long term. Again, you invest in your faculty and you get paid back with dividends!


Conclusion

To wrap this up: in the big organization that we all work in, we all have many different jobs, little communication, and no one has the big picture. I consider myself lucky. Having worked as a media technician, a library systems person, a library reference and training person, an instructional designer, an adjunct faculty member, and now a program manager, I've seen many of the different levels of what's going on in academia.  I have a more complete picture, much more so than most of my colleagues on the same job/career path. The upper administration is still a bit of a mystery to me, but I guess I still have room to grow. I am grateful that friends and colleagues want to help out with growing our program, but without having all of the information, I am afraid that "changing the model" is simply code for doing it quicker and cheaper and churning out more students.  Students need mentors, advisors, and role models. The adjuncts we've had teaching for us for the past 3 years (or more) are great and do, unofficially, provide that to our learners. However, you can't grow a program on adjuncts. What it comes down to, for me, is recognizing the humanity of adjuncts, compensating them well, bringing them into the fold as valuable contributors to the department, and investing long-term in programs.  Figure out what you need tenure stream people for, what you need lecturers for (adjuncts with long-term contracts), and work strategically. A semester-to-semester, adjunct-majority model is not the way forward.


Your thoughts?

Sunday, December 28, 2014

MOOC thoughts closing out 2014

It's the final stretch of 2014! This makes it my fourth year of exploring MOOCs - boy does time fly!  When I started off with LAK11 I was really just looking for ways to continue learning for free.  While I do get a tuition benefit at work, using it involves standard semesters of 13 weeks, getting work-release time (since online learning isn't covered by the benefit), and retaining the motivation to keep going through a predefined course and syllabus.  Even when MobiMOOC happened and we formed the MobiMOOC research team, I really didn't foresee that the, oddly named, MOOC would catch fire the way it did.  At the time I was eager to get some initial thoughts together on how to put together a MOOC (now they are called cMOOCs) and put together a Great Big MOOC Book, with others, that was the right mix of research and practice.  Since the MOOC has really expanded a lot over the years, with many different things being called a "MOOC," the original idea might be better renamed The Great Big Book on Open Online Learning (if there are any takers on this, you know my email and twitter - it should be a fun little project, licensed under Creative Commons of course).

Each year of my involvement in MOOCs I meet some great new people, get re-acquainted with some old, trusty MOOCers, and learn more about my own learning behavior in these open spaces.  In addition to the cMOOC (Connected Courses) and the rMOOC (Rhizo14), there are a few things that I explored in the xMOOC world this year that made me ponder and still keep me thinking. Here is a high-level overview of four things that stood out to me:

Languages other than English:

This year I experimented with MOOC providers whose primary language is something other than English.  Those were MiriadaX (Spanish), France Université Numérique (French), and OpenCourseWorld (German). Even though I never studied Spanish in a classroom, the amount that I self-studied, and my knowledge of other Romance languages, made it possible to go through a number of MOOCs on this platform. On the one hand I think it's great to have content in another language, but the paradigm that they are using (video lecture, textual materials, quizzes) seems fundamentally flawed to me for "deep" learning. There were some MOOCs that I really enjoyed (the 3rd Golden Age of TV, for example), but this was probably because of the camera work for the videos, the on-screen chemistry of the presenters, and the analysis of the topic.  The white-screen and voice-over-PowerPoint ones had me yawning.  I wanted to pay more attention, but I found the visuals distracting me from paying attention to a language that I didn't speak well, so the lack of motivation became a language comprehension issue.

I only attempted one MOOC at FUN, which was basically a how-to on running your own MOOC from soup to nuts. The FUN platform is based on Open edX, which made it familiar. The interesting thing about this MOOC was the multiple ways of going through it.  It was basically broken down by ADDIE and you could pick any track to complete the course.  Some newbies could focus on the A and D parts, while others could work more on implementation.  Due to time constraints I didn't "finish" this MOOC, but I did like it a lot as a way to practice my French.  The thing I found out is that on MiriadaX, when submitting things in passable Spanish (or English!), I would get OK feedback, whereas on FUN I would be docked points on assignments for bad French. I haven't written detailed French for a while now; I think the last time I did was for a cMOOC, on this blog, so it's probably not that good.  This was an interesting social experience for me (grading with a language barrier).

Certification - M'eh

In previous years, when certificates of completion on the various MOOC platforms were easier or free to get, I actually cared more about "passing" the course and getting that little piece of digital paper.  I know it's silly, but I would enroll in fewer MOOCs and do all the assignments (no matter how silly or non-applicable some of them might seem to me) in order to get the certificate.  Basically, even if I only wanted to do the assignments I thought were cool, if I was close percentage-wise to the minimum mark for a certificate of completion I would make myself do the ones I didn't care much for because I was so close to that certificate.  This year, with the advent of verified certificates, and the lack of a basic and free certificate for those courses, I decided that I could dispense with the assignments altogether. Basically, what it boils down to is that since there was no chance of getting a prize at the end of the race, why bother staying on the path?  This year my xMOOC approach (at least with Coursera, where courses may become inaccessible at their conclusion) has been to enroll in anything that seems interesting and download all the resources while I still have them available, then load them onto an iPod and go through them when there is an opportunity to do so.  This means that I am taking back control of my time and deciding when to learn, and what to learn, on my own time.  The only exception to this tactic has been edX.  Their courses still award a free certificate (so I am still hooked), and I made an attempt to participate in DALMOOC, which was a topic of interest, but also tried to blend the cMOOC and xMOOC formats in a way.  Not so sure how well it did (based on my cursory observations) but I am looking forward to any post-mortem research on this course!

Who is vetting these things?

Even back in 2008, when Siemens, Downes, and Cormier worked on CCK, there were academic names attached to MOOCs, such as the University of Manitoba, the University of Prince Edward Island, and the National Research Council of Canada.  The thing that I have noticed this year is that more and more non-academic entities are entering the MOOC space.  Even if you discount the non-MOOC MOOC provider Udemy, there are MOOC providers outside of North America that are accepting MOOCs from non-academic entities, such as firms focused on brand image, gamification, and so on.  That's fine; there are many fine folks outside of academia who do research on these things and want to share their passion, but sometimes I feel like I am being sold to when I am taking a MOOC that is not affiliated with a university.  Maybe this is just a perception issue, but I see MOOC offerings by universities as a public service, while MOOCs from a business entity feel like something freemium: if I want more (or more substantive things) I ought to buy their books, software, or services.

Research is here!

For the past few years articles on MOOCs have been few and far between.  It was always great to get a new issue of IRRODL, or JOLT, or any other open access publication and see an article on MOOCs.  The surprise factor was nice, but not so nice in that it meant we weren't getting much actual research in this area; a lot was opinion (informed and uninformed) and speculation. In 2014 I think we saw the tide change a bit, with more research coming out on MOOCs.  I hope that this trend continues!

So, that's it for me and MOOCs in 2014.  What are your highlights (or low-lights) with MOOCs this year?

Sunday, December 21, 2014

DALMOOC Episode 10: Is that binary for 2? We've reached recursion!

Hey!  We've made it! It's the final blog post about #dalmooc... well... the final blog post with regard to the paced course on edX anyway :)  Since we're now in vacation territory, I've decided to combine Weeks 9 and 10 of DALMOOC into one post.   These last two weeks have been a little light on the DALMOOC side, at least for me.  Work, and other work-related pursuits, made my experimentation with LightSIDE a little light (no pun intended).  I did go through the videos for these two weeks and I did pick out some interesting things to keep in mind as I move through this field.

First, the challenges with this sort of endeavor. To start, we have data preparation. This part is important since you can't just dump data from a database into programs like LightSIDE; data needs some massaging before we can do anything with it.  I think this was covered in a previous week, but I think it needs to be mentioned again since there is no magic involved, just hard work!

The other challenge mentioned this week was labeling the data. Sometimes you get the labels from the provider of the data, as was the case with the poll example used in one of the videos for week 9. To do some machine learning, the rule of thumb, at least according to DALMOOC, is that at least 1,000 instances of labeled data are needed - more or less labelled data would be needed depending on individual circumstances.  For those of you keeping track at home, Carolyn recommends the following breakdown (a small sketch of one way to make this split follows the list):
200 pieces of labelled data for development
700 pieces of labelled data for cross-validation
100 pieces of labelled data for final testing
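
Just to make that breakdown concrete, here's a minimal sketch (in Python, with pandas and scikit-learn rather than LightSIDE) of one way to partition roughly 1,000 labelled examples along those lines. The file name and column names are hypothetical.

# A minimal sketch of partitioning ~1,000 labelled examples into the
# development / cross-validation / final-testing split described above.
# "labelled_posts.csv" and its columns are made up for illustration.
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("labelled_posts.csv")             # ~1,000 rows: text + label

# Hold out 100 examples for final testing first...
rest, final_test = train_test_split(
    data, test_size=100, random_state=42, stratify=data["label"])

# ...then carve out 200 for development; the remaining ~700 are for cross-validation.
cross_val, dev = train_test_split(
    rest, test_size=200, random_state=42, stratify=rest["label"])

print(len(dev), len(cross_val), len(final_test))     # 200, 700, 100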

Another thing to keep in mind, and I think I've mentioned this in previous weeks, is that machine learning won't do the analysis for you (silly human ;-) ).  The important thing here is that you need to be prepared to do some work, some interpretation, and of course, to have a sense of what your data is. If you don't know what your data is, and if you don't have a frame through which you are viewing it, you are not going to get results that are useful. I guess the old saying "garbage in, garbage out" is something we need to be reminded of.

So, DALMOOC is over, and where do we go from here?  Well, my curiosity is a bit more piqued. I've been thinking about what to do a dissertation on (I'm entering my second semester as a doctoral student) and I have all next summer to do some work on the literature review.  I am still thinking about something MOOC-related, though some of my initial topics seem to already be topics of current inquiry and of recent publications, so I am not sure where my niche will be.  The other fly in the ointment is that the course I regularly teach seems to have fewer students in it, so design-based research on that course (that course as a MOOC, I should say) may not be an option in a couple of years. Thus, there is a need for Plan B: I am actually thinking of going back to my roots (in a sense) and looking at interactions in a MOOC environment.  The MRT and I have written a little about this, looking at tweets and discussion forums, so why not do something a little more encompassing?  I guess I'll wait until the end of EDDE 802 to start to settle on a topic.

What will you use your newly found DALMOOC skills on?





Monday, December 15, 2014

First semester done!

Hurray!

The first semester of my doctoral studies is done!  Well, it was done last week, but as I wrote in the previous post (on #dalmooc) it's been one crazy semester.  I had hoped that I would blog once a week on the topic of EDDE 801, sharing some interesting nuggets of information each week, but between MOOCs like #ccourses, work, and regular EDDE 801 work, no such luck.  I felt I was putting enough time into EDDE 801 and that I was giving everything to the closed system that is Moodle rather than to the blog.  So, here's one blog post to try to re-capture some thoughts I had while the semester was in progress.

Early on, one of the things I really dreaded was the synchronous sessions, every Tuesday at 8 PM (my time).  My previous experience with synchronous sessions was not a good one, thus coloring my expectations for this course. Most of my previous experience has been one-way communication webinars (yaaaawn), or mandatory synchronous sessions for student presentations in online courses - for my Master's programs. The problem there was that no one provided any scaffolding for my fellow students on what constituted good online presentation skills, so students would often drone on and on (not really checking in with the audience) and they would often use up their allotted time, and then some. I don't blame my former classmates, just the system that got them into that situation.  So, here I was, getting ready for a snooze-fest.

I am glad to say that it wasn't like this. Most seminars were actual discussions, and Pat did prod and poke us to get the discussion going. Most of the guest speakers were lively and engaged with the audience in some fashion, and my classmates were good presenters.  If I yawned it was due to the time of day rather than boredom. So, the final verdict is that the synchronous sessions were done well, compared to my previous experience. Am I a synchronous conferencing convert? Not yet.  Like Maha Bali, I still have an affinity for asynchronous.

The one thing that gave me pause to think, with EDDE 801, was the discussion-board assignments.  In my previous experience, with no required weekly synchronous sessions, the bread-and-butter of a course was the weekly discussion forums (sometimes 1, sometimes 2, rarely 3).  In 801 we had to do two literature reviews and facilitate two discussions based on those literature reviews.  We have 12 in our cohort, so that would be 24 discussions.  Initially I didn't think this would be "enough work" (yeah...I don't know what I was thinking), but as the semester progressed and people participated in the forums vigorously, near the end I got into a bit of a cognitive overload situation where I couldn't really read any more (sorry to the last 4 literature reviews posted - I couldn't really focus on them as much as I did on the early ones).

Finally, one thing I wanted to do this semester, but really didn't get a chance to, was to make a sizable dent in the literature I've collected for a potential dissertation topic on MOOCs.  I did read some articles in order to do my presentation for the course, but it didn't end up being as big a dent as I had hoped.  I was initially thinking that I would do some over the break, but with the semester starting January 15, I'm thinking of rest and relaxation now, and dissertation reading this summer.

All things considered, not a bad semester! 1/8 done with my doctorate lol ;-)




Friday, December 12, 2014

DALMOOC Episode 9: the one before 10

Hello to fellow #dalmooc participants, and those who are interested in my own explorations of #dalmooc and learning analytics in general.  It's been a crazy week at work with many things coming down all at the same time, such as finishing advising, keeping an eye on student course registrations and new student matriculations, and making sure that our December graduates are ready to take the comprehensive exam...and many, many more things. This past week I really needed a clone of myself to keep up ;-)  As such, I am a week behind on DALMOOC (so for those keeping score at home, these are my musings for Week 7).

In week 7 we are tackling text mining, a combination of my two previous disciplines: computer science and linguistics (yay!). This module brought back some fond memories of the corpus linguistics exploration I did while I was doing my MA in applied linguistics. This is something I want to get back to at some point - perhaps when I am done with my doctorate and have some free time ;-).  In any case, to start off, I'd like to quote Carolyn Rose when she says that machine learning isn't magic ;-) Machine learning won't do the job for you, but it can be used as a tool to identify meaningful patterns. When designing your machine learning, you need to think about the features you are pulling from the data before you start the machine learning process, otherwise you end up with output that doesn't make a ton of sense; the old adage in computer science, "garbage in, garbage out," is still quite true in this case.

In examining some features of language, we were introduced to a study of low-level features of conversation in tutorial dialogue. These were features like turn length, conversation length, number of student questions, student initiative, and student-to-tutor word ratios. The final analysis was that this is not where the action is at. What needs to be examined in discourse situations in learning are the cognitive factors and underlying cognitive processes that are happening while we are learning. This reminds me of a situation this year where a colleague asked me if I knew of research indicating whether response length in an online discussion forum could be used, in a learning analytics environment, to predict learner success.  I sort of looked at my colleague as if they had two heads because, even though I didn't have the vocabulary to explain that these were low-level features, I was already thinking that they weren't as useful as looking at other factors.  So, to bring this back to dalmooc: shallow approaches to the analysis of discussion are limited in their ability to be generalized. What we should be looking at are theory-driven approaches, which have been demonstrated to be more effective at generalizing.
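
As an aside, those low-level features are trivially cheap to compute, which is part of their appeal (and part of why they're tempting even when they aren't where the action is). A toy sketch in Python; the transcript structure here is entirely invented for illustration:

# Toy sketch: computing low-level conversational features of the kind mentioned
# above (turn length, number of student questions, student-to-tutor word ratio).
transcript = [
    ("student", "How do I factor this polynomial?"),
    ("tutor",   "Start by looking for a common factor in every term."),
    ("student", "So I pull out the x first?"),
    ("tutor",   "Exactly. Then see what is left inside the parentheses."),
]

student_words = sum(len(text.split()) for role, text in transcript if role == "student")
tutor_words   = sum(len(text.split()) for role, text in transcript if role == "tutor")

features = {
    "conversation_length": len(transcript),                      # number of turns
    "avg_turn_length": (student_words + tutor_words) / len(transcript),
    "student_questions": sum(1 for role, text in transcript
                             if role == "student" and text.strip().endswith("?")),
    "student_tutor_word_ratio": student_words / tutor_words,
}
print(features)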

In the theoretical framework we look at a few things (borrowing from sociolinguistics, of course): (1) power and social distance explain social processes in interactions; (2) social processes are reflected through patterns in language variation; (3) so our hope is that models which embody these structures will be able to predict social processes from interaction data.

One of the things mentioned this week was transactivity (Berkowitz & Gibbs, 1983), which is a contribution on an idea expressed in a conversation, using a reasoning statement.  This work is based on the ideas of Piaget (1963) and cognitive conflict.  Kruger and Tomasello (1986) added power balance to the equation of transactivity. In 1993 Azmitia & Montgomery looked at friendship, transactivity, and learning: in friend pairs there is higher transactivity and higher learning (not surprising, since the power level is around the same between both people).



Finally, this week I messed around with LightSIDE, without reading the manual ;-).  According to Carolyn the manual is a must-read (D'oh ;-)  I hate reading manuals).  I did go through the mechanical steps that were provided on edX to get familiar with LightSIDE, but I was left with a "so what" feeling after.  The screenshots are from the work that I did.  I fed LightSIDE some data, pulled some virtual levers, pushed some virtual buttons, turned some virtual knobs, and I got some numbers back.  I think this falls in line with the simple text mining process of having raw data, then extracting some features, then modeling, and finally classifying.  Perhaps this is much more exciting for friends of mine who are more stats- and math-oriented, but I didn't get the satisfaction I was expecting - I was more satisfied with the previous tools we used. Maybe next week there is more fun to be had with LightSIDE :-)
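
For anyone curious what that "raw data, extract features, model, classify" flow looks like outside of a GUI, here's a tiny sketch using scikit-learn instead of LightSIDE. The example texts and labels are made up, and this is only meant to show the shape of the pipeline, not a real model:

# Minimal sketch of the text mining pipeline: raw text -> features -> model -> classify.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I agree, and here is why your reasoning works for this case...",
    "ok",
    "Building on your point, the evidence also suggests the opposite effect.",
    "lol same",
]
labels = ["transactive", "non-transactive", "transactive", "non-transactive"]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),   # extract bag-of-words + bigram features
    LogisticRegression(),                  # the classifier trained on those features
)
model.fit(texts, labels)                   # modeling step
print(model.predict(["I see what you mean, but consider this counter-example."]))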

So, how did you fare with Week 7?  Any big take-aways?






Friday, November 28, 2014

DALMOOC episode 8: Bureau of pre-learning

I see a lot of WTF behavior from learners. This is bad... or is it?
Oh hey!  It's week 6 in DALMOOC and I am actually "on time" this time!  Even if I weren't, it's perfectly OK since there are cohorts starting throughout the duration of the MOOC (or so I suspect), so whoever is reading this: Hello!

This week the topic of DALMOOC is behavior detectors (types of prediction models).  Behavior detection is a type of model (or types of models) that we can infer from the data collected in the system, or set of systems, that we discussed in previous weeks (like the LMS, for example).  Some of these are off-task behaviors, such as playing Candy Crush during class or doodling when you're supposed to be solving for x. Other behaviors are gaming the system, disengaged behaviors, careless errors, and WTF behaviors (without thinking fastidiously?  or...work time fun? you decide ;-) ). WTF behavior is working on the system but not on the task specified.  As I was listening to the videos this week and thinking about gaming behaviors,‡ I was thinking that not all gaming behavior is bad.  If I am stuck in a system, I'm more apt to game it so that I can move on, and try to salvage any learning, rather than just stay stuck and say eff-it-all.  I wonder what others think about this.

Some related problems for behavior detectors are sensor-free affect detection of boredom, fun, frustration, or delight.  Even with sensors, I'd say that I'd have problems identifying delight. Maybe my brain looks a certain way in an MRI machine when I get a sense of delight, but as a human I find this a concept that would be hard to pin down.

Anyway - another thing discussed this week is ground truth. The idea is that all data is going to be noisy, so there won't be one "truth," but there is "ground truth." I guess the idea here is that there is no one answer to life, the universe, and everything, so we look at our data to determine an approximation of what might be going on.   Where do you get data for this? Self-reports from learners, field observations,§ text analysis, and video coding. The thing I was considering (and I think this was mentioned) is that self-reporting isn't that great for student behaviors; after all, most of us don't want to admit that we are gaming the system or doing something to subvert it. Some people might just do it because they don't care, or because they think that your exercise is stupid and they will let you know, but most, I think, care what others think, and might have some reverence for the instructor, which would prevent them from accurately self-reporting.

One of the things that made me laugh a bit was an example given of a text log file where the system told the learner that he was wrong, but in a cryptic way. This reminds me of my early MS-DOS days, when I was visiting relatives who had Windows 3.1 (for Workgroups!) and I was dumped from the GUI to a full-screen DOS environment.  I didn't know any commands, so I tried natural language commands...and I got the dreaded "error, retry, abort" and typing any of those three words (or combinations of them) did not work. Frustration! I thought I had broken the computer and no one was home!

Another thing that came to mind with these data collection methods is the golden triangle (time, quality, cost).  Not every method is equal to the others. For instance, video coding is the slowest, but it is replicable and precise.

Moving along, we talked a bit about feature engineering (aka rational modeling, aka cognitive modeling), which is the art of creating predictor variables. This is an art because it involves lore more than well-defined principles. It is also an iterative process.  Personally I was ready to write this off, but the art and iteration aspect is something that appeals to me rather than just cold, hard black boxes. The idea is that you go for quantity at first, not quality, and then you iterate forward, further refining your variables.  Just like in other projects and research, you can build off the ideas of others; there are many papers out there on what has worked and what hasn't (which seems like advice I was also given at my EDDE 801 seminar this past summer).  Software you can use for this process includes Excel (pivot tables, for example) and OpenRefine (previously Google Refine). A good thing to remember is that feature engineering can over-fit, so we're going back to last week where we said that everything over-fits to some extent.
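
For what it's worth, the pivot-table style of feature engineering mentioned above translates pretty directly to pandas if you'd rather script it than click through Excel. A hypothetical sketch (the log file and its columns are invented):

# Sketch: turning a raw clickstream log into per-student predictor variables
# with a pivot table, the pandas way rather than the Excel way.
import pandas as pd

log = pd.read_csv("clickstream_log.csv")   # hypothetical columns: student_id, action, duration

features = log.pivot_table(
    index="student_id",        # one row of features per student
    columns="action",          # one group of columns per action type
    values="duration",
    aggfunc=["count", "mean"], # how often, and for how long on average
    fill_value=0,
)
print(features.head())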

Finally, we have diagnostic metrics. My eyes started glazing over a bit with this.  I think part of it was that I didn't have my own examples to work with, so it was all a bit abstract (which is fine). I am looking forward to the spring 2015 Big Data in Education MOOC to go a bit more in depth with this.  So what are the diagnostic metrics mentioned? (I might need a more detailed cheat-sheet for these; a small code sketch after the regressor list below shows how a few of them are computed.)
  • ROC -- Receiver Operating Characteristic curve; good for a two-value prediction (on/off, true/false, etc.)
  • A' -- related to ROC; the probability that, if the model is given one example from each category, it can tell which is which. A' is more difficult to compute than kappa and only works with two categories, but it is easy to interpret statistically.
  • Precision -- the probability that a data point classified as true is really true
  • Recall -- the probability that a data point that is actually true gets classified as true

We also covered regressors, such as:
  • Linear correlation -- if X's values change, do Y's values change as well? Correlation is vulnerable to outliers.
  • R-squared -- correlation squared; also a measure of what percentage of the variance in the dependent measure is explained by a model. Which one gets used largely depends on which community has adopted it.
  • Mean Absolute Error (MAE) -- tells you the average amount by which predictions deviate from actual values
  • Root Mean Squared Error (RMSE) -- does the same but penalizes large deviations more
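
Since I said I might need a cheat-sheet, here is a quick sketch of how several of these metrics can be computed with scikit-learn and scipy. The tiny example arrays are made up, and A' is approximated here by ROC AUC, to which it is closely related:

# Classifier-style metrics (true/false predictions)
from sklearn import metrics
from scipy.stats import pearsonr

y_true  = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred  = [1, 0, 1, 0, 0, 1, 1, 0]
y_score = [0.9, 0.2, 0.8, 0.4, 0.3, 0.7, 0.6, 0.1]   # model confidences

print("precision:", metrics.precision_score(y_true, y_pred))
print("recall:   ", metrics.recall_score(y_true, y_pred))
print("kappa:    ", metrics.cohen_kappa_score(y_true, y_pred))
print("ROC AUC:  ", metrics.roc_auc_score(y_true, y_score))

# Regressor-style metrics (numeric predictions)
actual    = [3.0, 1.5, 4.0, 2.0, 5.0]
predicted = [2.5, 1.0, 4.5, 2.5, 4.0]
r, _ = pearsonr(actual, predicted)
print("correlation:", r, "  r-squared:", r ** 2)
print("MAE: ", metrics.mean_absolute_error(actual, predicted))
print("RMSE:", metrics.mean_squared_error(actual, predicted) ** 0.5)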

Finally, there are different types of validity (this brings me back to my days in my first research methods course):
  • Construct validity -- Does your model measure what it says it measures?
  • Predictive validity -- Does your model predict the future as well as the present?
  • Substantive validity -- Do the results matter? (or, as Pat Fahy would say, "so what?")
  • Content validity -- Does the test cover the full domain it's meant to cover?
  • Conclusion validity -- Are the conclusions justified based on the results?

So, that was week 6 in a nutshell.  What stood out for you all?


SIDENOTE:
† Image from movie Minority Report (department of precrime)
‡ Granted, I need to go and read more articles on gaming behaviors to know all the details, this was just an initial reaction.
§ There is a free android app for Field Observations that they've developed

Tuesday, November 25, 2014

DALMOOC episode 7: Look into your crystal ball

Whooooa! What is all this?


Alright, we're in week six of DALMOOC, but as usual I am posting a week behind.  In previous weeks I was having a ton of fun playing with Gephi and Tableau. Even though the source material wasn't that meaningful to me, I was having fun exploring the potential of these tools for analytics. This week we got our hands on RapidMiner, a free(mium) piece of software that provides an environment for machine learning, data mining, and predictive analysis.

Sounds pretty cool, doesn't it?  I do have to say that the drag-and-drop aspect of the application makes it ridiculously easy to quickly put together some blocks to analyze a chunk of data. The caveat is that you need to know what the heck you are doing (and obviously I didn't ;-) ).  I was having loads of issues navigating the application: I somehow managed to not get some windows that I needed in order to input information, and I couldn't find the functions that I needed...  Luckily a colleague who actually works on machine learning was visiting and was able to give me a quick primer on RapidMiner - crisis averted.  I did end up attempting the assignment on my own, but I wasn't getting the right answer.  With other things to do, I gave up on the optional assignment ;-)

With that software experience this past week, what is the use of prediction modeling in education? Well (if you can get your software working ;-) ), the goal is to develop (and presumably use) a model which can infer something (a predicted variable) from some combination of other aspects of the data that you have on hand (a.k.a. predictor variables).  Sometimes this is used to predict the future, and sometimes it is used to make inferences about the here and now. An example of this might be using a learner's previous grades in courses as predictors of future success.  To some extent this is what SATs and GREs are (and I've got my own issues with these types of tests - perhaps something for another post).  The key thing here is that there are so many variables in predicting future success; it is not just about past grades, so take that one with a grain of salt.

Something that goes along with modeling is regression: you use this when there is something you want to predict and it is numerical in nature. Examples of this might be the number of student help requests, how long it takes to answer questions, how much of an article was read by a learner, prediction of test scores, etc. A regressor is a number that predicts another number.  A training model is one built from data where you already know the answers, which you use to teach the algorithm.

There are different types of regression.  A linear regression is flexible (surprisingly so, according to the video), and it's a speedster.  It's often more accurate than more complex models (especially when you cross-validate), and it's feasible to understand your model (with some caveats).
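
To make that concrete, here's a minimal sketch of a linear regressor in Python with scikit-learn; the predictor variables and numbers are invented, and this just stands in for what a tool like RapidMiner would do behind its blocks:

# Sketch: predict a number (e.g. a test score) from other numbers.
from sklearn.linear_model import LinearRegression

X = [[5, 120], [2, 40], [8, 200], [4, 90], [7, 150]]   # e.g. [help requests, minutes on task]
y = [68, 45, 90, 60, 82]                               # e.g. final test score

model = LinearRegression().fit(X, y)                   # train on data with known answers
print(model.coef_, model.intercept_)                   # the fitted line is easy to inspect
print(model.predict([[6, 130]]))                       # predicted score for a new learner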

In watching the videos last week, some examples of regression algorithms I got conceptually, from a logic perspective, but some just seemed to go right over my head.  I guess I need a little more experience here to really "get it" (at least in an applied sense).

Another way to create a model is classification: you use this when there is something you want to predict (a label) and that prediction is categorical; in other words it is not a number, but a category such as right or wrong, or will drop versus will persevere through the course. Regardless of the model you create, you always need to cross-validate it at the level you are using it at (e.g. new students? new schools? new demographics?), otherwise your model might not be giving you the information you think it's giving you.
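
And the classification-plus-cross-validation idea, sketched the same way (again with invented features and labels); the point is just that accuracy gets measured on folds the model wasn't trained on:

# Sketch: predict a category ("drop" vs "persist") and cross-validate the model.
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X = [[0, 10], [5, 2], [1, 8], [6, 1], [2, 9], [7, 0], [1, 7], [5, 3]]  # e.g. [logins/week, missed assignments]
y = ["persist", "drop", "persist", "drop", "persist", "drop", "persist", "drop"]

scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=4)   # 4 held-out folds
print(scores.mean())                                             # average held-out accuracy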

This week, for me, was yet another reminder that I am not a maths person.  Don't get me wrong, I appreciate the elegance of mathematics, but I honestly don't care about optimizing my algorithms through maths.  I'd like to just know that these certain x-algorithms work for these y-scenarios, and I want easy ways to use them :)  Anything beyond that, for me, is overkill.  This is probably why I didn't like my undergraduate education as much as I've enjoyed my graduate education: I wanted to build things, but my program was focused on the nitty-gritty and engine performance :)




SIDENOTES
  • Alternative episode title: Outlook hazy, try again later
  • Neural networks have not been successful methods (hmmm...no one has told this to sci-fi writers ;-) they sound cool, even though they are inconsistent in their results)

Monday, November 24, 2014

Designing in the Open (and in connected ways)

Wow, hard to believe, but we've reached the final module of Connected Courses (and boy is my brain tired!).  I found out last week that there may be a slim chance of my being able to teach Introduction to Instructional Design (INSDSG 601, a graduate course) at some point in the near future. This is something that was offered to me a couple of summers ago, but being away on vacation at the time (with questionable internet access) it didn't seem like a good idea to be teaching an online course.

I've been poking around the course shell, here and there, over the past couple of years (ever since teaching this course became a remote possibility) to get ideas about how to teach it.  The previous instructor, who had been teaching this course for the past 10 years but recently refocused on other things, did a good job with the visual design of the course. It's easy to know what you are supposed to do each week.  Then again, from the design of the course I can see that the focus each week seems to center around the instructor (each week has lectures in addition to chapter readings), and we saw in the literature cited in #dalmooc that this isn't pedagogically effective.  This is something I've been wanting to change.  The other thing that I don't like is the reliance on the Dick & Carey textbook. Granted, this textbook seems to be a seminal book in the field, but it is not the easiest thing to read for a novice learner (who is also figuring out other things about the ID field), and in my experience most learners read it but don't really get the fine-grained elements. This book, in my opinion, is a good reference book, but not necessarily a good instruction book†. The thing that really convinced me to scrap this course and start from scratch with a new design is that the assignments (50% of the final grade) all build on top of one another, culminating in a final project (the other 50% of the final grade), and they all take place in the forums.  The project-based aspect I like, and I also like the peer review aspect.  However, I don't like this double-counting of points, or the closed nature of the course (everything happening in an LMS). So, here we go with a re-design (if I know I am teaching the course)!

The learning objectives (that I can't really mess with) are as follows:
  • State the reason for using an Instructional Design Model. 
  • Identify and describe the purpose of each component of the Dick and Carey Model of Instructional Design. 
  • Develop instructional (performance) objectives that include behavior, condition and criteria.
  • Develop an assessment strategy for an instructional event. 
  • Develop assessment items that map to instructional objectives. 
  • Develop an instructional strategy that maps to learner needs and performance objectives. 
  • Plan a formative evaluation strategy to assess instructional materials. 
  • Compare the Dick & Carey ISD model with other models
Since this is an intro course, my own additional objectives for this course are to (1) set up learners to be able to find and retrieve sources from our academic library, and (2) begin creating their own repository (aka "toolbox") of resources that they can refer to not only as they progress through the program, but also as they become working professionals.

I have some ideas for assignments to reach these goals; however, I am a bit stuck.  I want my course design to be 100% (or at least 90%, if I can't reach 100%) open access materials.  Students would be free to go find and retrieve textbooks, articles, and resources from pay-walled sources, but the materials I provide need to be 100% open access. This means I need a new textbook (or an un-textbook).  What would you recommend as open resources for an introductory course in instructional design?  Dick & Carey are having me do some mental gymnastics (ADDIE seems to have more free/open resources on the web than D&C).

As far as lectures go, I am thinking that lectures in the course are automatically out.  The current lectures all start with "Hello everyone, I am Dr. so-and-so." Since I am not Dr. so-and-so, this is an unnecessary cognitive barrier for learners, and in all honesty I don't want to sit down and do 13 weeks' worth of lectures. I think there are much more fun ways to spend my time, and help my learners navigate the subject, than 30-45 minute lectures each week.  If I had enough buy-in I'd love to get onto a Google Hangout and have recorded discussions with some of the great minds, and leaders, in instructional design to discuss topics of ID including mobile learning, distance education, corporate training, and so on - you know, things that will get the learners thinking about how to structure the remainder of their studies, pick areas to focus on, and decide what they might want to be lifelong learners in.

So, an initial brainstorming post - open resources!  What do you think, kind reader?

In subsequent posts (if this goes forward) I think I am going to focus on activities, other materials, and the flow of the course.  If you want me to write about other subjects as well, leave a comment :)


SIDENOTES:
†Other faculty of instructional design, please feel free to chime in! I want to know what you think about Dick & Carey.

Thursday, November 20, 2014

Attack of the untext - my own stumbling blocks

It's been a while since Rhizo14 ended, but the community is going strong! Facebook may not be as active (or maybe Facebook is hiding most Rhizo posts from my timeline...that could be it...anyway), but we are still chugging along with the collaborative *graphy. I can't call it an ethnography, or an autoethnography, because the variables have changed.  Some of us decided to get together and write an article for Hybrid Pedagogy on why the collaborative *graphy article is taking so long (a meta-article, if you will) but we got stuck there too (or it seems as though we are stuck).  I think others have written about their own personal views on this on their own blogs, so I've been working out what my own stumbling blocks are with this project. I think I have a way to explain things now!

So, in previous collaborative work situations, when working collaboratively your final product feels unified.  The main analogy that I can give is the main root of one plant, which looks like this:

In this example (going with the rhizome metaphor) you have one main path, and all the side paths are footnotes, citations and references, and end-note commentary.  The coloring is all the same because, regardless of whether you have one author or many authors, the final product sounds like a unified voice.  Many ideas have come into, and gone out of, this main line (see the expansion roots in the model), but at the end of the day those side roots don't overtake the main idea or argument.

The original project started as a collaborative autoethnography (CollabAE).  This eventually became an issue because some people stepped back from the project, and thus it was no longer an autoethnography for the entire MOOC, but rather a multi-author ethnography (MAE) of the MOOC. We could use other people's anonymized data, assuming that we had their permission. At that point it wasn't introspective (auto-ethnography) but rather analytic - and this seemed to lack the rhizomatic aspect (to some extent anyway) that made the CAE unique, and there were issues of silencing voices (or inadvertently silencing voices, since some people didn't want to be authors, or weren't comfortable with their views being part of this analysis). Things got busy with school, work, and other pursuits, and I lost track of the CAE.

The CollabAE, at least the way we collected data, looks like the image above.  Each color represents a different author, and each author has (probably) a main point and certain supporting literature, tangents, side-points and so on that they made in their original write-up. Some authors connect to other authors' writings, and this is visualized above as roots crossing through other roots' paths.  As chaotic as this may look, it does make sense. I think the closest analogy for this would be George Veletsianos's Student Experiences of MOOCs eBook. To some extent (being a bunch of experimental academics ;-) ) we may have over-thought this CollabAE.  In hindsight, I would say that this should be a multi-step process.  Perhaps the first step in the process, with a deliverable, would be an eBook, similar to Veletsianos's style, of our Rhizo experiences.  Here people can write anonymously or eponymously.  Submitted chapters could go through peer review, but not the traditional academic peer review - a peer review that aims to disambiguate and seeks to grow those side-roots a bit in case eventual readers want to go down those paths.  There could be a foreword (Dave Cormier perhaps?), but the reader would be left to read, process, and make sense of each individual story.  As such this could be not a collaborative AE but a cooperative AE (CoopAE): people working concurrently, but not necessarily together, to get this done.  One big, overall document, but each chapter can stand on its own.

So since the CollabAE wasn't going far, a couple of people thought we could sit down and write an article about what's up with this process.  Why are things taking so long?   The visual for this untext looks something like this (according to me anyway).

Whereas the CollabAE has separate, distinct stories where others commented on, but didn't necessarily write over, the text, in our meta-analysis I am seeing original (concurrent) threads emerging (two or more people writing at the same time, but not about the same message). This is represented by the different colored main-roots.  Then I am also seeing people expanding on those main-roots (the different colored sub-roots) by either adding onto the document or having side conversations.  I have to admit that this is fascinating as a brainstorming piece, and it could be considered by some as a performance piece, or something alternative like #remixthediss.

That said, however, the problem is that we don't have an audience.  A document as chaotic as this one is helpful to us as authors: it helps us better understand our own positions on things and analyze our own lived experiences in the MOOC.  However, I am not convinced that this is geared toward a reading audience. It's not necessarily something that they expect, and I am not sure how a reader will process this chaos.  For me, at the end of the day, I go back to my goal.  What is the goal of the *graphy project (I decided to change its name since CollabAE and CoopAE don't seem to describe it)?  What is the goal of the untext about the *graphy project? Is the goal something for the internal constituents? Something for the public?  If it's for both, what's the overlap space where the final published product would be useful (and comprehensible) to both?  Good questions.  I've got my own answers, but as a group...I don't know :)

As a side note, this seems like an interesting application of co-learning (see connected courses for more details)




Monday, November 17, 2014

DALMOOC episode 6: Armchair Analyst

[Image: Week 6 CCK11 blog connections]
I was trying for a smarter title for this episode of #dalmooc thoughts, but I guess I have to go with Armchair Analyst since I ended up not spending a ton of time with either Gephi or Tableau last week. So, the reflection for week 4 is mostly on theoretical grounds; things I've been thinking about (with regard to learning analytics) and "a ha" moments from the videos posted.

I think week 3 and week 4 blend together for me.  For example, in looking at analytics, the advice, or recommendation, given is that an exploration of a chunk of data should be question-driven rather than data-driven.  Just because you have the data doesn't necessarily mean that you'll get something out of it.  I agree with this in principle, and many times I think that this is true.  For instance, looking back at one of our previous weeks, we saw the analytics cycle.  We see that the questions we want to ask (and hopefully answer) inform what sort of data we collect and potentially how we go about collecting it.  Just having data doesn't mean that you have the data you need in order to answer specific questions.

On the other hand, I do think that there are perfectly good use cases where you might be given a data-dump and not have any questions.  Granted, this makes analysis a bit hard, like it did for me the last couple of weeks.  This data (anonymized CCK11 data, and sample evaluation data for Tableau) didn't really mean much to me, so it was hard to come up with questions to ask.  On another level, I've been disconnected from the data, so it's not personally meaningful to me as a learner (the CCK11 data was anonymized), and since I didn't have a hand in organizing, offering, and running CCK11, it's not as useful to me as a designer.  However, as a researcher, I could use this data dump to get some initial hypotheses going.  Why do things look the way they look?  What sort of additional, or different, data do I need to go out and test my hypotheses?  How might I analyze this new data?  As such, a data-driven approach might not be useful for answering specific questions, but it might be a spark to catalyze subsequent inquiry into something we think might be happening, thus helping us formulate questions and go out and collect what we need to collect to do our work.

So, for example, I have just started my EdD program at Athabasca University.  I have a lot of ideas running through my head at the moment as to what I could research for a dissertation in 3 years†. As I keep reading, I keep changing and modifying my thoughts as to what to do.  I may be able to employ learning analytics as a tool in a case study research approach.  For instance, I teach the same course each spring and fall semester, an online graduate instructional design course on the design and instruction of online courses (very meta, I know). The current method of teaching is quite scaffolded, and as Dragan was describing last week (or this week?), I tend to be the central node in the first few weeks, but my aim is to just blend in as another node as the semester progresses. This process is facilitated through the use of the Conrad & Donaldson Phases of Engagement Model (PDF).

So, one semester I can use this model to teach the course, and another semester I might create an Open Online Course based on the principles of connectivism and run the course like that. I'd have to make some changes to the content to make sure that most of the course content is open access content; that way I would be eliminating some variables. But let's assume I've done this and I'm just testing connectivism vs. "regular" graduate teaching (whatever that is).  I can then use SNA, as one of my tools, to see what's happening in these two course iterations. I can see how people are clustering together, forming communities (or not), how they are engaging with one another, and so on. This analysis could be an element of a study of the efficacy of connectivism as employed in a graduate course‡.
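If I were to try this, a rough sketch of the comparison (using Python's networkx rather than Gephi, and with hypothetical file names like spring_edges.csv standing in for exported interaction logs) might look something like this:

    import csv
    import networkx as nx

    def load_interaction_graph(path):
        # Read a "sender,receiver" CSV of forum replies into a directed graph.
        g = nx.DiGraph()
        with open(path, newline="") as f:
            for sender, receiver in csv.reader(f):
                g.add_edge(sender, receiver)
        return g

    scaffolded = load_interaction_graph("spring_edges.csv")    # hypothetical exports
    connectivist = load_interaction_graph("fall_edges.csv")

    for label, g in [("scaffolded", scaffolded), ("connectivist", connectivist)]:
        print(label,
              "density:", round(nx.density(g), 3),
              "clusters:", len(list(nx.weakly_connected_components(g))))

Density and the number of weakly connected clusters are only two of many possible measures, but they would give a first quantitative glance at how differently the two iterations hang together.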

On the other side of things, if I were to just stick with my traditional online course, I could still use SNA to improve it.  One of the things that I notice is that some groups tend to form early in the semester and stay together.  These seem to be informal groups (person X commenting on person Y's posts throughout the semester more than they do on person Z's). Since the semester is 13 weeks long, a JIT dashboard of course connections would be useful both to encourage people to find study groups and to nudge them to engage more with people they don't normally engage with.  People who usually post late in the forums (at least in my experience) don't often get many responses to their posts, which is a real pity since they often bring some interesting thoughts to the discussion.
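A very small sketch of what that dashboard could surface, assuming the LMS can export (commenter, original poster) reply pairs - the names and pairs below are invented for illustration:

    from collections import Counter

    # Hypothetical reply pairs exported from the discussion forum.
    replies = [("alice", "bob"), ("alice", "bob"), ("carol", "bob"),
               ("bob", "alice"), ("dana", "carol")]
    learners = {"alice", "bob", "carol", "dana", "evan"}

    replies_received = Counter(poster for _, poster in replies)
    for learner in sorted(learners):
        count = replies_received.get(learner, 0)
        flag = "  <- nudge the class to respond" if count == 0 else ""
        print(f"{learner}: {count} replies received{flag}")

Run weekly, something like this would flag the late posters who aren't getting responses before the pattern hardens.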

A good example of this is the image above, the CCK11 blogs from Week 6.  I see a number of disconnected blogs.  Were these blogs never read (as measured by the click-through rate on the gRSShopper Daily)? Were they never commented on by anyone? Some of the blogs may not speak to anyone in the course, but in a course of 1131 participants (citation), assuming an 80% drop-off by week 6, that's still around 200 people active in the MOOC. Why is no one connecting with these posts, and can we do anything to spur participation?  Maybe an "adopt a blog post" campaign?  This is also where the quantitative aspects of SNA mesh with the qualitative aspects of research. Here we could also compare what gets picked up (those connected nodes) to what doesn't, and do an analysis of the text. This might help us see patterns that we can't see with SNA alone.
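The "disconnected blogs" question is easy to ask of the data once it's in graph form; a sketch (invented blog names, and networkx standing in for Gephi) might be:

    import networkx as nx

    # Hypothetical blog-link data: an edge means one blog linked to or commented on another.
    g = nx.DiGraph()
    g.add_nodes_from(["blog_a", "blog_b", "blog_c", "blog_d"])
    g.add_edges_from([("blog_a", "blog_b"), ("blog_c", "blog_a")])

    orphans = [n for n in g.nodes if g.degree(n) == 0]
    print("candidates for an 'adopt a blog post' campaign:", orphans)

The qualitative follow-up described above - comparing the text of picked-up versus ignored posts - would start from exactly that orphan list.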

That's it for week 4.  And now I am all caught up.  Welcome to week 5!  Your thoughts on Week 4?


SIDENOTES:
† The more I think about this, the more I am leaning toward a pragmatic dissertation rather than a "blow your mind" dissertation. I see it more as an exercise that will add some knowledge to this world, but given that doctoral dissertations are rarely cited, I am less interested in going all out and more interested in demonstrating the pragmatics of research through a topic of interest. Thoughts on this? I definitely don't want to stick around in dissertation purgatory.
‡ I'm pretty sure that someone (or quite a few) have written about this, especially with regard to CCK08, but let's just roll with this example.

Thursday, November 13, 2014

DALMOOC episode5: Fun with Gephi

[Image: CCK11 Tweet visualization]
Alright, after a few days of being sidelined with a seasonal cold, I'm back on #dalmooc.  Still catching up, but I have a feeling I am getting closer to being at the same pace as the rest of the MOOC ;-)  In any case, this is a reflection on week 3, where we started messing around with social network analysis (SNA).  This is cool because it's something that I had started doing in another MOOC on Coursera, with Gephi, so it was an opportunity to get back to messing with the tool.

So, what is SNA?  SNA is the use of network theory to analyze social networks.  Each person in this network is represented by a node (or vertex), and nodes can be connected to other nodes by an edge (or many edges). These connections can indicate a variety of things (depending on what you are examining); however, for my usage in educational contexts I am thinking of edges as indicators of message flow: who sends messages to whom in a network, and also who refers to whom in a network. I think this latter one is interesting from an academic citation point of view as well.
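To make the vocabulary concrete, here is a tiny sketch (my own illustration, not anything from the DALMOOC materials) of a message-flow network in Python's networkx; the people and messages are invented:

    import networkx as nx

    # People are nodes; a directed edge means "sent a message to".
    g = nx.DiGraph()
    g.add_edges_from([("maria", "lee"), ("lee", "maria"),
                      ("maria", "sam"), ("sam", "lee")])

    print(dict(g.out_degree()))  # messages sent per person
    print(dict(g.in_degree()))   # messages received per person

The same structure works for the citation case too: swap people for papers and let an edge mean "cites".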

As was pointed out in week 3, SNA can help discover patterns of interaction in online learning environments. I think that it can also help us discover patterns in physical environments; however, this is harder because we don't have big brother watching the physical environments the way we can collect data about patterns of participation in virtual environments. It's much more labor intensive to keep accurate track in a physical environment.

An interesting application of SNA is its use in understanding learning design (Lockyer et al - in video). We can use SNA to analyze patterns of interaction in courses that we design and implement, and thus we can (to some extent) see how our designs are affecting learners' patterns of participation.  While this is feasible, I think that it's also hard to keep the variables pinned down so that there are no confounding variables. If you've designed an online course (that is NOT self-paced) you can see the same course design taught different ways if you put different faculty in the driver's seat.  As such, I think that in studies using SNA to analyze course design (and/or teaching methods) it's important to account for all variables.

Other interesting things from Week 3:

An instructor-centered network is one where the instructor is the central node in the network. These are recognized in the literature as leading only to lower levels of knowledge construction (see Bloom's taxonomy). Related to this type of network is having one learner take on a dominant role in a course, so that the instructor is replaced by (or shares the spotlight with) a dominant learner.  This is also not desirable from a pedagogical point of view. One can start with an instructor-centered environment and facilitate the change to P2P interaction. Students will need scaffolding in order to reach that P2P network.

Sense of community is a predictor of success in educational endeavors. A common way of collecting this type of data is questionnaires, and I think that in education this can happen both in class, as part of a mid-term temperature check, and in the final course evaluation.  I am wondering, however, how accurate this self-reporting is. Is this just an affective measure? Or can learners feel like they are lacking a sense of community when in reality they have it, just not as much of it as they feel they need?

Network brokers are nodes that connect two or more communities in a network and have a high degree of centrality.  These network brokers can see information across many different communities, and as such have many different ideas flowing through them. Network brokers are associated with high levels of achievement and creativity. So, in an educational setting it's good to be a network broker.
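One common proxy for spotting brokers is betweenness centrality, which Gephi computes in its statistics panel. A minimal sketch in networkx, with two invented clusters and a single bridging node, might look like this:

    import networkx as nx

    # Two tight "communities" plus one node that bridges them.
    g = nx.Graph()
    g.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # community A
                      ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),   # community B
                      ("a3", "broker"), ("broker", "b1")])        # the bridge

    centrality = nx.betweenness_centrality(g)
    print(max(centrality, key=centrality.get))  # -> "broker"

Everything here is made up for illustration, but the idea is the same at course scale: sort learners by betweenness and see who sits between the clusters.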

Cross-class networks are latent ties formed by attending the same events. So even though I am not connected with many people in #dalmooc (at least on Twitter I don't retweet or write to many others in the MOOC - maybe I should...), I am connected to other people through the course hashtag and by attending the same event. In a traditional learning setting this could be likened to participating in a learner community such as UMassID.com (our instructional design community) or the Athabasca University Landing network.

[Image: CCK11 Blogs, week 6]
Next up, the Gephi portion of this post.  I've been messing around with Gephi data from CCK11. I was quite excited to get my hands on the CCK11 Twitter data until I remembered that I didn't tweet all that much in CCK11...D'oh! I was curious to see where I was in the network of connections.  Even if I had been active, I don't think I'd be able to see myself there because the data appears to be anonymized (and rightfully so).

I did run some analysis of the blog connections in CCK11 using Gephi again (part of the data dump available in #dalmooc), and here was a place where I expected to see myself and see who I was connecting to; however, again, the data was anonymized. My question entering into this analysis was more about analyzing my own patterns of interaction.  I was new to MOOCs back in 2011, and CCK11 was the MOOC where I really started learning about connecting with others for academic purposes. Thus, I wanted to see what my initial and developing connected literacies pointed to. Alas, this was not to be :-)


As Dragan mentioned in one of the videos of this week, analytics should be question-driven, not data-driven. Just because you have data, it doesn't mean that you should use it, or that you will find anything meaningful in it.  This was the case with me and this data. There were some interesting visualizations, but I wanted to have a look at the people involved, who connected to whom, and look more at the qualitative connections: what posts, what ideas, what types of know-how got distributed throughout the network and by whom. It's a little hard to do this with anonymized data, so you really need to abstract and think at higher levels when working with this.  If we had data from other MOOCs, this type of anonymized data could be useful to compare patterns of participation of one MOOC to another.

Thus concludes my week 3 experiences.  What were your thoughts?

Wednesday, November 12, 2014

Questions about Co-Learning

What do you get when you mix connected courses, thinking about academia, and cold medicine?  The answer is a blog post (which I hope makes sense) :-)

As I was jotting down my initial thoughts on co-learning in the previous post I completely forgot to address some of the initial thinking questions for this module.  Here are some initial thoughts on co-learning and how I would address these questions:

What is co-learning and why employ it?
For me, co-learning is when two or more people are working together to solve a problem and learn something new.  As I wrote in my previous post, the individuals in this community do not all need to start from the same point. There can, and will, be learners that are more advanced in certain areas than others.  This is perfectly fine, and it's realistic to expect it.  This can be a community of practice, a broad network of learning, or a loosely connected network of learning that centers around a hashtag.  The reason to co-learn is, for me, three-fold.  First, you have a variety of learners in the classroom whose lived experiences and previous knowledge can be beneficial to the learning experience. Second, by having learners co-learn (and in my mind co-teach), they are not just learning the material but deconstructing it so that they can explain it to others. This act of deconstruction allows a closer analysis of the subject matter and, hopefully, a more critical view of it.  Finally, this is something that came to mind when engaging in #dalmooc this week: when looking at social network graphs of courses, in some cases we see the instructor as a central node, which is quite a privileged position. However, this isn't good for learning; a course with a high degree of connections among many nodes, where the instructor becomes just another node in the network, spells good things for learning (or so the research says - don't ask me to cite anything, I wasn't taking detailed notes when I was viewing Dragan's presentations)


How can teachers empower students as co-learners?
This, for me, has been the most difficult thing. I teach an upper-level graduate course, which means that students come to my course late in their studies and thus their habits are already formed.  Most expect weekly asynchronous discussions with the familiar 1-post, 2-reply scheme.  Many students seem to go beyond this (anecdotal evidence from teaching this course over the last 3 years); however, some do not, and there are many reasons for that.  Having co-learning occur means that learners need to be more present, and to some extent their schedule isn't fully their own.  They need to see what their peers are doing so that they can bounce off those messages, riff off them, respond to them, and, when necessary, perturb them (in educational ways).  I think teachers can empower students to be co-learners by slowly stepping back and scaffolding students to take on that role.  How quickly or slowly you step back depends on the group of learners in the classroom.  I don't think that there is a magic formula here; however, we are all beholden to the academic calendar, so I would say that it happens somewhere between weeks 1 and 6 (for a 13-week semester). Even as instructors step back, it's important to maintain a noticeable teaching presence, and a social presence.  Nothing annoys learners more (I find) than having an instructor that's not there.


How does this pedagogy differ from traditional methods of teaching and learning?  How does the instructor support a co-learning environment? What obstacles might educators encounter in this paradigm shift?  What obstacles might students encounter in this paradigm shift?
I guess here it depends on how one defines "traditional". If traditional means lecture, then this approach of co-learning is like night and day compared to lectures.  However, if we include Vygotsky's social constructivism, or concepts like Wenger's Communities of Practice, as "traditional", then I don't think that co-learning varies a ton from these.  I think that co-learning is a natural extension of constructivism, connectivism, and communities of practice.

I think the key thing, as I wrote above, for support is that sense of social and teacher presence, going back to the community of inquiry model. The idea here is that the instructor is just a node in this learning network.  Sure, the instructor, by virtue of being older and having had more learning experiences (and time to read and digest more), is a more knowledgeable other in this respect. However, their knowledge and voice shouldn't drown out the voices of the learners.  The instructor is there to help people navigate the network, wayfind, provide appropriate scaffolds, advise, and, when necessary, promote certain content. I don't think we can get away from content and certain "core" knowledge, so the instructor, as an MKO in this area, has a responsibility to share what they know with others without being overbearing.

The trick here is having a sense of when to share something and when to let learners struggle a bit. Again, research points to the fact that when learners struggle a bit they tend to learn better. I think this is also an area where the instructor might face some obstacles from the learners themselves or from their own superiors.  If the learners want content (or *gasp* lectures), then there might be a push from the learners to ask the instructor for nicely packaged answers to their questions. I have seen this in exit evaluations in my own department.  Since we are a department of applied linguistics we don't deal with classroom management (our students are, for the most part, teaching professionals or they go into teaching). We provide the applied linguistics theory, and a space to think about it, criticize it, deconstruct it, and utilize it. However, our faculty don't provide cookie-cutter solutions to language learning problems because the answer (as usual) is "it depends".  Learners, from their previous learning experiences, are used to getting nicely packaged data bits, such as "World War I started on ____" or "The first president of the United States was _____" and so on.

This obstacle is something that also affects learners because they need to discover ways in which to not only take the knowledge that they gain in their courses now, but to be able to continuously go out, read the updated literature in the field, deconstruct it, analyze it, and put it back together in meaningful ways to solve their own problems.  The classroom environment provides a nice laboratory where co-learning can be practiced, however once students graduate they need to discover networks in which they can continue to actively co-learn.  This is a literacy that we, as educators, need to help our learners cultivate.

I think that's it for co-learning for now.  Thoughts?






Monday, November 10, 2014

Active Co-Learning

I took a small hiatus from Connected Courses in the last module because everything sort of piled on at the same time and  I had little space to breathe.  Yes, I've been dalmoocing, so I guess everything is a choice ;-).  I guess that was my jump-out week of connected courses, and now I am dipping in again. I love the language of cMOOCs ;-)  The truth is that I've felt a little fatigued with #ccourses.  I am not sure if it's the length, or the time I've been engaged with it (7 weeks if you consider the pre-course and that's before we got to Diversity, Equity, and Access), so I guess I needed a little mental break.  I don't think this is an issue unique to MOOCs because I've been feeling a mild case of senioritis in my first EdD course. Luckily I've done all of my deliverables, submitted them, and have gotten feedback, so now I am participating with my peers and engaging in the participation aspect of the course.

Anyway, these next two weeks are about Co-Learning in #ccourses and worlds have collided!  Connected courses has collided with my EdD course to produce a thinking storm (in my head). I am not going to talk a lot about the resources shared this week (oddly enough I have shared some of these with my own class in the past!), but I wanted to talk a bit about my little connected moment.

So, as we are discussing LMS mining and learning analytics in EDDE 801 one of my classmates mentions that he sees learning as something social. I don't know if he is also on #ccourses or if this is a happy coincidence, but this got me thinking.  I think that learning can be social, and many types of successful learning can be social, but learning is not exclusively social.  For instance, I can sit down with a book, or some MOOC videos, and read or view them.  If I am paying attention and the material is at my level then chances are that I will learn something.  That said, I don't think that all learning works this way.  I do think that in many cases learning is social.  The construct that comes to mind is Vygotsky's More Knowledgeable Other. 

If we are all in a group, let's say in #ccourses, and we are all tackling the topic of this module (co-learning), I would say that we don't all come to the learning environment with the same background, know-how, and knowledge.  We may have some similar experiences and backgrounds, but the specifics matter.  Thus, as we are learning together, I may be able to teach someone a small nugget of knowledge (or know-how), or vice versa. The teaching aspect may not be reciprocal between any two given interlocutors, but it doesn't have to be.  This is where the community comes in. If we are all members of a community and we get each other's daily posts, tweets, and Delicious links (that relate to this course), then we are partly learning from others' contributions, even if they don't directly learn something from our contributions.  Thus, the act of co-learning is also an act of teaching, at least as defined by Wiley (in the TEDx video this week) when he defines education as a relationship of sharing. A successful educator, according to Wiley, is someone who shares fully with their students.  In a co-learning environment we are all learners and we are all educators.

 So, here is a question that popped up while I was pondering this: what is the difference between an "aha" moment when you are by yourself (reading a book, or watching a MOOC video) and "learning" in a social environment?


SIDENOTES:
  • Even though I sat out the module on Diversity, Equity, and Access, I think that the videos on Feminism, Technology and Race; and wikistorming, are interesting to watch and think about. If you haven't watched them, I encourage you to do so :)
  • This week Alec Couros asked "what endures" when thinking about technologies.  The answer was that technologies come and go, but it is the social connections that endure (thus, I would paraphrase this as reach out and talk to someone in your social network, don't just consume).  This is quite true.  Remind me one of these days to expand upon this and Elliniko Kafeneio ;-)