Tuesday, December 30, 2014

Connecting the dots...thoughts about working in academia

[warning: lengthier post than usual] Before I left for my December mini vacation I had a holiday-themed catch-up with a number of friends and colleagues on campus. With the semester winding down, and with the holidays as an excuse, it was a good opportunity for people to get together and share some news about what had transpired over the past semester, compare notes, swap best practices, and so on. One of my colleagues inquired how things are going in the office as far as admissions go. There seems to be some doom and gloom over falling admissions on campus, but that's a topic for another day. Things are going well in my department (knock on wood), so much so that we are not able to admit all qualified applicants since we don't have enough people to teach for us.

My colleague's solution (my colleague is a full-time instructional designer, for what it's worth) was that we need to "change the model": instead of relying on tenure stream professors to teach our courses, we could have subject matter experts design the online courses and hire an army of adjuncts to teach for us. The tenured professors would thus have the final say on the content, and the adjuncts, who cost less, would teach to that content. This, after all, seems to be the model that other schools employ, especially those with online programs, so the message seemed to be that we need to get with the program and move away from an outdated model. Now, tenure may have its issues, but I think that swinging the pendulum all the way in the other direction is the wrong solution. My bullshit alarm (for lack of a better term) starts to go off when I hear about some of these "new models" in the same way my BS alarm went off when I was hearing about sub-prime mortgages and derivatives as an MBA student (you remember those?).

I don't know how I found myself in higher education administration, but I did end up here. As a matter of fact, I am coming up on three years in my current job (closing in on those 10,000 hours that Malcolm Gladwell wrote about!). What has become abundantly clear to me is that there is a compartmentalization of information, know-how, and, most importantly, understanding of what needs to happen in a large organization such as a university, so simplistic solutions, such as "changing the model," become the default way of thinking. This is quite detrimental, in my opinion, to the overall longevity of programs. These simplistic solutions may come from the best of intentions, but when one doesn't have all of the information at their disposal, it's easy to arrive at bad solutions.

First, there is an assumption that we don't have an overall curriculum, hence the suggestion of "master courses" that any ol' adjunct can teach. The fact is that we do have extensive program-level outcomes in our program, and a somewhat set curriculum. At the broad level it is set, but at the day-to-day level there is flexibility for subject matter expertise. I don't want to get into the issue of academic freedom; I find that this term gets abused to mean (almost) anything that faculty members want it to mean. However, in this case I do want to draw upon it to illustrate a point: at the day-to-day level of a class, so long as faculty are meeting the learning objectives of the course, the readings they choose as substitutes for the agreed-upon curriculum of the course (especially if more than two people are in charge of teaching the same course) are not put under the microscope, and faculty aren't prevented from exercising their professional license.

Secondly, and most importantly, simplistic (and often cheap for the institution) solutions to expand capacity treat all adjuncts as the same and interchangeable. This is patently wrong on so many levels. The way I see it, there are two types of adjuncts (those of you who study higher education administration may know of more - please feel free to comment). The first type are the people whom the adjunct system was "built" for. Those are people like me: people who have a day job somewhere, enjoy what they do, and share their practice with those who are training to enter our profession. Our day jobs essentially pay our wages, and what we do as adjuncts we do as service to the profession and for the love of teaching. This way the (usually) small payment per course can really be seen as an honorarium rather than as payment for services rendered. The second type of adjunct is the person who is doing it as their day job and who thus needs to teach many courses (perhaps at multiple institutions) to make ends meet. This second type of adjunct is probably the most prevalent in academia today, at least from what I read. Regardless of whether they are of type 1 or type 2, adjuncts who teach, both for our institution and elsewhere, are professionals who have earned their PhDs, in many cases conduct research, and are active in their fields in one way or another; but most of all they are human beings. By coming to the table with the mentality that they are interchangeable - just give them a pre-made course shell and let them run with it - you are not only undermining their humanity but also their expertise in the field. After all, someone you just wind up and let loose doesn't necessarily have a voice to help your department improve its course offerings and its programs. You are shutting them out.

Now, as a case study, let's take my program. I would estimate that, depending on the semester, anywhere from 75%-90% of the online courses are taught by adjuncts. In the summers (optional semesters) the ratio is actually the inverse. By hiring more adjuncts in order to matriculate more students, the tenure to non-tenure ratio gets even more skewed. This, to me, is problematic. A degree program isn't just about the 10 courses you take in order to complete your degree. A degree program is about more than this, and tenure stream faculty (i.e. permanent faculty) are vital to the health of degree programs and to the success of learners in those programs. Adjuncts, as seasonal employees, are only hired to teach the courses that they are hired to teach, and nothing else. This represents a big issue for programs. Here is my list of six issues with over-reliance on adjunct labor.

Issue 1: Advising

I must admit that my own experience with advising, throughout my entire learner experience, has been spotty at best. Some students don't take advantage of advising; we think we know better and that we have all the answers. Some advisors treat advising as a period to get students signed up for courses. Both attitudes are wrong. Advising is about relationships. It's about getting to know the student, their goals, their intents, and their weaknesses, and working with them to address those issues. At the end of a student's studies, the advising that occurred during their period of study should help them get to the next leg of their journey on their own. Through this type of relationship building, advisors get to know their advisees and can even provide references for them if they decide to move on to the next level of study, or if they require a reference for work. Even if one compensated adjuncts for advising, how do you quantify the pay? Do you do it in terms of hours? That's kind of hard to do. Even if you arrived at fair and equitable pay for the work, adjunct hiring is subject to volatility: you don't make a long-term commitment to them, and they don't necessarily make one to you! (see issue 3). This is no way to build an advising relationship.

Issue 2: committee work

This second issue brings us back to those master courses that my colleague talked about. These things are decided by committee in the grand scheme of things, since a curriculum needs to make sense - it's not a hodgepodge of a little-bit-of-this and a little-bit-of-that. Faculty are not hourly employees, but adjuncts are sort of treated as hourly employees if we decide to compensate them for this type of work. It may work, but it might require punching a time card. For people who are basically paid honoraria, do you really want to nickel-and-dime them? Sometimes committees meet for their usual x hours per month and things are done fairly quickly; other times committees meet for many hours in preparation for accreditation, just as an example. This, of course, assumes that adjunct faculty members can do committee work for some additional pay (which usually isn't a lot). What if they can't? What if they have other priorities? If this is the case, all of the work falls upon the few tenure-stream people in the department. This has the effect of both keeping adjuncts away from critical decisions and implementations made by the department, and of dumping more on the full-time people in the department. Adding more adjuncts to the payroll would most likely serve to amplify this, and to add to the factory model of producing academic products.


Issue 3: department stability vis-à-vis perpetual hiring

When you hire a full-time staff member, chances are high that they will be around for a while if they are worth their salt. If you hire a faculty member on the tenure stream, chances are that this is a career move and that this person won't be leaving any time soon. This provides the department with stability in many ways. It provides a core group of people to shepherd the department, its curriculum, and, most importantly, its students. With adjuncts, given their semester-to-semester nature (i.e. no long-term contract with the institution), it makes sense that these individuals will most likely be working elsewhere and have other commitments, or they might just be looking for a full-time gig, in which case your institution or department will come second. This isn't good, and if adjunct instructors leave your department you need to look for a replacement. This adds to the workload of the few full-time faculty, who need to start a search, review CVs, and interview people. This isn't a job for one person, but rather a job for a committee of at least three members to vet and verify what's on the CVs and conduct the interviews.

Once the hiring is complete there is some mentoring that goes on to make sure the new hires are successful, and even then you aren't guaranteed that they will work out. I'd say that you need at least two, if not three, semesters to get an accurate idea of how well these new hires teach, work, and fit in with your institutional culture. If things work out, great! Then you pray that they won't leave you in the lurch when something better comes along. If it doesn't work out, not only do you have to start the search again (which is time- and energy-consuming), you may also have issues with your learners; it may be that these new hires were awful and as such did a major disservice to your learners. This is something that needs mending, both from a content perspective and a human relations perspective. Again, this takes time and effort. Yes, I hear some of you say that this is also the case with tenure stream faculty. This is true! It's true for all new hires. There is a period of trial-and-error, acclimation, and kicking the tires that happens, both on the new hire's side and on the department's side. However, once a new hire passes their fourth-year review and they are reasonably certain of tenure, that's basically it; you don't generally need to worry that you are going to lose them and have to start your search all over again. Not so with adjuncts. Commitment is a two-way street.

Issue 4: quality of adjuncts

The issue of adjunct quality cuts in a number of ways. If you luck out and find someone good in your search, you'll know within a semester or two whether they pass muster (and they will know whether they are a good fit for your department). It is risky having any new hire, especially one with so much power over the learning of a group of students, as I mentioned above. There are, however, other dimensions of quality. One of my considerations for quality is how current people are in their fields. I generally do not like it when people myopically focus on their own research as the cutting edge of what's out there in the field, but this is one of the legitimate ways of keeping current.

Many departments that I've been in contact with use one measurement for adjunct quality: course evaluations. I am the first to say that I am not an expert in this arena, since I have not studied it, but I think this is complete bunk. As I like to say, you can have an instructor who is Mr. or Ms. Congeniality and basically bamboozles students into thinking that they have learned something relevant and worthwhile. Thus the students are more apt to give good reviews to bad instructors. Those people are then hired to continue teaching, to the detriment of future learners. As an aside, I just read a story on NPR on course evaluations. Pretty interesting read - course evaluations apparently are poor measurement instruments.

Finally, just to wrap this section up, another issue I've seen is course creep. Someone is hired specifically to teach one course, CRS 100 for example, and then, due to many and varied reasons, they are given courses CRS 150, 200, 350, 400, 420, and 450. The person may not really be a subject expert in these areas, and may not even have enough time to catch up on the latest developments for their own sake and the sake of their learners, but due to inadequate quality measurement instruments those people get to teach more and more courses in their respective programs. As a side note, it seems as though accreditors might be taking notice of the increased reliance on adjunct faculty.

Issue 5: disproportionate representation of faculty who teach more courses, and issues of diversity

So, we've come to a point in our discussion (with my instructional design colleague) where the suggestion is to just create additional sections for the instructors who have proven themselves over the years. First, this assumes that the people we hire can teach additional courses for us. This, generally speaking, is not the case. The people who teach for us have day jobs. They are professors at their own institutions and they have responsibilities to their own home departments. Adding more courses to their teaching roster simply isn't feasible from a logistics point of view. Even if it were possible, departments don't grow by simply hiring more of the same. The way organizations grow is through diversification. New faculty hires would surely be able to teach some intro-level courses in our program; however, they would also bring in their own expertise. This expertise would allow the department to create additional tracks of study, offer different electives, and provide seminar series on diverse interests for current students and alumni. The more-of-the-same approach may work in the short term, but it's not a great long-term strategy.

Still, some departments do expand someone's course load to include more courses. As we saw in issue #4, this is an issue of quality. It is also an issue of lack of diversity and disproportionate representation of one faculty member. I would feel very odd if I were teaching and students were doing 1/4, or 1/3, or 1/2 of their courses with me because it was compulsory. If students really opted to take more courses with me, then more power to them; they've made an informed decision. However, if courses are required and students only have one faculty member to choose from, then that is bad for them in the long run because they don't get a diversity of views, opinions, expertise, and know-how from the field (if the adjuncts are from a more practical background).

Issue 6: research of tenure stream faculty

Now, as I wrote above, I really don't like it when faculty drone on and on about their research and their research agenda, and look for ways to get out of teaching. Being a faculty member is often compared to being a three-legged stool: teaching, research, and service. You can't extend one leg, shorten another, and expect to have balance. If you wish to be a researcher only, then by all means quit your academic job and go find a research-only job. That said, research, and being up-to-date, is important. For me it connects with a measurement of quality. Adjuncts are only hired, and paid, for teaching. Since there is no research requirement in their jobs, research and continuous quality improvement may not be something that they undertake. This is bad not only for the students but also for the department. One of the ways we are able to attract students to our respective programs is through name-brand recognition. At a recent open house my department displayed books published by our faculty. Several students commented on the fact that we had that Donaldo Macedo, who worked with Paulo Freire, in our department. Yes, we have that Charles Meyer, who's a pioneer in corpus linguistics. These are just two examples, but it gets people to pay attention to you. Even with my own studies, one of the reasons I chose Athabasca was the fact that I had read work by Fahy, Anderson, Dron, Ally, and Siemens. I was familiar with the CoI framework and the work done on it, and I am a reader of IRRODL. The fact that AU is the place where all these things are happening was a catalyst for me to apply and attend. All of this comes directly from the research work and public outreach of the full-time faculty of the institution. Adding more adjuncts to the payroll doesn't get you this in the long term. Again, you invest in your faculty and you get paid back with dividends!


Conclusion

To wrap this up: in this big organization that we all work in, we all have many different jobs, little communication, and no one has the big picture. I consider myself lucky. Having worked as a media technician, a library systems person, a library reference and training person, an instructional designer, an adjunct faculty member, and now a program manager, I've seen all of the different levels of what goes on in academia. I have a more complete picture, much more so than any of my colleagues on the same job/career path. The upper administration is still a bit of a mystery to me, but I guess I still have room to grow. I am grateful that friends and colleagues want to help out with growing our program, but without having all of the information, I am afraid that "changing the model" is simply code for doing it quicker and cheaper and churning out more students. Students need mentors, advisors, and role models. The adjuncts we've had teaching for us for the past three years (or more) are great and do, unofficially, provide that to our learners. However, you can't grow a program on adjuncts. What it comes down to, for me, is recognizing the humanity of adjuncts, compensating them well, bringing them into the fold as valuable contributors to the department, and investing long-term in programs. Figure out what you need tenure stream people for, what you need lecturers for (adjuncts with long-term contracts), and work strategically. Semester-to-semester hiring, with an adjunct majority, is not the way forward.


Your thoughts?

Sunday, December 28, 2014

MOOC thoughts closing out 2014

It's the final stretch of 2014! This marks my fourth year of exploring MOOCs - boy, does time fly! When I started off with LAK11 I was really just looking for ways to continue learning for free. While I do get a tuition benefit at work, using it also involves standard 13-week semesters, getting work-release time (since online learning isn't covered by the benefit), and retaining the motivation to keep going through a predefined course and syllabus. Even when MobiMOOC happened and we formed the MobiMOOC research team, I really didn't foresee that the, oddly named, MOOC would catch fire the way it did. At the time I was eager to get some initial thoughts together on how to put together a MOOC (they are now called cMOOCs) and to put together a Great Big MOOC Book, with others, that struck the right mix of research and practice. Since the MOOC has expanded a lot over the years, with many different things being called a "MOOC," the original idea might be better renamed The Great Big Book on Open Online Learning (if there are any takers on this, you know my email and twitter - it should be a fun little project, licensed under Creative Commons of course).

Each year, through my involvement in MOOCs, I meet some great new people, get re-acquainted with some old, trusty MOOCers, and learn more about my own learning behavior in these open spaces. In addition to the cMOOC (Connected Courses) and the rMOOC (Rhizo14), there are a few things I explored in the xMOOC world this year that made me ponder and still keep me thinking. Here is a high-level overview of four things that stood out to me:

Languages other than English:

This year I experimented with MOOC providers whose primary language is something other than English. Those were MiriadaX (Spanish), France Universite Numerique (French), and OpenCourseWorld (German). Even though I never studied Spanish in a classroom, the amount that I self-studied, and my knowledge of other Romance languages, made it possible to go through a number of MOOCs on this platform. On the one hand I think it's great to have content in another language, but the paradigm they are using (video lecture, textual materials, quizzes) seems fundamentally flawed to me for "deep" learning. There were some MOOCs that I really enjoyed (the 3rd Golden Age of TV, for example), but this was probably because of the camera work for the videos, the on-screen chemistry of the presenters, and the analysis of the topic. The white-screen and voice-over-PowerPoint had me yawning. I wanted to pay more attention, but I found the visuals distracting me from the language that I didn't speak well, so the lack of motivation became a language comprehension issue.

I only attempted one MOOC on FUN, which was basically a how-to-run-your-own-MOOC course, from soup to nuts. The FUN platform is based on Open edX, which made it familiar. The interesting thing about this MOOC was the multiple ways of going through it. It was basically broken down by ADDIE, and you could pick any track to complete the course. Some newbies would focus on the A and D parts, while others could work more on implementation. Due to time constraints I didn't "finish" this MOOC, but I did like it a lot as a way to practice my French. The thing I found out is that on MiriadaX, when submitting things in passable Spanish (or English!), I would get OK feedback, whereas on FUN I would be docked points on assignments for bad French. I haven't written detailed French for a while now. I think the last time I did was for a cMOOC, on this blog, so it's probably not that good. This was an interesting social experience for me (grading with a language barrier).

Certification - M'eh

In previous years, when certificates of completion on the various MOOC platforms were easier or free to get, I actually cared more about "passing" the course and getting that little piece of digital paper. I know it's silly, but I would enroll in fewer MOOCs and do all the assignments (no matter how silly or non-applicable some of them might seem to me) in order to get the certificate. Basically, even if I only wanted to do some assignments because I thought they were cool, if I was close percentage-wise to the minimum mark for a certificate of completion I would get myself to do the ones I didn't care much for because I was so close to that certificate. This year, with the advent of verified certificates, and the lack of a basic free certificate for those courses, I decided that I could dispense with the assignments altogether. Basically, what it boils down to is that since there was no chance of getting a prize at the end of the race, why bother staying on the path? This year my xMOOC approach (at least with Coursera, where courses may become inaccessible at their conclusion) has been to enroll in anything that seems interesting and download all the resources while I still have them available, then load them onto an iPod and go through them when there is an opportunity to do so. This means that I am taking back control of my time and deciding when to learn, and what to learn, on my own schedule. The only exception to this tactic has been edX. Their courses still award a free certificate (so I am still hooked), and I made an attempt to participate in DALMOOC, which was a topic of interest but also tried to blend the cMOOC and xMOOC formats in a way. I'm not so sure how well it did (based on my cursory observations), but I am looking forward to any post-mortem research on this course!

Who is vetting these things?

Even back in 2008, when Siemens, Downes, and Cormier worked on CCK, there were academic names attached to MOOCs, such as the University of Manitoba, the University of Prince Edward Island, and the National Research Council of Canada. The thing that I have noticed this year is that more and more non-academic entities are entering the MOOC space. Even if you discount the non-MOOC MOOC provider Udemy, there are MOOC providers outside of North America that are accepting MOOCs from non-academic entities, such as firms focused on brand image, gamification, and so on. That's fine; there are many fine folks outside of academia who do research on these things and want to share their passion, but sometimes I feel like I am being sold to when I am taking a MOOC that is not affiliated with a university. Maybe this is just a perception issue, but I see MOOC offerings from universities as a public service, while MOOCs from a business entity feel like something freemium: if I want more (or more substantive things) I ought to buy their books, software, or services.

Research is here!

For the past few years, articles on MOOCs have been few and far between. It was always great to get a new issue of IRRODL, or JOLT, or any other open access publication and see an article on MOOCs. The surprise factor was nice, but the fact that we weren't getting a ton of research into this area was not. A lot of it was opinion (informed and uninformed) and speculation. In 2014 I think we saw the tide change a bit, with more research coming out on MOOCs. I hope that this trend continues!

So, that's it for me and MOOCs in 2014.  What are your highlights (or low-lights) with MOOCs this year?

Sunday, December 21, 2014

DALMOOC Episode 10: Is that binary for 2? We've reached recursion!

Hey! We've made it! It's the final blog post about #dalmooc... well, the final blog post with regard to the paced course on edX, anyway :) Since we're now in vacation territory, I've decided to combine Weeks 9 and 10 of DALMOOC into one post. These last two weeks have been a little light on the DALMOOC side, at least for me. Work, and other work-related pursuits, made my experimentation with LightSIDE a little light (no pun intended). I did go through the videos for these two weeks, and I did pick out some interesting things to keep in mind as I move through this field.

First, the challenges with this sort of endeavor. The first is data preparation. This part is important since you can't just dump data from a database into programs like LightSIDE; the data needs some massaging before we can do anything with it. I think this was covered in a previous week, but it needs to be mentioned again since there is no magic involved, just hard work!

The other challenge mentioned this week was labeling the data. Sometimes you get the labels from the provider of the data, as was the case with the poll example used in one of the videos for week 9. To do some machine learning the rule of thumb, at least according to dalmooc, is that at least 1000 instances of labeled data are needed - more or fewer labeled instances may be needed depending on individual circumstances. For those of you keeping track at home, Carolyn recommends the following breakdown (a rough sketch of such a split follows the list):
200 pieces of labelled data for development
700 pieces of labelled data for cross-validation
100 pieces of labelled data for final testing
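
Just to make that breakdown concrete, here is a minimal sketch of my own (not course code, and nothing to do with how LightSIDE does it internally) of splitting roughly 1000 labeled instances into those three sets, assuming the data is simply a Python list of (text, label) pairs:

import random

def split_labeled_data(examples, dev_n=200, cv_n=700, test_n=100, seed=42):
    # Shuffle the labeled examples and carve out the three sets
    # (200 development / 700 cross-validation / 100 final testing),
    # following the rule of thumb quoted above.
    if len(examples) < dev_n + cv_n + test_n:
        raise ValueError("not enough labeled instances for this split")
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)
    dev = shuffled[:dev_n]
    cv = shuffled[dev_n:dev_n + cv_n]
    test = shuffled[dev_n + cv_n:dev_n + cv_n + test_n]
    return dev, cv, test

# toy usage with 1000 made-up labeled instances
data = [("post number %d" % i, i % 2) for i in range(1000)]
dev, cv, test = split_labeled_data(data)
print(len(dev), len(cv), len(test))  # 200 700 100

The key point is that the final testing set is never touched while you are developing and tuning; it only comes out at the very end.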

Another thing to keep in mind, and I think I've mentioned this in previous weeks, is that machine learning won't do the analysis for you (silly human ;-) ). The important thing here is that you need to be prepared to do some work, some interpretation, and, of course, to have a sense of what your data is. If you don't know what your data is, and if you don't have a frame through which you are viewing it, you are not going to get results that are useful. I guess the old saying "garbage in, garbage out" is something we need to be reminded of.

So, DALMOOC is over; where do we go from here? Well, my curiosity is a bit more piqued. I've been thinking about what to do a dissertation on (I'm entering my second semester as a doctoral student) and I have all next summer to work on the literature review. I am still thinking about something MOOC-related, but some of my initial topics already seem to be topics of current inquiry and recent publications, so I am not sure where my niche will be. The other fly in the ointment is that the course I regularly teach seems to have fewer students in it, so design-based research on that course (that course as a MOOC, I should say) may not be an option in a couple of years. Thus, there is a need for a Plan B: I am actually thinking of going back to my roots (in a sense) and looking at interactions in a MOOC environment. The MRT and I have written a little about this, looking at tweets and discussion forums, so why not do something a little more encompassing? I guess I'll wait until the end of EDDE 802 to settle on a topic.

What will you use your newly found DALMOOC skills on?





Monday, December 15, 2014

First semester done!

Hurray!

The first semester of my doctoral studies is done! Well, it was done last week, but as I wrote in the previous post (on #dalmooc) it's been one crazy semester. I had hoped that I would blog once a week on the topic of EDDE 801, sharing some interesting nuggets of information each week, but between MOOCs like #ccourses, work, and regular EDDE 801 work, no such luck. I felt I was putting enough time into EDDE 801, and I poured everything into the closed system that is Moodle rather than onto the blog. So, here's one blog post to try to recapture some thoughts I had while the semester was in progress.

Early on, one of the things I really dreaded was the synchronous sessions, every Tuesday at 8 PM (my time). My previous experience with synchronous sessions was not a good one, which colored my expectations for this course. Most of my previous experience has been with one-way communication webinars (yaaaawn), or mandatory synchronous sessions for student presentations in my master's programs. The problem there was that no one provided any scaffolding for my fellow students on what constituted good online presentation skills, so students would often drone on and on (not really checking in with the audience) and they would often use up their allotted time, and then some. I don't blame my former classmates, just the system that got them into that situation. So, here I was, getting ready for a snooze-fest.

I am glad to say that it wasn't like this. Most seminars were actual discussions, and Pat did prod and poke us to get the discussion going. Most of the guest speakers were lively and engaged with the audience in some fashion, and my classmates were good presenters. If I yawned, it was due to the time of day rather than boredom. So, the final verdict is that the synchronous sessions were done well, compared to my previous experience. Am I a synchronous conferencing convert? Not yet. Like Maha Bali, I still have an affinity for asynchronous.

The one thing that gave me pause with EDDE 801 was the discussion-board assignments. In my previous experience, with no required weekly synchronous sessions, the bread-and-butter of a course was the weekly discussion forums (sometimes 1, sometimes 2, rarely 3). In 801 we had to do two literature reviews and facilitate two discussions based on those literature reviews. We have 12 in our cohort, so that works out to 24 discussions. Initially I didn't think this would be "enough work" (yeah... I don't know what I was thinking), but as the semester progressed and people participated in the forums vigorously, near the end I hit a bit of cognitive overload where I couldn't really read any more (sorry to the last 4 literature reviews posted; I couldn't focus on them as much as I did on the early ones).

Finally, one thing I wanted to do this semester, but really didn't get a chance to, was to make a sizable dent in the literature I've collected for a potential dissertation topic on MOOCs. I did read some articles in order to do my presentation for the course, but it didn't end up being as big of a dent as I had hoped. I was initially thinking that I would do some reading over the break, but with the semester starting January 15, I'm opting for rest and relaxation now, and dissertation reading this summer.

All things considered, not a bad semester! 1/8 done with my doctorate lol ;-)




Friday, December 12, 2014

DALMOOC Episode 9: the one before 10

Hello to fellow #dalmooc participants, and to those who are interested in my own explorations of #dalmooc and learning analytics in general. It's been a crazy week at work with many things coming to a head at the same time: finishing advising, keeping an eye on student course registrations and new student matriculations, making sure that our December graduates are ready to take the comprehensive exam... and many, many more things. This past week I really needed a clone of myself to keep up ;-) As such, I am a week behind on dalmooc (so for those keeping score at home, these are my musings for Week 7).

In week 7 we are tackling text mining, a combination of my two previous disciplines: computer science and linguistics (yay!). This module brought back some fond memories of the corpus linguistics exploration I did while working on my MA in applied linguistics. This is something I want to get back to at some point - perhaps when I am done with my doctorate and have some free time ;-). In any case, to start off, I'd like to quote Carolyn Rose when she says that machine learning isn't magic ;-) Machine learning won't do the job for you, but it can be used as a tool to identify meaningful patterns. When designing your machine learning process, you need to think about the features you are pulling from the data before you start; otherwise you end up with output that doesn't make a ton of sense. The old adage in computer science, "garbage in, garbage out," is still quite true in this case.

In examining some features of language, we were introduced to a study of low-level features of conversation in tutorial dialogue. These included turn length, conversation length, number of student questions, student initiative, and student-to-tutor word ratios. The final analysis was that this is not where the action is. What needs to be examined in discourse situations in learning are the cognitive factors and underlying cognitive processes that are happening while we are learning. This reminds me of a situation this year where a colleague asked me if I knew of research indicating whether response length in an online discussion forum could be used, in a learning analytics environment, to predict learner success. I sort of looked at my colleague as if they had two heads because, even though I didn't have the vocabulary to explain that these were low-level features, I was already thinking that they weren't as useful as looking at other factors. So, to bring this back to dalmooc, shallow approaches to the analysis of discussion are limited in their ability to be generalized. What we should be looking at are theory-driven approaches, which have been demonstrated to be more effective at generalizing.
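
Just to show how "shallow" these surface features really are, here is a small sketch of my own (not course code) that computes a few of them from a toy transcript represented as (speaker, utterance) pairs; the dialogue and feature names are made up for illustration:

transcript = [
    ("tutor", "What do you think happens when we double the force?"),
    ("student", "The acceleration doubles too, right?"),
    ("student", "Because F = ma."),
    ("tutor", "Exactly. Can you restate that in your own words?"),
]

student_turns = [u for s, u in transcript if s == "student"]
tutor_turns = [u for s, u in transcript if s == "tutor"]

# Shallow, easily computed features of the conversation
features = {
    "conversation_length": len(transcript),
    "avg_student_turn_length": sum(len(u.split()) for u in student_turns) / len(student_turns),
    "student_questions": sum(u.strip().endswith("?") for u in student_turns),
    "student_to_tutor_word_ratio": (sum(len(u.split()) for u in student_turns)
                                    / sum(len(u.split()) for u in tutor_turns)),
}
print(features)

None of these numbers says anything about the reasoning happening inside the turns, which is exactly the point made in the videos.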

In the theoretical framework we look at a few things (borrowing from sociolinguistics, of course): (1) power and social distance explain social processes in interactions; (2) social processes are reflected through patterns in language variation; (3) so our hope is that models that embody these structures will be able to predict social processes from interaction data.

One of the things mentioned this week was transactivity (Berkowitz & Gibbs, 1983), which is a contribution that builds on an idea expressed earlier in a conversation, using a reasoning statement. This work is based on the ideas of Piaget (1963) and cognitive conflict. Kruger and Tomasello (1986) added power balance to the equation of transactivity. In 1993, Azmitia & Montgomery looked at friendship, transactivity, and learning. In friend pairs there is higher transactivity and higher learning (not surprising, since the power level is around the same between both people).



Finally, this week I messed around with LightSIDE, without reading the manual ;-). According to Carolyn the manual is a must-read (D'oh ;-) I hate reading manuals). I did go through the mechanical steps that were provided on edX to get familiar with LightSIDE, but I was left with a "so what" feeling afterward. The screenshots are from the work that I did. I fed LightSIDE some data, pulled some virtual levers, pushed some virtual buttons, and turned some virtual knobs, and I got some numbers back. I think this falls in line with the simple text mining process of having raw data, then extracting some features, then modeling, and finally classifying. Perhaps this is much more exciting for friends of mine who are more stats- and math-oriented, but I didn't get the satisfaction I was expecting - I was more satisfied with the previous tools we used. Maybe next week there is much more fun to be had with LightSIDE :-)
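
For what it's worth, here is a rough sketch of that raw data -> feature extraction -> model -> classify flow, written by me in Python with scikit-learn rather than LightSIDE, and using a tiny made-up dataset and made-up labels purely for illustration:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Raw data: a few invented forum posts with invented labels
# (1 = contains a reasoning statement, 0 = other)
texts = [
    "I think the answer is larger because the force doubles",
    "lol this class is so boring",
    "the evidence suggests that transactive turns predict learning",
    "what time is the assignment due",
]
labels = [1, 0, 1, 0]

# Feature extraction (unigrams and bigrams) followed by a classifier
model = Pipeline([
    ("features", CountVectorizer(ngram_range=(1, 2))),
    ("classifier", LogisticRegression()),
])
model.fit(texts, labels)

print(model.predict(["because the mass increases the acceleration drops"]))

Nothing magical happens here either: if the features and labels going in are garbage, the classifications coming out will be too.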

So, how did you fare with Week 7?  Any big take-aways?