DALMOOC Episode 9: the one before 10
Hello to fellow #dalmooc participants, and to those who are interested in my own explorations of #dalmooc and learning analytics in general. It's been a crazy week at work, with many things coming due all at the same time: finishing advising, keeping an eye on student course registrations and new student matriculations, making sure that our December graduates are ready to take the comprehensive exam...and many, many more things. This past week I really needed a clone of myself to keep up ;-) As such, I am a week behind on dalmooc (so for those keeping score at home, these are my musings for Week 7).
In week 7 we are tackling Text Mining, a combination of my two previous disciplines: computer science and linguistics (yay!). This module brought back some fond memories of the corpus linguistics exploration that I did while I was doing my MA in applied linguistics. This is something I want to get back to at some point - perhaps when I am done with my doctorate and have some free time ;-). In any case, to start off, I'd like to quote Carolyn Rose when she says that machine learning isn't magic ;-) Machine learning won't do the job for you, but it can be used as a tool to identify meaningful patterns. When designing your machine learning process, you need to think about the features you are pulling from the data before you start; otherwise you end up with output that doesn't make much sense. The old computer science adage "garbage in, garbage out" is still quite true in this case.
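To make that point concrete, here's a minimal sketch of my own (not from the course materials) of how the choice of features shapes what a learner can even pick up on. The toy posts and labels are made up for illustration, and I'm using scikit-learn in Python:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy labeled posts (made up): does the post contain a reasoning
# statement (1) or not (0)?
posts = [
    "I think the answer is B because the force doubles",
    "what page is the homework on",
    "if mass increases then acceleration must decrease",
    "lol same",
    "the current drops because resistance went up",
    "when is this due",
]
labels = [1, 0, 1, 0, 1, 0]

# The feature extractor is a design decision made *before* any learning
# happens: unigrams alone vs. unigrams + bigrams hand the classifier
# different signals to work with.
for ngrams in [(1, 1), (1, 2)]:
    model = make_pipeline(
        CountVectorizer(ngram_range=ngrams),
        LogisticRegression(),
    )
    scores = cross_val_score(model, posts, labels, cv=3)
    print(f"ngram_range={ngrams}: mean accuracy {scores.mean():.2f}")
```

The numbers themselves don't matter on a toy set this small; the point is that swapping the feature extractor changes what the model can "see" before any learning starts - garbage features in, garbage model out.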
In examining some features of language, we were introduced to a study of low-level features of conversation in tutorial dialogue: turn length, conversation length, number of student questions, student initiative, and student-to-tutor word ratios. The conclusion was that this is not where the action is. What needs to be examined in discourse situations in learning are the cognitive factors and underlying cognitive processes that are happening while we are learning (a quick sketch of what these low-level features look like follows below). This reminds me of a situation this year where a colleague asked me if I knew of research indicating whether response length in online discussion forums could be used, in a learning analytics environment, to predict learner success. I sort of looked at my colleague as if they had two heads because, even though I didn't have the vocabulary to explain that these were low-level features, I was already thinking that they weren't as useful as other factors. So, to bring this back to dalmooc: shallow approaches to the analysis of discussion are limited in their ability to generalize. What we should be looking at are theory-driven approaches, which have been demonstrated to generalize better.
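To illustrate what "low level" means here, a quick hypothetical sketch (my own, with a made-up dialogue) of these surface features. They are trivial to compute from a transcript, which is part of their appeal, but notice that none of them looks at the reasoning inside the turns:

```python
# Hypothetical tutorial dialogue: (speaker, text) pairs.
turns = [
    ("student", "Why does the ball slow down after I throw it?"),
    ("tutor", "Good question. What forces act on it once it leaves your hand?"),
    ("student", "Gravity and... friction? I mean air resistance."),
    ("tutor", "Right, both. So which one acts against the horizontal motion?"),
]

student_words = sum(len(text.split()) for who, text in turns if who == "student")
tutor_words = sum(len(text.split()) for who, text in turns if who == "tutor")

# The kinds of low-level features the study examined; none of them says
# anything about the cognitive processes behind the turns.
features = {
    "conversation_length": len(turns),
    "avg_turn_length": sum(len(text.split()) for _, text in turns) / len(turns),
    "student_questions": sum(
        1 for who, text in turns if who == "student" and "?" in text
    ),
    "student_tutor_word_ratio": student_words / tutor_words,
}
print(features)
```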
In the theoretical framework we look at a few things (borrowing from sociolinguistics, of course): (1) power and social distance explain social processes in interactions; (2) social processes are reflected through patterns in language variation; (3) so our hope is that models that embody these structures will be able to predict social processes from interaction data.
One of the things mentioned this week was Transactivity (Berkowitz & Gibbs, 1983), which is a contribution that builds on an idea expressed earlier in a conversation, using a reasoning statement. This work is based on the ideas of Piaget (1963) and cognitive conflict. Kruger and Tomasello (1986) added power balance to the equation of Transactivity. In 1993, Azmitia & Montgomery looked at friendship, Transactivity, and learning: in friend pairs there is higher transactivity and higher learning (not surprising, since the power balance between the two people is roughly equal).
Finally, this week I messed around with LightSIDE, without reading the manual ;-). According to Carolyn, the manual is a must-read (D'oh ;-) I hate reading manuals). I did go through the mechanical steps that were provided on edX to get familiar with LightSIDE, but I was left with a "so what" feeling afterward. The screenshots are from the work that I did. I fed LightSIDE some data, pulled some virtual levers, pushed some virtual buttons, and turned some virtual knobs, and I got some numbers back. I think this falls in line with the simple text mining process of having raw data, then extracting some features, then modeling, and finally classifying. Perhaps this is much more exciting for friends of mine who are more stats- and math-oriented, but I didn't get the satisfaction I was expecting - I was more satisfied with the previous tools we used. Maybe next week there will be more fun to be had with LightSIDE :-)
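For what it's worth, here is my own rough sketch of that raw data -> features -> model -> classify chain, with scikit-learn standing in for LightSIDE's GUI (as far as I can tell, this is roughly what the levers and knobs map onto). The posts and labels are made up:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Raw data: toy forum posts with made-up labels.
train_texts = [
    "great explanation, thanks",
    "this makes no sense to me",
    "I finally get it now",
    "totally lost here",
]
train_labels = ["positive", "negative", "positive", "negative"]

vectorizer = TfidfVectorizer()             # extract features
X_train = vectorizer.fit_transform(train_texts)

model = MultinomialNB()                    # build the model
model.fit(X_train, train_labels)

new_posts = ["I get it now", "makes no sense"]
X_new = vectorizer.transform(new_posts)    # same feature space for new data
print(model.predict(X_new))                # classify
```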
So, how did you fare with Week 7? Any big take-aways?