Badge MOOC Challenge 2: Define the Currency of an Ecosystem
It's week two (of six) in the #OpenBadgeMOOC, and the challenge for this week is to think about and define the currency of an ecosystem. As with the first post in this series, this thought process relates to the #ESLMOOC that I am thinking of developing as part of a potential dissertation proposal. The writing instructions for this challenge come first, followed by my brainstorming.
Challenge Instructions:
Challenge Assignment 2: Define the Currency of an Ecosystem
At the next level of complexity, we consider the ecosystem of four principal sets of stakeholders:
- Learning Providers
- Assessors
- Job Seekers
- Employers
- Badges
Assessors are responsible for valid and reliable assessments. In other words, assessors must ensure the assessment activities accurately reflect the targeted competencies, and that they do so consistently, regardless of who the job seeker is. When an assessment is successfully completed, the assessor issues a badge to the job seeker in recognition of that individual’s competence, and the now-badge holder adds that badge to his/her identity through a badge backpack or other methods of storing and displaying badges.
The means by which an individual acquired targeted competencies is not necessarily relevant to an employer. Two benefits derive from this:
- Many different learning providers can offer pathways to the same skills/competency assessment. Often, assessors also offer courses or other learning opportunities that support learners/job seekers looking to undertake assessments.
- Assessment can be linked to formal/traditional education as well as to non-traditional learning situations.
Assessors as a role that can be differentiated from learning providers add a useful dimension to our analysis of our ecosystem, one that helps us articulate the exchange value of valid competency assessment. Implied in this new dimension is the premise that badges can improve the value exchange in an ecosystem. To consider this possibility, we first need to understand how the “currency,” exchange value, of the ecosystem currently works. For the ecosystem you defined in Challenge 1,
- How are competencies defined in your industry or community of practice, and by whom?
- What are the learning frameworks that guide learners toward achieving the competencies?
- Who assesses learners’ competencies? What evidence documents learners’ competencies, and who has access to this evidence? Can individuals assert competence without having undertaken a learning program, e.g., through tests or prior learning evaluation?
- How are learners’ competencies recorded?
- Who are the consumers of the records of competence?
- Outline the shortcomings and strengths of the current currency exchange in this ecosystem.
- Create a persona/archetype that represents an assessor stakeholder. This could be articulated as an additional role for your learning provider persona from Challenge 1.
- Elaborate on your “before badges” user stories from Challenge 1 to include more detail about the current state of the currency exchange.
Brainstorming Begins for Challenge #2
It's pretty hard to enumerate all the entities that define competencies in English as a Second Language because there isn't a single entity, but rather many different ones. For example, there are at least four tests that can vouch for a person's competency: the TOEFL, IELTS, Michigan, and Cambridge exams. In Europe there is the Common European Framework of Reference for Languages (CEFR), put together by the Council of Europe, and in the US there are the ACTFL National Standards. The Cambridge exams are aligned with the CEFR. For the purposes of this project (#eslmooc) I will focus on the CEFR and, if time permits, I might draw parallels to ACTFL as well. I picked these two because they are, technically, language agnostic, so it may be possible to generalize any research findings (beyond the application of badges in this project) to other language learning MOOCs.
Now, as far as learners go, they are scaffolded by teachers in both private and public school settings. The goal, in addition to acquiring the language, is to be able to pass tests like the Cambridge and Michigan exams in order to demonstrate proficiency. Ways of scaffolding learners' understanding and use of the language span the written, reading, oral, and aural modes of communication. Depending on the educational setting, these might be integrated with one another or separated into distinct units. Some instructors may have the academic freedom to structure their courses any way they see fit, while others may have to follow mandated state or national standards for language teaching, mandated curricula, or even mandated ways of teaching. There seems to be quite a lot of variety in this area.
Evaluation of learners is actually more disintermediated from instruction, or so it seems to me. While private language institutes, where learners take their first steps in learning a language, can be accredited or provisioned to run such examinations and ultimately evaluate them, there are other organizations that offer learner evaluation, such as the British Council and private testing centers, whose only role is testing. Thus, it is conceivable that someone could take a proficiency test without having undergone formal instruction. Since I have not taken or administered such tests, my experience comes from friends and acquaintances who have. There seems to be both a written component to these tests and a verbal/interactional component, sometimes in the form of an interview.
It seems to me, looking at this from the outside, that learner competencies are most likely recorded internally by grading the written exams and perhaps through interlocutor notes (for the interactional portions of the examinations), so that there is some internal record. Externally, I've seen people who've passed such exams receive a certificate of some sort indicating the date they took the test, their final grade, and the type of test they passed. These records can then be presented to employers or schools as a demonstration of competency in the language.
There are some deficiencies that I can see in this system. First and foremost, it seems to me that, as with most standardized tests, learners who want to demonstrate their abilities will most likely need to take an exam-prep workshop, read a book about the test, or take practice tests in order to pass. In cases like these, simply demonstrating your knowledge is not enough; you really need to learn how to "beat the test."
A second shortcoming, for me, concerns the ability to communicate in different environments. Passing one of these examinations doesn't necessarily prove that you can communicate successfully in a specific field. For example, a field might have specialized vocabulary or ways of writing and communicating that such a generalized test does not take into account. It also seems that the final passing grade is a conglomeration of the various modes of communication (listening, speaking, writing) instead of a breakdown, so it may make sense to break up this monolithic grade into a more nuanced picture of skills and the levels attained in each skill.
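Just to make this concrete for myself, here is a rough sketch (in Python) of what a per-skill record might look like compared to a single pass/fail grade. The skill names, CEFR levels, and mastery labels are my own illustrative assumptions, not any exam board's actual schema:

```python
from dataclasses import dataclass

# Hypothetical sketch: instead of one overall pass/fail grade,
# record a CEFR level and a mastery label for each skill.
# Skill names and labels are illustrative assumptions, not a real exam board's schema.

@dataclass
class SkillResult:
    skill: str        # e.g. "writing", "listening", "speaking", "reading"
    cefr_level: str   # e.g. "B2"
    mastery: str      # e.g. "High", "Mid", "Low"

def summarize(results: list[SkillResult]) -> str:
    """Render a per-skill breakdown instead of a single monolithic grade."""
    return ", ".join(f"{r.skill}: {r.cefr_level} ({r.mastery})" for r in results)

if __name__ == "__main__":
    candidate = [
        SkillResult("writing", "C1", "High"),
        SkillResult("listening", "B2", "Mid"),
        SkillResult("speaking", "B2", "Low"),
    ]
    # An employer or school could see at a glance that writing is stronger than speaking.
    print(summarize(candidate))
```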
The assessor persona in this case has two potential forms: someone who is a teacher but is also an assessor by virtue of their language school being certified as an assessment center, or someone who has never taught the learner and is only there to assess them. Here are two specific personas for these two cases:
Leon Broznic: Leon, as we saw in the previous post, has been contracted by the University of Milan to get their faculty up to speed on English language skills so they can teach their subjects in English. In addition to his teaching roles, he also has an occasional contract job evaluating candidates taking the English proficiency examination through the local private language school. Since Leon also teaches, some of the candidates taking this exam are former students of his. Had he known they were planning to take the exam, he would have laid down some of the foundations for succeeding in it. Leon administers and evaluates both the written portion of the exam and the interview (the aural/oral part). On the evaluation side, Leon sometimes wishes he could give more than just a Pass/Fail for certain aspects of the exam, because his former students find him after the exam is over and ask specifically how they could improve. Since the final documentation is a certificate, it doesn't give the learner an idea of their specific strengths and weaknesses. Leon would like some standardized token to show that certain aspects of the exam were attained with High, Mid, or Low mastery and where the learner could improve.
Shannon Henry: Shannon works for the British Council and conducts only the examination portions of these proficiency exams; she does not teach. Exam takers usually see her as a stranger, which adds to whatever anxieties they already have about the exam itself. Like Leon, she sometimes wishes she could give more than just a Pass/Fail evaluation for certain aspects of the exam, because exam takers who don't do well in all parts (and thus don't pass) may still do well in some of them. By getting some sort of checkpoint for the parts they have mastered, they could go and practice the parts they didn't do so well on. When a candidate comes back for a re-take, they could focus only on the areas where they needed to improve rather than on things they had already mastered. This could potentially cut down on the time it takes to grade re-take exams where the learner has already demonstrated mastery in some parts.
Before badges, as I wrote above, you have one monolithic certificate, which is really a summing up (an average, perhaps) of all the components that go into the assessment. Like a final grade for a course, this doesn't really tell an employer or school how well a learner has mastered specific skills. The parallel to a course grade would be a learner who does really well in the first ten weeks of a course but slacks off in the last two. They would still receive an "A," but the skills from those last two weeks may be under-developed (or not developed at all). I see badges as a potential way to show the specific levels and skills a learner has demonstrated through some sort of assessment, so if, for example, a job or academic appointment requires advanced-level writing more than speaking and interacting with the public, an earned badge would give you that information more readily than an "A" on a certificate. I guess this would be something along the lines of a micro-credential :-)
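To tie this back to badges: a badge for just one of those skills could carry its own descriptive metadata. The sketch below is a hypothetical Python structure loosely inspired by the Open Badges metadata idea; the field names, URLs, issuer, and CEFR alignment are my own assumptions for illustration, not the actual specification or a real issuer:

```python
# Hypothetical metadata for a single-skill badge, loosely inspired by the Open Badges idea.
# All field names, URLs, and values below are illustrative assumptions, not a real spec or issuer.
writing_c1_badge = {
    "name": "ESL Writing - CEFR C1",
    "description": "Demonstrated advanced (C1) academic writing in English.",
    "criteria": "https://example.org/eslmooc/badges/writing-c1/criteria",  # hypothetical URL
    "issuer": "Example Language Assessment Center",                        # hypothetical issuer
    "alignment": {
        "framework": "CEFR",
        "level": "C1",
        "skill": "writing",
    },
    "evidence": "https://example.org/eslmooc/evidence/candidate-123",      # hypothetical URL
    "issued_on": "2013-10-14",
}

# An employer looking for strong writing could check this one badge
# instead of inferring writing ability from a single overall exam grade.
print(writing_c1_badge["name"], "-", writing_c1_badge["alignment"]["level"])
```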
That's it for Week 2 (for now). Your thoughts? Are there areas in this badge process with room for improvement (or does anyone have a dissenting view)?