I find it interesting that peer, colleague, and potentially mentor evaluations are mentioned here, because it's not something I've come across often in instructional design contexts. Most instructional design is iterative: you reach the evaluation stage once you run the course, gather feedback, and go back to the drawing board to improve the course :-) I actually like the idea of bouncing ideas off colleagues, because it means you can get feedback before you actually run a course, fix any issues that were in your blind spots, and iterate more rapidly.
- How will you know whether your blended learning course is sound prior to teaching it?
- How will you know whether your teaching of the course was effective once it has concluded?
- With which of your trusted colleagues might you discuss effective teaching of blended learning courses? Is there someone you might ask to review your course materials prior to teaching your blended course? How will you make it easy for this colleague to provide helpful feedback?
- How are “quality” and “success” in blended learning operationally defined by those whose opinions matter to you? Has your institution adopted standards to guide formal/informal evaluation?
- Which articulations of quality from existing course standards and course review forms might prove helpful to you and your colleagues as you prepare to teach blended learning courses?
I like the statement from Singh & Reed (2001) [in this week's reading]: “Little formal research exists on how to construct the most effective blended program designs” (p. 6). It brings me back to week 1 of Blendkit2012, when I was thinking out loud about the blend and the potential conflict of goals for blended courses between college administrators and college instructors. The admins probably want to see a standardized 50-50 blended course so they can get the most use out of physical locations and utilities, while instructors need to think about what the right blend is for optimal learning experiences. This, of course, may mean that utilization of physical campus locations is not optimal compared to fully on-campus courses; so begins a dance to find the right "mix" for blended courses, making sure they are both pedagogically sound and making appropriate use of the campus without imposing a prescribed meeting space and time.
Finally (back to ensuring quality), the readings do provide some more standards for online course quality, and I've already bookmarked most of them. I am already QualityMatters certified (so I am familiar with that rubric), and I am in the process of completing the Blackboard Exemplary Course MOOC, so I am getting familiar with that one as well. As the chapter pointed out, some of these rubrics may seem very prescriptive, but (from what I see) even if you pass an evaluation using such a rubric, that is only the setup. It's the execution that matters most for quality: when the rubber meets the road, when the instructor meets the students, and teaching and learning happen. Even if you've designed an awesome on-campus, online, or blended course, if the instructor is not on board you are destined for not-so-good things. This is why I think that, in order to ensure quality, the instructor(s) of the course need to be part of the design process, or at least of a debriefing process (if the instructor is an adjunct who was not there when the course was designed by a peer or an instructional design team). They also need a peer community of practice (those teaching the same course in the same mode) to get them ready to teach the course and to feed back what they find into that community, so the course can be improved and other teachers of that course can learn from each other's experiences.