These are question and answer prompts from work toward my PhD at Old Dominion University. The text is Web-Based Learning: Design, Implementation and Evaluation by Gayle V. Davidson-Shivers, Karen L. Rasmussen, and Patrick R. Lowenthal.

Question 1: pg. 178, #6 – What features of online learning environments make conducting an evaluation difficult, and how could the difficulties be overcome?

I think the text does a good job of pointing out one of the most obvious difficulties, and the one that came to mind immediately when reading through the chapter: the ability to evaluate in the actual online learning environment. Things like software requirements, protective firewalls, hardware capabilities, and simple user error can all affect the evaluation of the module.

To some extent this might act as good user testing; finding out that learners cannot access the course or sections of the material would certainly be valuable feedback leading to necessary revisions. It becomes more difficult when the learner doesn't identify that there is a problem at all and evaluates the course as satisfactory. An example of this happened fairly recently at my work. We are testing some 3D products that we are considering bringing into the classroom. Our initial testing came back somewhat negative about the fidelity of the product. We looked at the product again and again and couldn't figure out what the testers were talking about, until we realized they were looking at the product on a machine with a significantly different graphics card, which was in fact reducing the fidelity of the product.

One mitigation would be to provide a software and hardware checker package with the deliverable being evaluated. Another option would be to host the evaluations at specific locations where things like computer hardware and internet speeds are known to be suitable for the evaluation.
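To make the first mitigation a little more concrete, a checker like this could run in the learner's browser before the evaluation begins and report its findings back alongside the learner's responses. The sketch below is a minimal, hypothetical example in TypeScript; the thresholds and the runPreflight name are my own illustrative assumptions, not anything from the text or from our own testing.

```typescript
// Hypothetical pre-flight capability check for a browser-delivered module.
// Thresholds (minWidth, minDownlinkMbps) are illustrative placeholders.

interface CheckResult {
  name: string;
  passed: boolean;
  detail: string;
}

function checkGraphics(): CheckResult {
  const canvas = document.createElement("canvas");
  const gl = canvas.getContext("webgl");
  if (!gl) {
    return { name: "graphics", passed: false, detail: "WebGL not available" };
  }
  // The debug extension exposes the actual renderer (e.g. the GPU model),
  // which would have flagged the low-fidelity graphics card in the anecdote.
  const ext = gl.getExtension("WEBGL_debug_renderer_info");
  const renderer = ext
    ? String(gl.getParameter(ext.UNMASKED_RENDERER_WEBGL))
    : "unknown renderer";
  return { name: "graphics", passed: true, detail: renderer };
}

function checkScreen(minWidth = 1280): CheckResult {
  const width = window.screen.width;
  return {
    name: "screen",
    passed: width >= minWidth,
    detail: `${width}px wide (minimum ${minWidth}px)`,
  };
}

function checkConnection(minDownlinkMbps = 5): CheckResult {
  // navigator.connection is not supported in every browser, so treat a
  // missing value as "unknown" rather than a failure.
  const connection = (navigator as any).connection;
  if (!connection || typeof connection.downlink !== "number") {
    return { name: "network", passed: true, detail: "downlink unknown" };
  }
  return {
    name: "network",
    passed: connection.downlink >= minDownlinkMbps,
    detail: `${connection.downlink} Mbps (minimum ${minDownlinkMbps} Mbps)`,
  };
}

// Run all checks so the evaluator sees the learner's environment
// before interpreting their feedback.
export function runPreflight(): CheckResult[] {
  return [checkGraphics(), checkScreen(), checkConnection()];
}
```

In the 3D fidelity case above, logging the reported renderer string next to each evaluation response would have surfaced the graphics-card difference without anyone having to hunt for it.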


Question 2: Extending Your Skills (pg. 178). Select one case study of your choice and share your thoughts on the evaluation approach as described in your selected case.

Homer is at it again in Case Study #2. He's made the call to take a mixed-method approach to the training interventions he's been tasked with delivering, choosing a portion of online experiences followed by hands-on, face-to-face training. So far so good.

Homer's plan is to frame the evaluation around Kirkpatrick's levels of evaluation. Because we are in the business world, the CEO is naturally most interested in Level 5 (ROI). Interestingly, the major questions that Homer prepares still manage to hit four of the five levels. I think this is a good example of how an instructional designer can work within the business realm, with its ever-present focus on revenue, and still represent their core values in delivering high-quality instruction.

At the end of the piece it mentions that the CEO is interested in bringing in a consultant, and it's unclear how Homer really takes this. The paragraph ends with him being tasked with finding the consultant but then just pressing forward with his evaluation plans. I think the consultant might be a solid idea, so long as the consultant is truly an expert; this would be similar to a Delphi-style expert review. The one thing I would caution against is taking the product straight from the safety expert's evaluation to the end user. These should be separate tasks: use the expert's analysis for quality, compliance, and best practices, but then make sure the final prototype evaluation is done with some of the people who will actually be using it.
