Under consideration for the next cohort of the Digital Learning Asset Framework: assessment standards.
- What properties does a well-formed assessment possess?
- What is it actually assessing?
- Which content does this relate to?
- How do we store this information?
We in Learning & Development seem to lack a way of even describing the kinds of assessments we want to make. Even simple math is often a problem for us.
Let’s all agree that the following mistake should NEVER EVER BE MADE:
Pass/Fail threshold = 80%
Number of questions = 4
We can all see that this doesn’t work, that this is essentially a 100% pass/fail threshold. It should be painfully obvious that:
- The Learner either answers 3 out of 4 questions correctly, which is 75% (FAIL!!!)
- Or they answer 4 out of 4 questions correctly, which is 100% (PASS)
- There is a 0% chance of the Learner scoring anything in between a 75% and 100%.
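The arithmetic above can be sketched in a few lines of Python (this is purely illustrative, not part of any Framework tooling): with all-or-nothing questions, the only achievable scores on a 4-question assessment are multiples of 25%, so the effective pass mark is the smallest achievable score at or above the advertised threshold.

```python
# All achievable percentage scores on a 4-question, all-or-nothing assessment.
n_questions = 4
threshold = 80  # the advertised pass mark (%)

scores = [100 * correct / n_questions for correct in range(n_questions + 1)]
print(scores)  # [0.0, 25.0, 50.0, 75.0, 100.0]

# The effective pass mark is the smallest achievable score >= threshold.
effective = min(s for s in scores if s >= threshold)
print(effective)  # 100.0 -- the advertised "80%" is really a 100% threshold
```

Nothing between 75% and 100% is achievable, so the 80% label is misleading on its face.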
Yet this is the situation that we present our Learners with all too often. Why? Probably because updates were made along the way and questions were removed. Or perhaps a very well-intentioned (and perhaps even well-certified) Instructional Designer was simply moving too fast to notice, due to job pressures. These things are entirely predictable. The real reason is that we have no clear standard that we can point to and say “This doesn’t conform to spec, see?”
There is nothing new here, we humans have had the math we need to prevent such problems for a few millennia now. What we have yet to do is align this math with the work product that we create.
We can choose to fix this with the next iteration of the Digital Learning Asset Framework. Do you think the resource below would be valuable?
- 1-3 questions = Why bother scoring at all? Don’t kid yourself, this isn’t a real assessment.
- 4 questions = 75% pass (Learner can miss 1), the bare minimum if you call yourself a professional
- 5 questions = 80% (Learner can miss 1)
- 6 questions = drop one question
- 7 questions = add one more question
- 8 questions = 75% pass (Learner can miss 2)
- 9 questions = drop or add one question
- 10 questions = 70% (Learner can miss 3) or 80% pass (Learner can miss 2) or 90% if you’re being mean (Learner can miss 1)
- 11 questions = drop or add one question
- 12 questions = 75% pass (Learner can miss 3)
- 13 questions = drop one question
- 14 questions = add one more question
- 15 questions = 80% pass (Learner can miss 3)
- 16 questions = 75% pass (Learner can miss 4)
- 17 questions = drop one question
- 18 questions = drop/add two questions
- 19 questions = add one more question
- 20 questions = either 70% (Learner can miss 6) or 75% (Learner can miss 5) or 80% (Learner can miss 4) or 85% (Learner can miss 3) or 90% (Learner can miss 2) or, if you’re feeling mean, 95% (Learner can miss 1)
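The whole table reduces to one formula: a Learner must answer at least ⌈n × threshold⌉ questions correctly. A minimal Python sketch (the function name `effective_pass` is my own, not anything in the Framework) lets you spot-check any row, or catch the degenerate cases where the advertised threshold silently becomes 100%:

```python
import math

def effective_pass(n_questions: int, threshold_pct: float):
    """Given a question count and an advertised pass mark, return
    (questions required, misses allowed, effective cutoff %)."""
    required = math.ceil(n_questions * threshold_pct / 100)
    misses = n_questions - required
    return required, misses, 100 * required / n_questions

# Spot-check rows from the table above:
print(effective_pass(5, 80))   # (4, 1, 80.0)
print(effective_pass(10, 70))  # (7, 3, 70.0)
print(effective_pass(20, 85))  # (17, 3, 85.0)

# And the degenerate case that started this discussion:
print(effective_pass(4, 80))   # (4, 0, 100.0) -- "80%" means a perfect score
```

A rule of thumb falls out immediately: the advertised threshold is honest only when the effective cutoff it returns matches the advertised number.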
It’s not hard. So let’s make it simple. This isn’t rocket science; it’s not even algebra.
By including some assessment standards in the next iteration of the Framework, we would not be preventing this problem from ever happening again. But we would be identifying such things as errors, disqualifying them from the Framework, and providing a better experience to our Learners through the use of the Framework.
Does this belong in what we tackle next? Please comment below.