When we create assessments for teaching purposes we attend to one key question:
Will our assessments identify what concepts our students DO NOT KNOW?
Why focus on what they do not know?
Good teaching is about helping students acquire skills they do not already have. Continuing to harp on topics students have already mastered is very frustrating. Moving forward without mastering concepts is even more frustrating. The concepts they do not know are exactly the concepts where teaching should be applied. Assessing for what students do not know gives us the information we need for focused, applicable teaching. In education, what you don't know can really hurt you.
Sampling for sorting
We recently worked on a training program for certifying commercial drone pilots. The purpose of the FAA Part 107 exam is to separate those who are allowed to fly drones from those who are not. It is a classic sorting assessment.
One section of the tutorial videos focused on reading aviation charts to determine what airspace was safe for drone flights. When we reviewed the videos we found about 60 concepts being taught in 43 minutes of video.
At the end of the videos, students were tested with 8 multiple-choice questions. Using 8 questions to assess 60 concepts is not likely to produce good coverage. But when we consider the purpose of this course (passing the FAA exam), it just might work. You only need 70% right to pass the exam. If you studied without first seeing the quiz and got 7 out of 8 questions right, then it is likely that you will score greater than 80% on the larger exam.
When trying to sort success from failure, sampling is an effective assessment technique. If we take a sample of the concepts and the student knows 80% of the sample, then it is likely that they know roughly 80% of all the concepts.
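The sampling logic above can be sketched with a quick Monte Carlo simulation. This is an illustration only, not part of the course materials; the 60-concept pool, 8-question quiz, and 80% mastery rate are taken from the numbers in this article, and the independence assumptions are ours:

```python
import random

random.seed(42)

NUM_CONCEPTS = 60   # concepts taught in the videos
QUIZ_SIZE = 8       # questions sampled per quiz
MASTERY_RATE = 0.8  # assumed fraction of concepts the student knows
TRIALS = 100_000

quiz_scores = []
for _ in range(TRIALS):
    # A hypothetical student who has mastered exactly 80% of the concepts.
    known = set(random.sample(range(NUM_CONCEPTS), int(MASTERY_RATE * NUM_CONCEPTS)))
    # The quiz samples 8 of the 60 concepts at random.
    quiz = random.sample(range(NUM_CONCEPTS), QUIZ_SIZE)
    quiz_scores.append(sum(1 for q in quiz if q in known) / QUIZ_SIZE)

mean_score = sum(quiz_scores) / TRIALS
print(f"average quiz score: {mean_score:.2%}")  # tracks the 80% mastery rate

# But any single 8-question sample is noisy: how often does it miss badly?
low = sum(1 for s in quiz_scores if s < 0.7) / TRIALS
print(f"quizzes scoring below 70%: {low:.1%}")
```

On average the sample score matches the true mastery rate, which is why sampling works for sorting. The second number shows the catch: a single small sample can misjudge an individual student, which foreshadows why sampling is the wrong tool when genuine mastery is the goal.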
If, however, what we want is not pilots who can pass a test but pilots who really know how to read an aviation chart so they can fly safely, sampling is a problem. To fly safely, you must actually master the concepts. You cannot fly safely if 20% of the time you have no clue what the chart says about your airspace. That is like driving when 20% of the time you don't know which lane you should be in.
Assessing what they do not know
If we want to sort students into bins (license, no license), then we do a sampling test of what they know. If we want to assess in order to guide our teaching, then we are hunting for what students do not know. That requires a different kind of assessment.
The following example is a programming problem that occurs frequently with first-time programmers. The correct answer is B (the semicolon should be removed). Most students get this right.
Below is the same problem, except that it is not multiple choice. Instead, the students must find the error on their own and circle it using digital ink. The figure shows examples of some of the answers that students gave. Far fewer students got this problem right.
The reason for the reduced performance is that the actual error was not presented as a choice; students had to find it for themselves. Because a fixed set of choices was not presented, the students had a harder time ruling out possibilities. They had to actually understand the program to get the problem right. Notice that, freed from the 3 fixed choices, students made many more kinds of mistakes. We can now see all of the various mistakes that our students make. As we see all of the ways in which students misunderstand, we can start to consider ways to help. The multiple-choice problem masks these misunderstandings by not offering them as choices. We cannot remediate what we cannot see.
What is not shown above is what students did when they had no clue what the answer was. Some wrote "I don't know" using the digital ink. Others scribbled out the problem. Some creative ones drew smiley faces, and others just begged for grading mercy. Taken together, such answers mean "I don't know." In the multiple-choice form of the problem, these students would just pick an answer, with a 1/3 chance of getting it right. As teachers we would be left blind to their misunderstanding.
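The guessing effect is worth quantifying. A student who truly knows a fraction k of the material and guesses the rest will appear to score about k + (1 - k)/3 on three-choice questions. The short simulation below is our own illustration of that arithmetic; only the three-choice count comes from the problem described above:

```python
import random

random.seed(1)

CHOICES = 3        # answer options, as in the problem above
TRIALS = 100_000   # simulated questions per knowledge level

def apparent_mc_score(true_knowledge: float) -> float:
    """Fraction of multiple-choice questions answered correctly when the
    student knows `true_knowledge` of the material and guesses the rest."""
    correct = 0
    for _ in range(TRIALS):
        if random.random() < true_knowledge:
            correct += 1                    # actually knows the answer
        elif random.random() < 1 / CHOICES:
            correct += 1                    # lucky guess
    return correct / TRIALS

for k in (0.0, 0.3, 0.6):
    print(f"true knowledge {k:.0%} -> apparent MC score {apparent_mc_score(k):.0%}")
```

Even a student who knows nothing appears to score about a third, and every real knowledge level is inflated toward the passing line. Free-response formats like the digital-ink version collapse all of that noise back into a visible "I don't know."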
A major reason for using multiple-choice questions is that computers can grade them easily, removing the heavy grading burden from the instructor. QuizTeq, however, can automatically grade a variety of question forms, such as digital ink, drag and drop, and short-answer text.
As teachers we have a limited amount of time to provide individualized instruction. QuizTeq's grading mechanisms can relieve the tedium of assessing performance, but we still need to create teaching materials that address student misunderstandings.
In Figure 2 we only showed examples of student answers. The QuizTeq grading tool uses a proprietary overlay presentation that shows you where the most common marks are. In addition, we provide analytic data that tells you how many students are making particular errors. This lets you focus your limited time on the misconceptions that are most common.
A huge advantage of QuizTeq’s autograding comes next year or next semester when you teach the topic again. All the grading effort can be reused. The additional teaching materials you created will still be there and will be automatically applied next year. When these problems are presented again you can use the analytics to see how effective your materials were in repairing the knowledge gaps you found before. By comparing the analytics of one semester against the analytics of a subsequent semester you can identify whether your changes are doing any good.