In recent times, many teachers and learners have been forced to continue teaching and learning in a relatively unknown space. Many schools, educators, and learners have embraced the new reality of remote learning. But still, in the back of our minds, we cannot quite get our heads around assessment. We remain concerned about the accuracy, validity, security, integrity, and overall quality of online assessment.


What we should not forget, though, is that the hardest part of developing good online assessments often happens offline: making sure they are reliable, fair, valid, and transparent. With those concerns in mind, let us look at some guidelines for developing assessments that meet those standards; in other words, assessments that are "good".


Remember the Before, During and After of Assessment
Assessment doesn't just happen after an online lesson. It can (and should) happen before, during, and after the learning. In reality, assessment has a triple function: diagnostic (before the learning, to establish where learners are starting from), formative (during the learning, to monitor progress and adjust teaching), and summative (after the learning, to judge what has been achieved).


Know why we want to assess
Assessments should really be about measuring learning outcomes. And learning outcomes should be about learners demonstrating what they know, and more importantly, what they can do (skills).
Learning outcomes can be low-level (recalling information) or high-level (analysing information). A great, time-tested resource to help us understand the various levels of learning, which we can then assess, is Bloom’s Taxonomy.


Choose the right tool to assess the right set of skills
There are many different types of assessments: tests, projects, performance-based tasks, essays, and more. Each has a particular function and so may be appropriate or inappropriate depending on what we want to assess. Choosing the right assessment tool or method is therefore very important.


Open or Closed
There are broadly two types of assessment methods: select-response methods and constructed-response methods. With select-response methods, our online learners "select" an answer from a set of options; True/False and multiple-choice items are the most common. Select-response items are good for recall and recognition of facts, and for limited types of reasoning, but they are poor at assessing learner skills.
 

With constructed-response methods, our learners "construct" or "supply" their own response. Constructed-response items are good for descriptions and explanations. Simple constructed-response items (fill-in-the-blank) still measure fairly low-level skills, but more open constructed-response assessments (such as essays) can assess deeper knowledge and learner thinking. When you use constructed-response systems, it is critical to review learner responses before releasing feedback.
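To make the distinction concrete, here is a minimal sketch in Python of how a grading routine might treat the two item types differently: select-response items are scored automatically against a key, while constructed responses are held for a teacher to review before any feedback is released. The item structure and field names are hypothetical and not taken from any particular platform.

    # Minimal sketch: auto-score select-response items, queue constructed
    # responses for human review before feedback is released.
    # The Item structure below is hypothetical, not from any real LMS.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Item:
        kind: str                          # "select" (MCQ, True/False) or "constructed" (short answer, essay)
        prompt: str
        answer_key: Optional[str] = None   # only meaningful for select-response items

    def score(item: Item, response: str) -> dict:
        if item.kind == "select":
            # Select-response: objective, so it can be scored immediately.
            correct = response.strip().lower() == item.answer_key.strip().lower()
            return {"status": "scored", "correct": correct}
        # Constructed-response: hold feedback until a teacher has reviewed the answer.
        return {"status": "needs_review", "response": response}

    mcq = Item("select", "Which gas do plants absorb during photosynthesis?", "carbon dioxide")
    essay = Item("constructed", "Explain why photosynthesis matters for life on Earth.")
    print(score(mcq, "Carbon dioxide"))             # {'status': 'scored', 'correct': True}
    print(score(essay, "Plants use light to ..."))  # {'status': 'needs_review', ...}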


Multiple-choice tests
Multiple-choice tests are a frequent feature of online learning. They are easy to take and easy to mark, but good multiple-choice tests are actually quite hard to create; the skill can, however, be mastered. Here are some guidelines for writing multiple-choice items:

  • Make the stem clear, make it a question, and make it brief.
  • Make responses direct, with no extra, meaningless information.
  • Don’t use "All of the above" or "None of the above".
  • Make sure all of your options, the correct answer and the distractors (the incorrect answers), are consistent in length, style, and grammar.
  • Increase the plausibility of distractors by basing them on common learner errors. Remember, the point of distractors is to reveal where learners have weaknesses so that we can address these errors in thinking.
  • Pay attention to language.
  • Avoid categorical terms such as "always" and "never".
  • Avoid double negatives.
  • Distractors should illuminate common learner misconceptions.


We also want to be able to differentiate among high-level, medium-level, and low-level performers. So, again, we really have to pay special attention to how we construct multiple-choice items.

  • Use four possible responses, not three.
  • Each question should stand alone. Don’t relate or connect questions.
  • Finally, remember that the point of a test is validity: it must actually measure the outcome it claims to measure.
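As a rough illustration of the guidelines above, here is a hypothetical multiple-choice item written out as a small Python structure, with a brief question stem, four options of consistent style, no "All of the above" or "None of the above", and distractors chosen to surface plausible misconceptions. The content and field names are invented for illustration, not drawn from the source article.

    # Hypothetical item illustrating the multiple-choice guidelines above.
    mc_item = {
        # The stem is brief, phrased as a question, and free of extra information.
        "stem": "Which process do plants use to turn light energy into chemical energy?",
        "options": {
            "A": "Photosynthesis",   # correct answer
            "B": "Respiration",      # distractor: commonly confused with the reverse process
            "C": "Transpiration",    # distractor: another plant process learners mix up
            "D": "Fermentation",     # distractor: energy-related, but the wrong context
        },
        "answer": "A",
        # Four options, similar length and style, no "All/None of the above",
        # and each distractor targets a plausible learner misconception.
    }

    def check(item: dict, choice: str) -> bool:
        """Return True if the learner's choice matches the answer key."""
        return choice.strip().upper() == item["answer"]

    print(check(mc_item, "b"))  # False: this learner confused photosynthesis with respiration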


Cognitive skills and questioning techniques – Making use of taxonomies
Bloom's Taxonomy is a great resource for designing assessments: it pairs each level of learning with verbs we can use to phrase outcomes and questions, and that shared language helps scaffold the creation of your assessment items.
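As a sketch of that idea, the snippet below maps the six levels of the revised taxonomy to a few sample verbs and turns a level plus a topic into a draft learning-outcome stem. The verb lists, helper function, and example topics are illustrative assumptions, not taken from the source article.

    # Illustrative only: sample verbs for the levels of the revised Bloom's Taxonomy.
    BLOOM_VERBS = {
        "remember":   ["define", "list", "recall"],
        "understand": ["explain", "summarise", "classify"],
        "apply":      ["use", "demonstrate", "solve"],
        "analyse":    ["compare", "differentiate", "organise"],
        "evaluate":   ["justify", "critique", "judge"],
        "create":     ["design", "construct", "propose"],
    }

    def outcome_stem(level: str, topic: str) -> str:
        """Turn a taxonomy level and a topic into a draft learning-outcome stem."""
        verb = BLOOM_VERBS[level][0]
        return f"Learners will be able to {verb} {topic}."

    print(outcome_stem("remember", "the stages of the water cycle"))
    print(outcome_stem("analyse", "the stages of the water cycle in two different climates"))

Even with a scaffold like this, developing good online assessments is hard.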

  • We need to know why we are assessing.
  • Assessments must be aligned with learning outcomes and instructional activities.
  • Assessments demand precise language and a good deal of revision.
  • Above all, we need to use the right assessment tool to measure the right skill.

Without these practices, even the best technology will not save a poorly designed assessment.
