Randomised Items in computer-based tests: Russian roulette in assessment?

12 Feb 2016

Computer-based assessments are becoming more commonplace, perhaps out of necessity as faculty cope with large class sizes. These tests often take place in large computer testing venues where test security may be compromised. To limit the likelihood of cheating in such venues, testing software is typically programmed to randomise the presentation of items, so that neighbouring screens present different items to each test-taker. This article argues that randomisation of test items can disadvantage students who happen to be presented with difficult items first. Such disadvantage would violate the American Psychological Association's published guidelines on testing and assessment, which call for fairness to test-takers across diverse test modes. Because the chance of a student being randomly assigned difficult items first is small, such disadvantage may be hard to prove. However, even if only one test-taker is affected once during a high-stakes test, the principle of fairness is compromised. This article reports on four instances out of about 400 in which students may have been unfairly advantaged or disadvantaged by receiving a run of easy or difficult items at the beginning of the test. Although the results are not statistically significant, we conclude that more research is needed before one can dismiss what we have named the Item Randomisation Effect.
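The rarity of the event at issue can be made concrete with a small simulation. The sketch below (a hypothetical illustration, not the authors' method) estimates the chance that a uniformly random item ordering places all of the hardest items at the very start of a test; the item counts (10 items, 3 "hard" ones) are assumptions chosen purely for illustration, and the Monte Carlo estimate is compared against the exact value 1 / C(n, k).

```python
import random
from math import comb

def prob_hard_first(n_items=10, n_hard=3, trials=200_000, seed=1):
    """Monte Carlo estimate of the chance that a uniformly random
    item order places all n_hard hardest items in the first n_hard
    positions. Item counts here are purely illustrative."""
    rng = random.Random(seed)
    hard = set(range(n_hard))  # label the hardest items 0..n_hard-1
    hits = 0
    for _ in range(trials):
        order = list(range(n_items))
        rng.shuffle(order)  # uniformly random presentation order
        if set(order[:n_hard]) == hard:
            hits += 1
    return hits / trials

# Exact probability for comparison: 1 / C(n_items, n_hard)
estimate = prob_hard_first()
exact = 1 / comb(10, 3)  # = 1/120 ≈ 0.0083
```

Even with these deliberately small numbers the event occurs in under 1% of orderings; with realistic test lengths (say, 40 items) the probability shrinks to roughly one in 650,000, which is why the effect is hard to detect yet, as the article argues, never zero for any individual test-taker.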