Multiple-Choice Tests: Objectivity or Delusion in Assessment? A Comparative Analysis from Romania and Moldova

Authors

  • Roza Dumbraveanu, “Ion Creangă” State Pedagogical University of Chişinău, Moldova
  • Valeria Baciu, “Ion Creangă” State Pedagogical University of Chişinău, Moldova
  • Gabriela Grosseck, West University of Timisoara, Romania
  • Daniel Milencovici, West University of Timisoara, Romania

DOI:

https://doi.org/10.18662/rrem/17.3/1032

Keywords:

multiple-choice tests, digital assessment, higher education, constructive alignment, comparative analysis (Romania–Moldova)

Abstract

Multiple-choice tests (MCTs) are widely used in student assessment because of their perceived objectivity, validity, and reliability. In the digital age, MCTs have become increasingly popular, supported by platforms that enable automated grading and real-time feedback. While MCTs offer benefits such as quick grading, reduced evaluator bias, and scalability to large cohorts, their effectiveness in accurately assessing student learning, critical thinking, and deep conceptual understanding remains a matter of ongoing scholarly debate. Overreliance on such testing may oversimplify knowledge, encourage surface learning, and limit the assessment of higher-order cognitive skills, raising essential questions about the pedagogical value of MCTs, particularly when they serve as the main tool for evaluating student achievement. The objectives of this paper are to identify the benefits and limitations of MCTs in higher education as described in the literature and to compare practices in Romania and Moldova, with a focus on validity, fairness, and impact on student learning outcomes. The study is based on qualitative data collected through surveys and interviews with representatives of the higher education target groups in both contexts. Findings highlight common concerns related to guessing the correct answers, the risk of teaching to the test, misleading question design, and weak alignment between learning outcomes and assessment practices. The study suggests remediating the identified problems by integrating MCTs into a broader, balanced assessment strategy grounded in the principles of constructive alignment to support students’ competence development.
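To make the guessing concern concrete, consider a standard psychometric illustration (not a result from this study): on a test of k items with n options each, a student who answers entirely at random still earns an expected raw score of k/n, i.e., 25% on four-option items. The classical correction for guessing,

    S_c = R − W / (n − 1),

where R and W are the counts of right and wrong answers, has an expected value of zero under blind guessing, which is why uncorrected MCT scores systematically overstate the knowledge of students who guess.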

References

Alordiah, C., & Oji, J. (2024). Test equating in educational assessment: A comprehensive framework for promoting fairness, validity, and cross-cultural equity. Asian Journal of Assessment in Teaching and Learning, 14(1). https://doi.org/10.37134/ajatel.vol14.1.7.2024

Bateson, A., & Dardick, W. R. (2020). A comparison of the two-option versus the four-option multiple-choice item: A case for fewer distractors. Personnel Assessment and Decisions, 6(3), Article 5. https://doi.org/10.25035/pad.2020.03.005

Berg, P., & Singh, A. (2024). Myths of official measurement: Limits to test-based education reforms with weak governance. Journal of Public Economics, 239, Article 105246. https://doi.org/10.1016/j.jpubeco.2024.105246

Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32, 347–364. https://doi.org/10.1007/BF00138871

Biggs, J., & Tang, C. (2011). Teaching for quality learning at university (4th ed.). Open University Press.

Butler, A. C. (2018). Multiple-choice testing in education: Are the best practices for assessment also good for learning? Journal of Applied Research in Memory and Cognition, 7(3), 323–331. https://doi.org/10.1016/j.jarmac.2018.07.002

Butler, A. C., Karpicke, J. D., & Roediger, H. L., III. (2007). The effect of type and timing of feedback on learning from multiple-choice tests. Journal of Experimental Psychology: Applied, 13(4), 273–281. https://doi.org/10.1037/1076-898X.13.4.273

Creţu, D. M., & Grosseck, G. (2025). A bibliometric analysis of Romanian educational research in Web of Science: Trends, challenges, and opportunities for global integration. Education Sciences, 15(3), 358. https://doi.org/10.3390/educsci15030358

DiBattista, D., & Kurzawa, L. (2011). Examination of the quality of multiple-choice items on classroom tests. The Canadian Journal for the Scholarship of Teaching and Learning, 2(2). https://doi.org/10.5206/cjsotl-rcacea.2011.2.4

Douglas, K. A., & Purzer, Ş. (2015). Validity: Meaning and relevancy in assessment for engineering education research. Journal of Engineering Education, 104(2), 108–118. https://doi.org/10.1002/jee.20070

Dumbraveanu, R. (2006). Evaluarea studenţilor: Strategii şi metode [Student assessment: Strategies and methods]. Universitatea Pedagogică de Stat “Ion Creangă”.

Dumbraveanu, R., & Peca, L. (2022). E-learning strategy in the elaboration of courses. International Conference on Virtual Learning, 17, 15–26. https://doi.org/10.58503/icvl-v17y202201

EHEA. (2020). Bologna Process: Bologna beyond 2020 – Fundamental values of the EHEA. Rome Ministerial Conference, European Higher Education Area. https://ehea.info/Upload/Rome_Ministerial_Communique.pdf

ENQA. (2015). Standards and guidelines for quality assurance in the European Higher Education Area (3rd ed.). European Association for Quality Assurance in Higher Education. https://www.enqa.eu/wp-content/uploads/2015/11/ESG_2015.pdf

European Commission. (2023). Council Recommendation on improving the provision of digital skills. https://education.ec.europa.eu/focus-topics/digital-education/action-plan/council-recommendation-improving-the-provision-of-digital-skills

Goss, H. (2022). Student learning outcomes assessment in higher education and in academic libraries: A review of the literature. The Journal of Academic Librarianship, 48(2), Article 102485. https://doi.org/10.1016/j.acalib.2021.102485

Grosseck, G., Bran, R. A., & Ţîru, L. G. (2024). Digital assessment: A survey of Romanian higher education teachers’ practices and needs. Education Sciences, 14(1), 32. https://doi.org/10.3390/educsci14010032

Haladyna, T. M., & Rodriguez, M. C. (2013). Developing and validating test items. Routledge. https://doi.org/10.4324/9780203850381

Jovanovska, J. (2018). Designing effective multiple-choice questions for assessing learning outcomes. Infotheca, 18(1), 25–42. https://doi.org/10.18485/infotheca.2018.18.1.2

Kaur, M., Singla, S., & Mahajan, R. (2016). Item analysis of in-use multiple choice questions in pharmacology. International Journal of Applied & Basic Medical Research, 6(3), 170–173. https://doi.org/10.4103/2229-516X.186965

Kent-Waters, J., Seago, O., & Smith, L. (2018). The compendium of assessment techniques in higher education: From students' perspectives. Leeds Institute for Teaching Excellence. https://teachingexcellence.leeds.ac.uk/wp-content/uploads/sites/89/2018/10/PUGHcompendiumcomplete.pdf

Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory Into Practice, 41(4), 212–218. http://www.jstor.org/stable/1477405

Kuechler, W. L., & Simkin, M. G. (2010). Why is performance on multiple-choice tests and constructed-response tests not more closely related? Theory and an empirical test. Decision Sciences Journal of Innovative Education, 8(1), 55–73. https://doi.org/10.1111/j.1540-4609.2009.00243.x

McKenna, P. (2018). Multiple choice question: Answering correctly or knowing the answers. International Conference e-Learning 2018. https://files.eric.ed.gov/fulltext/ED590289.pdf

Ministerul Educaţiei şi Cercetării. (2015). Strategia naţională pentru învăţământ terţiar 2015–2020 [National strategy for tertiary education 2015–2020]. https://www.edu.ro/sites/default/files/fisiere%20articole/Strategie_inv_tertiar_2015_2020.pdf

Nicol, D. (2007). E-assessment by design: Using multiple-choice tests to good effect. Journal of Further and Higher Education, 31(1), 53–64. https://www.liverpool.ac.uk/media/livacuk/cll/eddev-files/iteach/pdf/nicole_assessment_by_design_using_MCQs.pdf

Nitko, A. J., & Brookhart, S. M. (2014). Educational assessment of students (8th ed.). Pearson Education.

Oc, Y., & Hassen, H. (2025). Comparing the effectiveness of multiple-answer and single-answer multiple-choice questions in assessing student learning. Marketing Education Review, 35(1), 44–57. https://doi.org/10.1080/10528008.2024.2417106

Pepple, D. J., Young, L. E., & Carroll, R. G. (2010). A comparison of student performance in multiple-choice and long essay questions in the MBBS stage I physiology examination at the University of the West Indies. Advances in Physiology Education, 34(2), 86–89. https://doi.org/10.1152/advan.00087.2009

Pitt, E., & Quinlan, K. (2022). Impacts of higher education assessment and feedback policy and practice on students: A review of the literature 2016–2021. University of Kent. https://www.advance-he.ac.uk/knowledge-hub/

Popham, W. J. (2008). Transformative assessment. ASCD. https://www.perlego.com/book/3292589/transformative-assessment-pdf

Rashwan, N. I., Aref, S. R., Nayel, O. A., & Rizk, M. H. (2024). Post examination item analysis of undergraduate pediatric multiple-choice questions: Implications for developing a validated question bank. BMC Medical Education, 24, 168. https://doi.org/10.1186/s12909-024-05153-3

Republic of Moldova, Government. (2023). Strategia de dezvoltare „Educaţia 2030” [“Education 2030” development strategy] (Government Decision No. 114/2023). Ministry of Education and Research. https://www.legis.md/cautare/getResults?doc_id=136600&lang=ro

Rios, J. A., Deng, J., & Ihlenfeldt, S. D. (2022). To what degree does rapid guessing distort aggregated test scores? A meta-analytic investigation. Educational Assessment. https://doi.org/10.1080/10627197.2022.2110465

Rodriguez, M. C. (2003). Construct equivalence of multiple-choice and constructed-response items: A random effects synthesis of correlations. Journal of Educational Measurement, 40(2), 163–184. https://www.jstor.org/stable/1435344

Rodriguez-Torrealba, R., Garcia-Lopez, E., & Garcia-Cabot, A. (2025). Joint generation of distractors for multiple-choice questions: A text-to-text approach. Computers, Materials & Continua, 83(2), 1683–1705. https://doi.org/10.32604/cmc.2025.062004

Schmidt, H. U. (2019). Learning outcomes: Core issues in higher education. In D. Staub (Ed.), Quality assurance and accreditation in foreign language education (pp. 233–247). Springer. https://doi.org/10.1007/978-3-030-21421-0_12

Scouller, K. (1998). The influence of assessment method on students' learning approaches: Multiple choice question examination versus assignment essay. Higher Education, 35(4), 453–472. https://doi.org/10.1023/A:1003196224280

Simkin, M. G., & Kuechler, W. L. (2005). Multiple-choice tests and student understanding: What is the connection? Decision Sciences Journal of Innovative Education, 3(1), 73–98. https://doi.org/10.1111/j.1540-4609.2005.00053.x

Slepkov, A. D., Van Bussel, M. L., Fitze, K. M., & Burr, W. S. (2021). A baseline for multiple-choice testing in the university classroom. SAGE Open, 11(2). https://doi.org/10.1177/21582440211016838

Solano-Flores, G., & Nelson-Barber, S. (2001). On the cultural validity of science assessments. Journal of Research in Science Teaching, 38(5), 553–573. https://doi.org/10.1002/tea.1018

Tarrant, M., & Ware, J. (2012). A comparison of the psychometric properties of three- and four-option multiple-choice questions in nursing assessments. Nurse Education Today, 32(4), 345–349. https://doi.org/10.1016/j.nedt.2011.05.002

Published

2025-09-19

How to Cite

Dumbraveanu, R., Baciu, V., Grosseck, G., & Milencovici, D. (2025). Multiple-Choice Tests: Objectivity or Delusion in Assessment? A Comparative Analysis from Romania and Moldova. Revista Romaneasca pentru Educatie Multidimensionala, 17(3), 505–531. https://doi.org/10.18662/rrem/17.3/1032

Issue

Vol. 17 No. 3 (2025)

Section

Reform, Change and Innovation in Education