An Original Dynamic Questioning System: DinaSoruS

Author:

Year-Number: 2018, Volume 10, Issue 4
Pages: 107-125

Abstract

With the spread of computer and Internet access, computer-based assessment systems have come into wider use in education. Two lines of work stand out in this area: adaptive testing systems, which aim to prepare tests tailored to individual students' abilities, and dynamic question generation systems, which aim to produce different questions from the same question template. In this study, a prototype dynamic question generation system (DinaSoruS) was developed; it has its own question preparation language and can dynamically generate different questions from the same question type. The capacity to produce different questions from the same question template can be regarded as the key factor determining the effectiveness of a dynamic assessment system. An examination of existing dynamic question generation systems shows that most of them can make only limited dynamic changes to the question text. The DinaSoruS system was designed around a question preparation language that can make substantial changes to mathematics question texts within the same question template. With the help of system-defined variables and functions, it can generate different questions by dynamically changing many elements of the question text (values, images, functions, graphs, and so on). In addition, the system can dynamically assemble multiple-choice or true/false questions by combining questions entered independently as mathematical propositions. One of the 5th-grade mathematics attainment tests prepared by the Ministry of National Education (MEB) was adapted to the system, two different tests were generated dynamically, and the effectiveness of the system was tested in schools. The findings of this field study, which confirm that DinaSoruS works effectively and correctly, are presented, and the potential contributions of DinaSoruS to measurement and evaluation are discussed.
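The mechanism summarized above, a single question template whose values (and, in the full system, images, functions, and graphs) are varied by system-defined variables and functions, can be illustrated with a minimal sketch. The Python sketch below is an assumption-laden illustration, not the actual DinaSoruS question preparation language: the template format, the generate_item helper, and the distractor rule are all hypothetical.

```python
# Minimal sketch (assumptions only): dynamic generation of multiple-choice
# items from one question template with variable placeholders.
import random

def generate_item(template, variables, answer_fn, n_choices=4, rng=None):
    """Instantiate one multiple-choice item from a question template.

    template  -- question text containing {name} placeholders
    variables -- dict mapping each placeholder to a callable that draws a value
    answer_fn -- computes the correct answer from the drawn values
    """
    rng = rng or random.Random()
    values = {name: draw(rng) for name, draw in variables.items()}
    correct = answer_fn(**values)
    # Distractor rule (an assumption for this sketch): perturb the correct answer.
    distractors = set()
    while len(distractors) < n_choices - 1:
        candidate = correct + rng.choice([-10, -3, -1, 1, 3, 10])
        if candidate != correct:
            distractors.add(candidate)
    options = [correct, *distractors]
    rng.shuffle(options)
    return {"text": template.format(**values), "options": options, "answer": correct}

# Example: the same template yields a different 5th-grade arithmetic question
# on every call, analogous to producing parallel forms from one item model.
template = ("A school buys {a} boxes of pencils with {b} pencils in each box. "
            "How many pencils does the school buy in total?")
variables = {"a": lambda rng: rng.randint(12, 48),
             "b": lambda rng: rng.randint(6, 24)}
item = generate_item(template, variables, answer_fn=lambda a, b: a * b)
print(item["text"])
print(item["options"], "answer:", item["answer"])
```

Under these assumptions, producing two dynamically generated test forms, as in the 5th-grade MEB test adaptation described above, would amount to running the generator twice over the same set of item templates with different random seeds.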

Keywords


