Peer-Reviewed

An Evaluation of Assessment-Oriented Computer-Based Text Analysis Paradigms

Received: 25 August 2017    Accepted: 8 September 2017    Published: 9 October 2017
Abstract

Computer-based text analysis applications have come a long way since Ellis Page’s Project Essay Grader [1]. Automated assessment applications have achieved better-than-human reliability, and other methods of assisting assessment have opened up additional avenues for use in the classroom and beyond. However, a lack of understanding of the differences among the types of applications and their limitations has made selecting the appropriate application a difficult task. This study presents the most comprehensive examination to date of the different paradigms of computer-based text analysis applications and a new typology for classifying them.

Published in Higher Education Research (Volume 2, Issue 4)
DOI 10.11648/j.her.20170204.12
Page(s) 111-116
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2017. Published by Science Publishing Group

Keywords

Text Analysis, Content Analysis, Natural Language Processing, Latent Semantic Analysis, Peer Review, Automated Essay Scoring, Essay Assessment, Formative Assessment

References
[1] E. B. Page, "Statistical and Linguistic Strategies in the Computer Grading of Essays," University of Connecticut, Storrs, CT, 1967.
[2] J. Zeleznikow and J. R. Nolan, "Using Soft Computing to Build Real World Intelligent Decision Support Systems in Uncertain Domains," Decision Support Systems, vol. 31, no. 2, pp. 263-285, 2001.
[3] P. J. van Vliet, "Scaling Up Student Assessment: Issues and Solutions," Journal of Higher Education Theory and Practice, vol. 16, no. 6, p. 32, 2016.
[4] L. S. Larkey, "Automated Essay Grading Using Text Categorization Techniques," in Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1998.
[5] S. M. Phillips, "Automated Essay Scoring: A Literature Review," Society for the Advancement of Excellence in Education (SAEE), Kelowna, BC, 2007.
[6] S. Valenti, F. Neri and A. Cucchiarelli, "An Overview of Current Research on Automated Essay Grading," Journal of Information Technology Education, vol. 2, pp. 319-330, 2003.
[7] D. Callear, J. Jerrams-Smith and V. Soh, "Bridging Gaps in Computerised Assessment of Texts," in Proceedings of the IEEE International Conference on Advanced Learning Technologies, Washington, 2001.
[8] S. Dikli, "An Overview of Automated Scoring Essays," The Journal of Technology, Learning, and Assessment, vol. 5, no. 1, pp. 3-35, 2006.
[9] N. H. MacDonald, L. T. Frase, P. S. Gingrich and S. A. Keenan, "The Writer's Workbench: Computer Aids for Text Analysis," IEEE Transactions on Communications, vol. 30, no. 1, pp. 105-110, 1982.
[10] E. Brent, C. Atkisson and N. Green, "Time-Shifted Online Collaboration: Creating Teachable Moments through Automated Grading," in Monitoring and Assessment in Online Collaborative Environments: Emergent Computational Technologies for E-learning Support, A. A. Juan and T. Daradoumis, Eds., Hershey, PA, IGI Global, 2009.
[11] S. W. Chan, "Beyond Keyword and Cue-Phrase Matching: A Sentence-Based Abstraction Technique for Information Extraction," Decision Support Systems, vol. 42, no. 2, pp. 759-777, 2006.
[12] M. Shortis and S. Burrows, "A Review of the Status of Online, Semi-Automated Marking and Feedback Systems," in ATN Assessment Conference 2009, RMIT University, 2009.
[13] D. G. Roussinov and H. Chen, "Document Clustering for Electronic Meetings: An Experimental Comparison of Two Techniques," Decision Support Systems, vol. 27, no. 1-2, pp. 67-79, 1999.
[14] T. K. Landauer, P. W. Foltz and D. Laham, "An Introduction to Latent Semantic Analysis," Discourse Processes, vol. 25, no. 2-3, pp. 259-284, 1998.
[15] S. Dikli, "Automated Essay Scoring," Turkish Online Journal of Distance Education, vol. 7, no. 1, 2006.
[16] L. M. Rudner, V. Garcia and C. Welch, "An Evaluation of the IntelliMetric Essay Scoring System," The Journal of Technology, Learning, and Assessment, vol. 4, no. 4, pp. 3-21, 2006.
[17] P. A. Carlson and F. C. Berry, "Calibrated Peer Review and Assessing Learning Outcomes," in 33rd ASEE/IEEE Frontiers in Education Conference, Boulder, CO, 2003.
[18] H.-C. Wang, C.-Y. Chang and T.-Y. Li, "Assessing Creative Problem-Solving with Automated Text Grading," Computers & Education, vol. 51, pp. 1450-1466, 2008.
[19] Y. Attali and J. Burstein, "Automated Essay Scoring With e-rater V. 2," The Journal of Technology, Learning, and Assessment, vol. 4, no. 3, pp. 3-30, 2006.
Cite This Article
  • APA Style

    Andrew Aken. (2017). An Evaluation of Assessment-Oriented Computer-Based Text Analysis Paradigms. Higher Education Research, 2(4), 111-116. https://doi.org/10.11648/j.her.20170204.12

    ACS Style

    Andrew Aken. An Evaluation of Assessment-Oriented Computer-Based Text Analysis Paradigms. High. Educ. Res. 2017, 2(4), 111-116. doi: 10.11648/j.her.20170204.12

    AMA Style

    Andrew Aken. An Evaluation of Assessment-Oriented Computer-Based Text Analysis Paradigms. High Educ Res. 2017;2(4):111-116. doi: 10.11648/j.her.20170204.12

  • @article{10.11648/j.her.20170204.12,
      author = {Andrew Aken},
      title = {An Evaluation of Assessment-Oriented Computer-Based Text Analysis Paradigms},
      journal = {Higher Education Research},
      volume = {2},
      number = {4},
      pages = {111-116},
      doi = {10.11648/j.her.20170204.12},
      url = {https://doi.org/10.11648/j.her.20170204.12},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.her.20170204.12},
      abstract = {Computer-based text analysis applications have come a long way since Ellis Page’s Project Essay Grader [1]. Automated assessment applications have achieved better-than-human reliability, and other methods of assisting assessment have opened up additional avenues for use in the classroom and beyond. However, a lack of understanding of the differences among the types of applications and their limitations has made selecting the appropriate application a difficult task. This study presents the most comprehensive examination to date of the different paradigms of computer-based text analysis applications and a new typology for classifying them.},
      year = {2017}
    }
    

  • TY  - JOUR
    T1  - An Evaluation of Assessment-Oriented Computer-Based Text Analysis Paradigms
    AU  - Andrew Aken
    Y1  - 2017/10/09
    PY  - 2017
    N1  - https://doi.org/10.11648/j.her.20170204.12
    DO  - 10.11648/j.her.20170204.12
    T2  - Higher Education Research
    JF  - Higher Education Research
    JO  - Higher Education Research
    SP  - 111
    EP  - 116
    PB  - Science Publishing Group
    SN  - 2578-935X
    UR  - https://doi.org/10.11648/j.her.20170204.12
    AB  - Computer-based text analysis applications have come a long way since Ellis Page’s Project Essay Grader [1]. Automated assessment applications have achieved better-than-human reliability, and other methods of assisting assessment have opened up additional avenues for use in the classroom and beyond. However, a lack of understanding of the differences among the types of applications and their limitations has made selecting the appropriate application a difficult task. This study presents the most comprehensive examination to date of the different paradigms of computer-based text analysis applications and a new typology for classifying them.
    VL  - 2
    IS  - 4
    ER  - 

Author Information
  • Department of Information Systems & Technology, Northeastern State University, Broken Arrow, USA