
Exploring Automated Essay Scoring Models for Multiple Corpora and Topical Component Extraction from Student Essays

Zhang, Haoran (2021) Exploring Automated Essay Scoring Models for Multiple Corpora and Topical Component Extraction from Student Essays. Doctoral Dissertation, University of Pittsburgh. (Unpublished)

PDF (2MB)

Abstract

Human essay grading is widely recognized as labor-intensive, so automatic scoring methods have drawn increasing attention. They reduce reliance on human effort and subjectivity over time and offer commercial benefits for standardized aptitude tests. Automated essay scoring can be defined as a method for grading student essays that achieves high agreement with human graders, where human grades exist, and requires no human effort during the scoring process. This research focuses on improving existing Automated Essay Scoring (AES) models with different technologies. We present three scoring models for grading two corpora: the Response to Text Assessment (RTA) and the Automated Student Assessment Prize (ASAP). First, a traditional machine learning model that extracts features based on semantic similarity measurement is employed for grading the RTA task. Second, a neural network model with a co-attention mechanism is used for grading source-based writing tasks. Third, we propose a hybrid model that integrates the neural network model with hand-crafted features. Experiments show that the feature-based model outperforms its baseline, while a stand-alone neural network model significantly outperforms the feature-based model. The hybrid model also outperforms its baselines, especially in a cross-prompt experimental setting. In addition, we present two investigations of using the intermediate output of the neural network model to extract keywords and key phrases from student essays and the source article. Experiments show that the keywords and key phrases extracted by our models support the feature-based AES model, and that human effort can be reduced by using automated essay quality signals during the training process.



Details

Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators/Authors: Zhang, Haoran (haz64@pitt.edu; Pitt username: haz64)
ETD Committee:
Committee Chair: Litman, Diane (dlitman@pitt.edu)
Committee Member: Kovashka, Adriana (kovashka@cs.pitt.edu)
Committee Member: Walker, Erin (eawalker@pitt.edu)
Committee Member: Correnti, Richard (rcorrent@pitt.edu)
Date: 3 May 2021
Date Type: Publication
Defense Date: 2 December 2020
Approval Date: 3 May 2021
Submission Date: 1 March 2021
Access Restriction: No restriction; Release the ETD for access worldwide immediately.
Number of Pages: 175
Institution: University of Pittsburgh
Schools and Programs: Dietrich School of Arts and Sciences > Computer Science
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: Automated Essay Scoring, Automated Writing Evaluation, Information Extraction, Natural Language Processing, Topical Components
Date Deposited: 03 May 2021 14:54
Last Modified: 03 May 2021 14:54
URI: http://d-scholarship.pitt.edu/id/eprint/40299


