
Desirable revisions of evidence and reasoning for argumentative writing

Afrin, Tazin (2022) Desirable revisions of evidence and reasoning for argumentative writing. Doctoral Dissertation, University of Pittsburgh. (Unpublished)

This is the latest version of this item.

PDF: Download (2MB)

Abstract

Successful essay writing by students typically involves multiple rounds of revision and assistance from teachers, peers, or automated writing evaluation (AWE) systems. Natural language processing (NLP) has become a key component of AWE systems, where it is used to assess the content and structure of student writing. Typically, students engage in cycles of essay drafting and revising, with or without AWE systems. After drafting an essay, students often receive formative feedback generated automatically by a system or provided by humans such as teachers or student peers. During the revision process, students then revise their text in line with the feedback to improve the essay's quality. Hence, analyzing student revisions in terms of their desirability for improving the essay is important. Current intelligent writing assistant tools typically provide instant feedback by locating problems in the text (e.g., a spelling mistake) and suggesting possible solutions, but fail to indicate whether the user has successfully implemented the feedback, especially feedback that involves higher-level semantic analysis (e.g., providing a better example). In this thesis, we take a step toward advancing automated revision analysis capabilities. First, we propose a framework for analyzing the nature of students' revisions of evidence use and reasoning in text-based argumentative essay writing tasks. Using statistical analysis, we evaluate the reliability of the proposed framework and establish the relationship of the scheme to essay improvement. We then propose computational models for the automatic classification of desirable revisions. We explore two ways to improve the prediction of revision desirability: the context of the revision, and the feedback students received before revising. To the best of our knowledge, this is the first study to explore using feedback messages for a revision classification task. Finally, we also explore how auxiliary knowledge from a different writing task might help improve the identification of desirable revisions, using a multi-task model and transfer learning.
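The classification task the abstract describes takes a revision pair (original and revised sentence) together with the feedback that prompted it, and predicts whether the revision is desirable. The toy sketch below illustrates that input/output shape only; the features and decision rule here are invented for demonstration, not the dissertation's actual models, which use trained classifiers (e.g., BERT-based and multi-task models):

```python
# Illustrative sketch of the desirable-revision classification task.
# The features and rule below are hypothetical stand-ins for the
# learned models described in the dissertation.

def tokens(text: str) -> set:
    """Lowercased tokens with surrounding punctuation stripped."""
    return {w.strip(".,!?;:").lower() for w in text.split()}

def added_tokens(original: str, revised: str) -> set:
    """Tokens present in the revised sentence but not the original."""
    return tokens(revised) - tokens(original)

def revision_features(original: str, revised: str, feedback: str) -> dict:
    """Toy features: how much the revision adds, and whether the
    added text overlaps with the feedback it responds to."""
    added = added_tokens(original, revised)
    return {
        "n_added": len(added),
        "feedback_overlap": len(added & tokens(feedback)),
    }

def is_desirable(original: str, revised: str, feedback: str) -> bool:
    """Hypothetical rule: a revision counts as desirable if it adds
    content that echoes the feedback message."""
    f = revision_features(original, revised, feedback)
    return f["n_added"] > 0 and f["feedback_overlap"] > 0

feedback = "Add specific evidence from the text to support your claim."
original = "The author shows that the program works."
revised = ("The author shows that the program works, "
           "citing evidence that attendance rose.")
print(is_desirable(original, revised, feedback))  # True under this toy rule
```

In the dissertation's actual setting, such surface features are replaced by learned representations, and the feedback message is an additional input to the classifier rather than a keyword filter.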



Details

Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators/Authors:
  Creators       Email            Pitt Username   ORCID
  Afrin, Tazin   taa74@pitt.edu   taa74           0000-0001-9795-7869
ETD Committee:
  Title             Member           Email Address       Pitt Username
  Committee Chair   Litman, Diane    dlitman@pitt.edu    dlitman
  Committee Member  Hwa, Rebecca     hwa@pitt.edu        reh23
  Committee Member  Walker, Erin     eawalker@pitt.edu   eawalker
  Committee Member  Godley, Amanda   agodley@pitt.edu    agodley
Date: 2 June 2022
Date Type: Publication
Defense Date: 1 April 2022
Approval Date: 2 June 2022
Submission Date: 23 April 2022
Access Restriction: No restriction; Release the ETD for access worldwide immediately.
Number of Pages: 157
Institution: University of Pittsburgh
Schools and Programs: School of Computing and Information > Computer Science
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: writing revision, argumentative writing, NLP, better revision, revision feedback, automated feedback, bert, transfer learning, multitask learning
Date Deposited: 02 Jun 2022 21:09
Last Modified: 02 Jun 2022 21:09
URI: http://d-scholarship.pitt.edu/id/eprint/42954


