
Discovering sentences for argumentation about the meaning of statutory terms

Savelka, Jaromir (2020) Discovering sentences for argumentation about the meaning of statutory terms. Doctoral Dissertation, University of Pittsburgh. (Unpublished)




In this work I studied, designed, and evaluated computational methods to support the interpretation of statutory terms. Understanding statutes is difficult because the abstract rules they express must account for diverse situations, even those not yet encountered. Interpretation involves investigating how a particular term has been referred to, explained, interpreted, or applied in the past. This is an important step that enables a lawyer to construct arguments in support of or against particular interpretations. A response to a search query may, however, consist of hundreds or thousands of documents, and going through the list of results manually is labor intensive. I investigated the feasibility of developing a system that would respond to a query with a list of sentences that mention the term in a way that is useful for understanding and elaborating its meaning. I treat the discovery of sentences for argumentation about the meaning of statutory terms as a special case of ad hoc document retrieval. The specifics include retrieval of short texts (sentences), specialized document types (legal case texts), and, above all, the unique definition of document relevance.
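To make the ad hoc retrieval framing concrete: a baseline for this kind of task is to score each candidate sentence against the query term with a standard lexical ranking function such as BM25. The sketch below is purely illustrative (the standard Okapi BM25 formula over whitespace tokens), not the dissertation's actual pipeline; the sample sentences are invented.

```python
import math
from collections import Counter

def bm25_rank(query, sentences, k1=1.5, b=0.75):
    """Rank sentences against a query using Okapi BM25.

    Illustrative baseline: treats each sentence as a short document,
    as in ad hoc document retrieval. Tokenization is naive
    (lowercased whitespace split).
    """
    docs = [s.lower().split() for s in sentences]
    q_terms = query.lower().split()
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    # Document frequency of each term across the sentence collection.
    df = Counter()
    for d in docs:
        for t in set(d):
            df[t] += 1
    scored = []
    for i, d in enumerate(docs):
        tf = Counter(d)
        score = 0.0
        for t in q_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[t] * (k1 + 1) / norm
        scored.append((score, i))
    return [sentences[i] for _, i in sorted(scored, reverse=True)]
```

A lexical ranker like this is a natural point of comparison for the task, since relevance here is defined by usefulness for interpretation rather than by topical match alone.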

This work makes a number of contributions to the areas of legal information retrieval and legal text analytics. First, a novel task of discovering sentences for argumentation about the meaning of statutory terms is proposed. The task includes analyzing past treatment of a statutory term, something lawyers routinely do using a combination of manual and computational approaches. Second, a data set comprising 42 queries (26,959 sentences) was assembled to support the experiments presented here. Third, by systematically assessing the performance of a considerable number of traditional information retrieval techniques, I position this novel task in the context of the large body of work on ad hoc document retrieval. Fourth, I assembled a unique list of 129 descriptive features that model the retrieved sentences, their relationships to the terms of interest, and the statutory provisions they come from. I demonstrate how the proposed feature set can be utilized in learning-to-rank settings by showing that a number of machine learning algorithms learn to rank the sentences with reasonable effectiveness. Fifth, I analyze the effectiveness of fine-tuning pre-trained language models on this special task and demonstrate a very promising direction for future work.
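The learning-to-rank setting described above can be sketched in its simplest pointwise form: a scoring model is fit on per-sentence feature vectors with relevance labels, and candidate sentences are then sorted by predicted score. The toy logistic-regression scorer below is a hypothetical illustration only; the dissertation evaluates a range of learning-to-rank algorithms over its 129 descriptive features, not this model, and the feature values here are invented.

```python
import math

def train_pointwise_ranker(feature_vectors, labels, lr=0.1, epochs=200):
    """Fit a tiny logistic-regression scorer by gradient descent.

    Pointwise learning-to-rank sketch: each sentence is a feature
    vector (e.g. term overlap, sentence length) with a 0/1 relevance
    label. Returns learned weights and bias.
    """
    n = len(feature_vectors[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(feature_vectors, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of log loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def rank_by_score(sentences, feature_vectors, w, b):
    """Sort sentences by the learned score, highest first."""
    def score(x):
        return sum(wi * xi for wi, xi in zip(w, x)) + b
    pairs = sorted(zip(sentences, feature_vectors),
                   key=lambda sf: -score(sf[1]))
    return [s for s, _ in pairs]
```

Pairwise and listwise formulations, as well as fine-tuned language models, replace this scorer with richer objectives, but the retrieve-score-sort structure is the same.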




Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators: Savelka, Jaromir (jas438@pitt.edu; Pitt username: jas438)
ETD Committee:
Committee Chair: Ashley,
Committee Member: Milos,
Committee Member: He,
Committee Member: Litman,
Date: 20 August 2020
Date Type: Publication
Defense Date: 20 April 2020
Approval Date: 20 August 2020
Submission Date: 20 July 2020
Access Restriction: 2 year -- Restrict access to University of Pittsburgh for a period of 2 years.
Number of Pages: 239
Institution: University of Pittsburgh
Schools and Programs: School of Computing and Information > Intelligent Systems Program
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: Information retrieval; Statutory interpretation; Case-law analysis; Ranking; Rank aggregation; Similarity measures; Learning to rank; Language models; Deep learning
Date Deposited: 20 Aug 2020 18:40
Last Modified: 20 Aug 2022 05:15
