
Techniques to Enhance Abstractive Summarization Model Training for Low Resource Domains

Magooda, Ahmed (2022) Techniques to Enhance Abstractive Summarization Model Training for Low Resource Domains. Doctoral Dissertation, University of Pittsburgh. (Unpublished)

Full text: PDF (2MB)

Abstract

The amount of available information is growing rapidly, and it is challenging to digest even the information on a single topic. Summarization can reduce this information to a handful of paragraphs, helping human readers digest it more easily. Automatic summarization spans different techniques (abstractive, extractive, phrase extractive, etc.). Abstractive summarization in particular aims to mimic how humans summarize, condensing a large amount of text into a readable, comprehensive summary. Abstractive summarization has benefited from recent advances in machine learning and natural language processing. However, the majority of prior studies focus on data-rich domains, where large datasets are available.
On the other hand, very few studies focus on data-scarce domains. A typical practical issue encountered in such domains is model overfitting: training complex models on only a few samples can easily lead to overfitting. As a step towards remedying these shortcomings, this thesis aims to enhance abstractive summarization models in low-resource settings by tackling three challenges:
1. Can we adapt widely used data augmentation/synthesis techniques to abstractive summarization to remedy the data-scarcity issue?
2. How can we benefit from domain transfer or pretraining, and what strategies make them more efficient?
3. Can we extract additional information from the data and use it more effectively?

This thesis first proposes new data synthesis (augmentation) models: novel techniques for synthesizing new data for model training. We then introduce a variant of a recent data augmentation technique adapted for generative tasks. Additionally, we explore the utility of curriculum learning for improving both the pretraining and fine-tuning processes. Finally, to address the third challenge, we propose integrating the summarization model into a multitask learning setting, and we show that certain auxiliary tasks can consistently improve abstractive summarization in a low-resource setting. We then combine multitask learning and data augmentation to examine whether the combination is more helpful than either approach in isolation. We ultimately show that combining more than one technique can introduce some improvements over a single technique; however, using techniques in isolation leads to more consistent improvements overall.
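
To illustrate the multitask learning idea mentioned above, the following is a minimal sketch of a shared encoder feeding a summarization (generation) head and an auxiliary sequence-level head, trained with a weighted sum of losses. The architecture, auxiliary task, and loss weight are illustrative assumptions and are not taken from the dissertation.

import torch
import torch.nn as nn

class MultitaskSummarizer(nn.Module):
    # Shared encoder with a summarization head and a hypothetical auxiliary head.
    def __init__(self, vocab_size=32000, hidden=256, num_aux_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)  # shared encoder
        self.summ_head = nn.Linear(hidden, vocab_size)            # per-token generation logits
        self.aux_head = nn.Linear(hidden, num_aux_labels)         # sequence-level auxiliary logits

    def forward(self, src_ids):
        states, last = self.encoder(self.embed(src_ids))
        return self.summ_head(states), self.aux_head(last.squeeze(0))

model = MultitaskSummarizer()
ce = nn.CrossEntropyLoss()
src = torch.randint(0, 32000, (4, 50))   # toy batch of source token ids
tgt = torch.randint(0, 32000, (4, 50))   # toy summary targets (same length only for simplicity)
aux_y = torch.randint(0, 2, (4,))        # toy auxiliary labels

summ_logits, aux_logits = model(src)
loss_summ = ce(summ_logits.reshape(-1, summ_logits.size(-1)), tgt.reshape(-1))
loss_aux = ce(aux_logits, aux_y)
loss = loss_summ + 0.3 * loss_aux        # weighted multitask objective; 0.3 is an illustrative weight
loss.backward()

In practice the auxiliary weight and the choice of auxiliary task would be tuned per dataset; the point of the sketch is only that both losses update the shared encoder's parameters.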


Details

Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators/Authors: Magooda, Ahmed (aem132@pitt.edu; Pitt username: aem132)
ETD Committee:
Committee Chair: Litman, Diane (dlitman@pitt.edu; Pitt username: dlitman)
Committee Member: Kovashka, Adriana (kovashka@cs.pitt.edu; Pitt username: aik85)
Committee Member: Hauskrecht, Milos (milos@pitt.edu; Pitt username: milos)
Committee Member: He, Daqing (dah44@pitt.edu; Pitt username: dah44)
Date: 6 June 2022
Date Type: Publication
Defense Date: 4 March 2022
Approval Date: 6 June 2022
Submission Date: 4 April 2022
Access Restriction: No restriction; Release the ETD for access worldwide immediately.
Number of Pages: 157
Institution: University of Pittsburgh
Schools and Programs: Dietrich School of Arts and Sciences > Computer Science
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: NLP; ML; Text Summarization; Text Synthesis; Data Augmentation; Multitask Learning
Date Deposited: 06 Jun 2022 15:56
Last Modified: 06 Jun 2022 15:56
URI: http://d-scholarship.pitt.edu/id/eprint/42259
