
Efficient Learning Framework for Training Deep Learning Models with Limited Supervision

Ghasedi Dizaji, Kamran (2021) Efficient Learning Framework for Training Deep Learning Models with Limited Supervision. Doctoral Dissertation, University of Pittsburgh. (Unpublished)

Abstract

In recent years, deep learning has shown tremendous success in different applications; however, these models mostly need a large labeled dataset for training their parameters. In this work, we aim to explore the potential of efficient learning frameworks for training deep models on different problems in the case of limited supervision or noisy labels.

For the image clustering problem, we introduce a new deep convolutional autoencoder with an unsupervised learning framework. We employ relative entropy minimization as the clustering objective, regularized by the frequency of cluster assignments and a reconstruction loss.
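For illustration only, a minimal PyTorch-style sketch of such an objective could look like the following; the function name, the sharpened target distribution, and the weighting terms are assumptions, not the dissertation's exact formulation:

```python
import torch
import torch.nn.functional as F

def clustering_loss(q, x, x_recon, balance_weight=1.0, recon_weight=1.0):
    # q         : (N, K) soft cluster assignments (rows sum to 1)
    # x, x_recon: input batch and its autoencoder reconstruction
    eps = 1e-8

    # Sharpened auxiliary target distribution (an assumed, commonly used choice)
    p = q ** 2 / q.sum(dim=0, keepdim=True)
    p = p / p.sum(dim=1, keepdim=True)

    # Relative entropy (KL divergence) between targets and soft assignments
    kl = (p * (torch.log(p + eps) - torch.log(q + eps))).sum(dim=1).mean()

    # Frequency regularizer: push the average cluster usage toward uniform
    freq = q.mean(dim=0)
    uniform = torch.full_like(freq, 1.0 / q.size(1))
    balance = (freq * (torch.log(freq + eps) - torch.log(uniform))).sum()

    # Autoencoder reconstruction term
    recon = F.mse_loss(x_recon, x)

    return kl + balance_weight * balance + recon_weight * recon
```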

In the case of noisy labels obtained from crowdsourcing platforms, we propose a novel deep hybrid model for sentiment analysis of text data, such as tweets, based on noisy crowd labels. The proposed model consists of a crowdsourcing aggregation model and a deep text autoencoder. We combine these sub-models within a probabilistic framework rather than in a heuristic way, and derive an efficient optimization algorithm to jointly solve the corresponding problem.
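As a rough sketch of how such a joint objective might be wired together (the function name, the per-annotator confusion-matrix parameterization, and the weighting are assumptions, not the model described in the dissertation):

```python
import torch
import torch.nn.functional as F

def joint_crowd_loss(class_probs, crowd_labels, confusion, x, x_recon, recon_weight=1.0):
    # class_probs : (N, K) predicted distribution over true labels per item
    # crowd_labels: (N, A) integer label from each of A annotators, -1 if missing
    # confusion   : (A, K, K) per-annotator confusion matrices, rows sum to 1
    # x, x_recon  : text features (e.g., bag-of-words) and their reconstruction
    eps = 1e-8
    nll = class_probs.new_zeros(())
    count = 0
    for a in range(crowd_labels.size(1)):
        mask = crowd_labels[:, a] >= 0
        if mask.any():
            obs = crowd_labels[mask, a]  # observed noisy labels from annotator a
            # Likelihood of the observed label, marginalized over the predicted true class
            lik = (class_probs[mask] * confusion[a][:, obs].t()).sum(dim=1)
            nll = nll - torch.log(lik + eps).sum()
            count += int(mask.sum())
    nll = nll / max(count, 1)

    # Text autoencoder reconstruction term couples the two sub-models
    recon = F.mse_loss(x_recon, x)
    return nll + recon_weight * recon
```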

To improve the performance of unsupervised deep hash functions for image similarity search on large datasets, we adopt generative adversarial networks to propose a new deep image retrieval model, where the adversarial loss is employed as a data-dependent regularizer in our objective function.
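A minimal sketch of how an adversarial term can act as a regularizer alongside a hashing objective is shown below; the non-saturating GAN loss, the quantization penalty, and the function name are illustrative assumptions rather than the retrieval model's actual losses:

```python
import torch
import torch.nn.functional as F

def hashing_generator_loss(codes, fake_logits, quant_weight=0.1):
    # codes      : (N, B) continuous hash codes in [-1, 1] produced by the encoder
    # fake_logits: discriminator logits for images generated from `codes`

    # Adversarial term (non-saturating GAN loss): generated images should look real,
    # acting as a data-dependent regularizer on the learned codes
    adv = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))

    # Quantization penalty: push continuous codes toward binary values {-1, +1}
    quant = ((codes.abs() - 1.0) ** 2).mean()

    return adv + quant_weight * quant
```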

We also introduce a balanced self-paced learning algorithm for training a GAN-based image clustering model, where input samples are gradually included in training from easy to difficult, while the diversity of selected samples across all clusters is also considered.
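The per-cluster selection idea can be sketched as follows (a simplified illustration under assumed inputs; the function name and the easiness criterion based on per-sample loss are assumptions):

```python
import torch

def balanced_self_paced_select(losses, assignments, num_clusters, keep_frac):
    # losses      : (N,) per-sample training losses (lower = easier)
    # assignments : (N,) hard cluster assignments in {0, ..., num_clusters - 1}
    # keep_frac   : fraction of each cluster to include at the current pace
    selected = torch.zeros_like(losses, dtype=torch.bool)
    for k in range(num_clusters):
        idx = (assignments == k).nonzero(as_tuple=True)[0]
        if idx.numel() == 0:
            continue
        n_keep = max(1, int(keep_frac * idx.numel()))
        # Keep the easiest samples within every cluster so the selection stays diverse
        easiest = idx[losses[idx].argsort()[:n_keep]]
        selected[easiest] = True
    return selected
```

Gradually raising keep_frac over training corresponds to the easy-to-difficult schedule, while selecting within each cluster enforces the balance constraint.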

In addition, we explore discriminative approaches to unsupervised visual representation learning rather than generative algorithms, such as maximizing the mutual information between an input image and its representation, and using a contrastive loss to decrease the distance between the representations of original and augmented image data.
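As an illustrative sketch of the contrastive idea (an InfoNCE-style loss between two views; the function name and temperature value are assumptions, not necessarily the loss used in the dissertation):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.1):
    # z1, z2: (N, D) representations of the original and augmented views of the same images
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature              # (N, N) cosine-similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    # Matching pairs (i, i) are positives; other pairs in the batch act as negatives
    return F.cross_entropy(logits, targets)
```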


Details

Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators/Authors: Ghasedi Dizaji, Kamran (kag221@pitt.edu; Pitt username: kag221)
ETD Committee:
Committee Chair: Huang, Heng (heng.huang@pitt.edu)
Committee Member: Wei, Gao (weigao@pitt.edu)
Committee Member: Zhi-Hong, Mao (zhm4@pitt.edu)
Committee Member: Liang, Zhan (liang.zhan@pitt.edu)
Committee Member: Wei, Chen (wei.chen@pitt.edu)
Date: 13 June 2021
Date Type: Publication
Defense Date: 5 April 2021
Approval Date: 13 June 2021
Submission Date: 9 April 2021
Access Restriction: No restriction; Release the ETD for access worldwide immediately.
Number of Pages: 152
Institution: University of Pittsburgh
Schools and Programs: Swanson School of Engineering > Electrical and Computer Engineering
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: Unsupervised Learning, Weakly Supervised Learning, Self-supervised Learning
Date Deposited: 13 Jun 2021 18:45
Last Modified: 13 Jun 2021 18:45
URI: http://d-scholarship.pitt.edu/id/eprint/40589
