
Towards On-device Machine Learning: Efficient Inference, Independent Learning, and Collaborative Unsupervised Learning

Wu, Yawen (2023) Towards On-device Machine Learning: Efficient Inference, Independent Learning, and Collaborative Unsupervised Learning. Doctoral Dissertation, University of Pittsburgh. (Unpublished)



With the increasing ubiquity of edge devices, such as the Internet of Things (IoT) and mobile devices, deploying machine learning models, particularly deep neural networks (DNNs), on these devices to extract information from sensed data can enable the democratization of artificial intelligence (AI). On-device AI has the potential to support various tasks, including wildlife monitoring, augmented reality, and autonomous driving.

To enable on-device AI, both on-device inference and on-device training need to be achieved. On-device inference enables edge devices to make predictions from collected data. On-device training enables the devices to adapt to dynamic environments by learning from them and updating the model in situ. By applying on-device training to distributed devices, collaborative learning uses multiple devices to learn a shared model while keeping the data on personal devices for privacy.
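The abstract does not detail the collaborative-learning algorithm itself. As an illustrative sketch only, the "learn a shared model while keeping data on personal devices" idea can be shown with a federated-averaging style round on a toy least-squares objective; the function names, the objective, and all hyperparameters below are assumptions for illustration, not the dissertation's method:

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    # One gradient-descent step on a least-squares loss; a stand-in
    # for whatever local objective each device actually optimizes.
    x, y = data
    grad = 2.0 * x.T @ (x @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, device_datasets, lr=0.1):
    # Each device trains on its private shard; only the resulting
    # model parameters (never the raw data) are averaged.
    local_models = [local_update(global_weights.copy(), d, lr)
                    for d in device_datasets]
    return np.mean(local_models, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Four devices, each holding its own private data shard.
devices = []
for _ in range(4):
    x = rng.normal(size=(32, 2))
    devices.append((x, x @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, devices)
# w now approximates true_w without any device sharing its raw data.
```

The key property illustrated is that only model parameters cross device boundaries, which is the privacy motivation stated in the abstract.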

However, achieving on-device inference and training is challenging. First, edge devices have limited computation capability and memory, while DNNs are demanding in both computation and memory; DNNs therefore need to be effectively compressed before being deployed to edge devices. Second, there is a large gap between the high computation and energy demands of on-device training and the limited computing resources and battery capacity of edge devices. Third, during on-device training, each device can only collect a limited amount of data, yet model training needs a large amount of data to achieve high generalization performance.

To address these challenges, this dissertation proposes techniques for efficient inference and training on single devices, as well as collaborative learning across multiple devices. First, a model compression framework compresses multi-exit neural networks by pruning and quantization, reducing computation cost and model size while preserving accuracy. Second, an efficient training method reduces the computation cost of training by skipping unnecessary training data and pruning the gradient computation. To learn with as few labels as possible, a data selection approach is further proposed to select representative data for training without requiring labels. Third, a collaborative unsupervised learning framework enables distributed devices to learn a shared model from decentralized unlabeled data.
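The compression pipeline is only named here, not specified. The sketch below illustrates its two named ingredients, pruning and quantization, using magnitude pruning and uniform symmetric 8-bit quantization on a random weight matrix; the specific criteria and scheme are common defaults assumed for illustration, not necessarily those used in the dissertation:

```python
import numpy as np

def prune(weights, sparsity=0.5):
    # Magnitude pruning: zero out the smallest-magnitude fraction of weights.
    k = int(weights.size * sparsity)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize(weights, bits=8):
    # Uniform symmetric quantization; the int8 cast assumes bits <= 8.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(weights).max() / qmax
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Reconstruction error per weight is bounded by scale / 2.
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=(4, 4))
sparse_w = prune(w, sparsity=0.5)      # half the weights become zero
q, scale = quantize(sparse_w, bits=8)  # 8-bit integers plus one float scale
restored = dequantize(q, scale)
```

Together, sparsity shrinks compute (zeroed weights can be skipped) and quantization shrinks storage (one byte per weight instead of four), matching the "reduce computation cost and model size" claim above.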




Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators: Wu, Yawen (yawen.wu@pitt.edu, Pitt Username: yaw66, ORCID: 0000-0001-6840-267X)
ETD Committee:
Thesis Advisor: Hu, Jingtong (jthu@pitt.edu, Pitt Username: jthu, ORCID: 0000-0003-4029-4034)
Committee Member: Huang,
Committee Member: Mao,
Committee Member: Lee, In
Committee Member: Tang,
Committee Member: Shi,
Committee Member: Fang,
Date: 13 June 2023
Date Type: Publication
Defense Date: 17 February 2023
Approval Date: 13 June 2023
Submission Date: 19 January 2023
Access Restriction: No restriction; Release the ETD for access worldwide immediately.
Number of Pages: 159
Institution: University of Pittsburgh
Schools and Programs: Swanson School of Engineering > Computer Engineering
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: Artificial intelligence, machine learning, on-device machine learning, on-device inference, on-device training, federated learning, self-supervised learning
Related URLs:
Date Deposited: 13 Jun 2023 14:10
Last Modified: 13 Jun 2023 14:10

