
Visual-Inertial Sensor Fusion Models and Algorithms for Context-Aware Indoor Navigation

Gharani, Pedram (2019) Visual-Inertial Sensor Fusion Models and Algorithms for Context-Aware Indoor Navigation. Doctoral Dissertation, University of Pittsburgh. (Unpublished)


Abstract

Positioning in navigation systems is predominantly performed by Global Navigation Satellite Systems (GNSSs). However, while GNSS-enabled devices have become commonplace for outdoor navigation, their use for indoor navigation is hindered by GNSS signal degradation or blockage. Consequently, the development of alternative positioning approaches and techniques for navigation systems is an ongoing research topic. In this dissertation, I present a new approach that addresses three major navigational problems: indoor positioning, obstacle detection, and keyframe detection. The proposed approach utilizes the inertial and visual sensors available on smartphones and focuses on developing: a framework for monocular visual-inertial odometry (VIO) that positions a human or object using sensor fusion and deep learning in tandem; an unsupervised algorithm that detects obstacles from a sequence of visual data; and a supervised method for context-aware keyframe detection.
The underlying technique for monocular VIO is a recurrent convolutional neural network (RCNN) that computes six-degree-of-freedom (6DoF) pose in an end-to-end fashion, coupled with an extended Kalman filter module that fine-tunes the scale parameter based on inertial observations and manages errors. I compare the results of my featureless technique with those of conventional feature-based VIO techniques and with manually scaled results. The comparison shows that while the framework is more effective than other featureless methods and improves accuracy, feature-based methods still outperform the proposed approach.
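The dissertation details the actual network architecture and filter design; purely as a hedged illustration of the scale-refinement idea, the Python sketch below applies a one-dimensional Kalman update that corrects a monocular scale estimate against inertially derived translation magnitudes. The class name, noise parameters, and measurement model are assumptions made for this sketch, not the dissertation's implementation.

```python
import numpy as np

class ScaleEKF:
    """Minimal 1-D Kalman filter that refines the metric scale of
    monocular visual odometry using inertial translation estimates.
    Hypothetical illustration only, not the dissertation's design."""

    def __init__(self, scale0=1.0, var0=1.0, process_var=1e-4, meas_var=0.05):
        self.scale = scale0              # current scale estimate
        self.var = var0                  # estimate variance
        self.process_var = process_var   # random-walk noise on scale
        self.meas_var = meas_var         # inertial measurement noise

    def update(self, visual_dist, inertial_dist):
        """visual_dist: unscaled translation magnitude from the RCNN pose.
        inertial_dist: metric translation magnitude integrated from the IMU."""
        # Predict: scale is modeled as a slow random walk.
        self.var += self.process_var
        # Measurement model: inertial_dist = scale * visual_dist + noise.
        h = visual_dist
        innovation = inertial_dist - self.scale * h
        s = h * self.var * h + self.meas_var   # innovation variance
        k = self.var * h / s                   # Kalman gain
        self.scale += k * innovation
        self.var *= (1.0 - k * h)
        return self.scale

# Example: the estimate drifts toward ~1.5 when the IMU consistently
# reports translations 1.5x larger than the unscaled visual estimates.
ekf = ScaleEKF()
for v, i in [(0.10, 0.15), (0.20, 0.31), (0.12, 0.18)]:
    print(round(ekf.update(v, i), 3))
```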
The approach for obstacle detection is based on processing two consecutive images. Experiments comparing my approach with two other widely used algorithms show that my algorithm performs better, achieving 82% precision compared with 69%. To determine an appropriate rate of frame extraction from the video stream, I analyzed the camera's movement patterns and inferred the user's context, generating a model that associates movement anomalies with a suitable frame-extraction rate. The output of this model determines the rate of keyframe extraction in visual odometry (VO). I defined and computed the effective frames for VO and applied this approach to context-aware keyframe detection. The results show that using inertial data to infer suitable frames reduces the number of frames required.
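For the two-frame obstacle-detection step above, a minimal sketch follows, assuming OpenCV's dense Farneback optical flow and a grid-based outlier test on flow magnitude. This is a generic stand-in to show the consecutive-image idea, not the dissertation's unsupervised algorithm.

```python
import cv2
import numpy as np

def detect_obstacle_regions(prev_gray, curr_gray, grid=8, z_thresh=2.0):
    """Flag grid cells whose optical-flow magnitude is anomalously large
    relative to the rest of the frame. Illustrative two-frame sketch;
    the dissertation's algorithm may differ substantially."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag = np.linalg.norm(flow, axis=2)        # per-pixel flow magnitude

    h, w = mag.shape
    ch, cw = h // grid, w // grid
    cells = mag[:ch * grid, :cw * grid].reshape(grid, ch, grid, cw)
    cell_mag = cells.mean(axis=(1, 3))        # mean magnitude per cell

    # Robust z-score against the median: large positive outliers suggest
    # close objects, which expand faster in the image than the background
    # under forward camera motion.
    med = np.median(cell_mag)
    mad = np.median(np.abs(cell_mag - med)) + 1e-6
    return (cell_mag - med) / mad > z_thresh  # boolean grid of obstacle cells
```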

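Similarly, the movement-anomaly-to-keyframe-rate model can be pictured as a function from a window of inertial readings to an extraction rate. The sketch below uses a hypothetical linear mapping with assumed constants; the dissertation's learned, context-aware model is more involved.

```python
import numpy as np

def keyframe_rate(accel_window, base_rate=2.0, max_rate=15.0, k=4.0):
    """Map a window of accelerometer magnitudes (m/s^2) to a
    frame-extraction rate in Hz: calm motion -> few keyframes,
    anomalous motion -> more. All constants are assumptions."""
    mag = np.asarray(accel_window, dtype=float)
    # Movement 'anomaly': dispersion of acceleration within the window.
    anomaly = mag.std()
    rate = base_rate + k * anomaly
    return float(min(max_rate, rate))

# Example: a quiet window vs. a turning/shaky window.
print(keyframe_rate([9.80, 9.81, 9.79, 9.80]))   # near base_rate
print(keyframe_rate([9.2, 11.5, 8.4, 12.1]))     # higher extraction rate
```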


Details

Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators/Authors:
Gharani, Pedram | Email: peg25@pitt.edu | Pitt Username: peg25 | ORCID: 0000-0003-3383-8910
ETD Committee:
Committee Chair: Karimi, Hassan A. | hkarimi@pitt.edu | Pitt Username: hkarimi
Committee Member: Lewis, Michael | cmlewis@pitt.edu | Pitt Username: cmlewis
Committee Member: Munro, Paul | pwm@pitt.edu | Pitt Username: pwm
Committee Member: Sejdic, Ervin | esejdic@pitt.edu | Pitt Username: esejdic
Date: 30 August 2019
Date Type: Publication
Defense Date: 31 July 2019
Approval Date: 30 August 2019
Submission Date: 25 August 2019
Access Restriction: No restriction; Release the ETD for access worldwide immediately.
Number of Pages: 128
Institution: University of Pittsburgh
Schools and Programs: School of Computing and Information > Information Science
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: Visual-inertial odometry, Recurrent convolutional neural network (RCNN), Indoor positioning, Obstacle detection, Keyframe detection
Date Deposited: 30 Aug 2019 15:45
Last Modified: 30 Aug 2019 15:45
URI: http://d-scholarship.pitt.edu/id/eprint/37406
