
The Potential for 3D Depth Cameras to Automatically Evaluate Independent Wheelchair Transfer Techniques

Lin, Wei (2021) The Potential for 3D Depth Cameras to Automatically Evaluate Independent Wheelchair Transfer Techniques. Doctoral Dissertation, University of Pittsburgh. (Unpublished)



Wheelchair users rely heavily on their upper extremities to complete common but essential activities of daily living such as getting in and out of bed and transferring to a toilet, a shower, and a car seat. Using good transfer mechanics to avoid pain and injury is important for wheelchair users when performing transfers. The Transfer Assessment Instrument (TAI) is a tool developed to evaluate transfer technique and help clinicians and users recognize deficits in the technique. However, therapists face several limitations when using the TAI as an assessment tool, and these barriers decrease the usability of the TAI in clinical settings. An artificial intelligence system that can automatically score the TAI may potentially reduce the barriers associated with the TAI's usability. We aim to develop a system that can watch a patient transfer and automate TAI scoring using marker-less motion capture technology and machine learning algorithms that classify the motions into proper and improper techniques.
Machine learning algorithms were developed and trained using data from 91 full-time wheelchair users to predict proper (low-risk) and improper (high-risk) wheelchair transfer techniques in accordance with eleven TAI item scores. The transfer data were split into a training set (80%) and a testing set (20%). The training set was used for classifier selection and model tuning; the test set was excluded from all training processes. Three k-nearest neighbors (KNN) and eight random forest classifiers were selected for the 11 TAI items based on model performance. The areas under the receiver operating characteristic curves (AUCs) are .83 to .99 for the training set and .79 to .94 for the test set. To avoid false positives (i.e., a participant performs an improper technique but the transfer is labeled as a proper transfer by the classifier), we tuned the models to achieve high precision. The precisions of the models are .87 to .96, and the recalls are .61 to .93.
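The precision-over-recall tuning described above can be sketched as a decision-threshold search over a classifier's output scores. This is an illustrative outline, not the dissertation's code: the scores, labels, and the minimum-precision target are hypothetical stand-ins for the KNN/random-forest probability outputs and the .87 precision floor reported in the study.

```python
# Hypothetical sketch: tune a binary transfer-technique classifier's
# decision threshold so precision meets a target, keeping recall as
# high as possible at that precision. Label 1 = "proper transfer".

def precision_recall(scores, labels, threshold):
    """Precision and recall treating scores >= threshold as positive."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def tune_for_precision(scores, labels, min_precision=0.87):
    """Return the lowest threshold whose precision meets the target.

    Scanning thresholds from low to high and stopping at the first one
    that satisfies the precision floor sacrifices the least recall.
    """
    for t in sorted(set(scores)):
        p, r = precision_recall(scores, labels, t)
        if p >= min_precision:
            return t, p, r
    return None
```

Raising the threshold trades recall for precision, which matches the study's stated goal of keeping improper transfers from being labeled proper.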
For a system to automate the scoring of the TAI, it must also be able to distinguish the "setup phase" and "lift phase" of the transfer. On the TAI 4.0, items 1 to 6 are in the wheelchair setup skill group and items 7 to 15 are in the body setup and flight/landing skill groups. To extract the features for each item, the motion data during the transfer needs to be separated into a setup phase and a lift phase. We applied and compared a biomechanical-variable-based threshold method and an ML algorithm to automatically identify the time frames of the transfer phases. For the threshold method, the peaks observed in the linear displacement and velocity of one joint center tracked by the Kinect, SPINE_BASE, were used for phase delineation. For the ML method, we trained a KNN classifier using 35 features from 81 participants' transfer data recorded by the Kinect. Using the KNN model, each time frame of the transfer was labeled as belonging to either the "setup" or the "lift" phase. After further applying a filtering algorithm, the method was used to identify the start and end timepoints of the transfer phases. We found that the ML method had less error in identifying the phase times, while the threshold method required less computation time. Although the threshold errors were larger, this method had higher accuracy for predicting the TAI scores for items 10 to 15 (lift phase items). The ML method had higher accuracy for predicting the TAI scores for items 1, 2, and 7 (wheelchair and body setup items). For items 8 and 9, the two methods performed equally. The ML method tended to undershoot the end times of the lift phase, so it is possible that tuning the algorithms to include more of the lift phase data could increase the accuracies of the TAI item scores that deal with lift phase biomechanics. This will continue to be an area of future work.
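A threshold-style phase delineation of the kind described above can be sketched as follows. This is a simplified illustration, not the dissertation's implementation: it locates the peak in a joint-center velocity trace and marks the lift phase as the window where velocity stays above a fraction of that peak, with everything earlier treated as setup. The displacement values, units, and the 20% cutoff are assumptions for demonstration.

```python
# Hypothetical sketch of velocity-peak phase delineation on a 1-D
# joint-center displacement trace (e.g., a SPINE_BASE coordinate),
# sampled at a uniform frame rate. Values are illustrative integers.

def delineate_phases(displacement, frac=0.2):
    """Return (lift_start, lift_end) frame indices.

    The lift phase is the contiguous run of frames around the velocity
    peak where velocity exceeds `frac` of the peak; frames before
    lift_start belong to the setup phase.
    """
    # Frame-to-frame speed via a simple finite difference.
    velocity = [abs(b - a) for a, b in zip(displacement, displacement[1:])]
    peak_i = max(range(len(velocity)), key=velocity.__getitem__)
    cutoff = frac * velocity[peak_i]
    start = peak_i
    while start > 0 and velocity[start - 1] > cutoff:
        start -= 1
    end = peak_i
    while end < len(velocity) - 1 and velocity[end + 1] > cutoff:
        end += 1
    return start, end
```

A frame-wise KNN labeler, by contrast, would classify each frame independently from multiple features and then smooth the labels, which is why it costs more computation but can track phase boundaries more accurately.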
Due to the discontinuation of the Kinect v2, we aimed to find another 3D depth sensor that could track full-body motion for future research. The Intel® RealSense has superior technical properties relative to the Kinect v2 and has shown excellent performance for tracking facial and hand motions in previous studies. Although Intel did not provide a full-body joint tracking algorithm for the sensor at the time of our study, a third-party one (Nuitrack™) was available that can be used with a variety of 3D sensor models, including the RealSense. This solution enabled us to create the same biomechanical features with the RealSense as we had created with the Kinect to quantify transfer technique. To further understand the potential for the RealSense to serve as a viable substitute sensor for capturing wheelchair transfer biomechanics, we compared the measurement properties of the two sensors. We assessed intra-rater reliability for each sensor, and evaluated the inter-rater reliability and agreement between the Kinect and the RealSense with 30 wheelchair users who performed multiple independent transfers (150 trials total). The study found that the Kinect had higher intra-rater reliability than the RealSense for measuring four key kinematic variables related to wheelchair transfer technique. For the agreement analysis, more than 95% of the data points fell within ±1.96 standard deviations of the mean differences. However, the inter-rater reliability between the two sensors was poor. The low reliability of the RealSense may be due to the third-party skeletal tracking algorithm being less robust, both for sitting postures and in general, than the more extensively tested and developed Kinect SDK. Intel recently introduced a full-body skeletal tracking model for its sensors; it is possible that this version of the RealSense SDK may help increase the reliability for future applications.
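The agreement criterion above (more than 95% of points within ±1.96 standard deviations of the mean difference) is a Bland-Altman style limits-of-agreement check, which can be sketched as below. The paired measurements are fabricated placeholders, not the study's sensor data.

```python
# Hypothetical sketch of a Bland-Altman limits-of-agreement check for
# paired measurements of the same trial from two sensors (e.g., Kinect
# vs. RealSense). The sample values are made up for illustration.

from statistics import mean, stdev

def limits_of_agreement(sensor_a, sensor_b):
    """Return (bias, (lower, upper), fraction_inside).

    bias is the mean paired difference; lower/upper are the
    bias +/- 1.96 * SD limits; fraction_inside is the share of paired
    differences falling within those limits.
    """
    diffs = [x - y for x, y in zip(sensor_a, sensor_b)]
    bias = mean(diffs)
    spread = stdev(diffs)  # sample standard deviation of the differences
    lower, upper = bias - 1.96 * spread, bias + 1.96 * spread
    inside = sum(1 for d in diffs if lower <= d <= upper) / len(diffs)
    return bias, (lower, upper), inside
```

Note that limits of agreement measure systematic and random disagreement between devices, which is why two sensors can show acceptable agreement by this criterion while still having poor inter-rater reliability.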

Packaging these outcomes together into a user-friendly system could aid therapists and patients in identifying harmful motions and learning proper, evidence-based transfer practices. After using a 3D depth camera to watch a wheelchair transfer, the system would be able to evaluate the TAI more reliably than a therapist rater would and generate objective feedback for the users. Therefore, the results of the current study could increase the usability and feasibility of the TAI in a clinical setting.




Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators: Lin, Wei (liw49@pitt.edu, Pitt username: liw49, ORCID: 0000-0001-5136-2798)
ETD Committee:
Committee Chair: Koontz, Alicia (akoontz@pitt.edu, Pitt username: akoontz, ORCID: 0000-0002-2222-3673)
Committee Member: Ding, Dan (dad5@pitt.edu, Pitt username: dad5)
Committee Member: Zhou, Leming (Leming.Zhou@pitt.edu, Pitt username: lzhou1, ORCID: 0000-0003-4398-0267)
Committee Member: Slavens, Brooke (slavens@uwm.edu, ORCID: 0000-0002-5095-182X)
Committee Member: Chung, Cheng-Shiu (chc139@pitt.edu, Pitt username: chc139, ORCID: 0000-0002-9997-5558)
Date: 5 March 2021
Defense Date: 15 March 2021
Approval Date: 11 June 2021
Submission Date: 1 April 2021
Access Restriction: 2 year -- Restrict access to University of Pittsburgh for a period of 2 years.
Number of Pages: 174
Institution: University of Pittsburgh
Schools and Programs: School of Health and Rehabilitation Sciences > Rehabilitation Science
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: Kinect, RealSense, Wheelchair Biomechanics, Skeletal Tracking, Machine Learning, Activities of Daily Living, Motion capture, phase of time-series data
Date Deposited: 11 Jun 2021 21:31
Last Modified: 11 Jun 2023 05:15
