

Irwin, Regina (Jeannie) Yuhaniak (2010) SPEECH TO CHART: SPEECH RECOGNITION AND NATURAL LANGUAGE PROCESSING FOR DENTAL CHARTING. Doctoral Dissertation, University of Pittsburgh. (Unpublished)



Typically, when using practice management systems (PMSs), dentists perform data entry by using an assistant as a transcriptionist, which prevents dentists from interacting directly with the PMS. Speech recognition interfaces can solve this problem, but the existing speech interfaces of PMSs are cumbersome and poorly designed. In dentistry, there is a desire and need for a usable natural language interface for clinical data entry.

Objectives: (1) evaluate the efficiency, effectiveness, and user satisfaction of the speech interfaces of four dental PMSs; (2) develop and evaluate a speech-to-chart prototype for charting naturally spoken dental exams.

Methods: We evaluated the speech interfaces of four leading PMSs. We manually reviewed the capabilities of each system and then had 18 dental students chart 18 findings via speech in each system, measuring time, errors, and user satisfaction. Next, we developed and evaluated a speech-to-chart prototype with four components: a speech recognizer; a post-processor for error correction; a natural language processing application (ONYX); and a graphical chart generator. We evaluated the accuracy of the speech recognizer and the post-processor, then performed a summative evaluation of the entire system: the prototype charted 12 hard tissue exams, which we compared to reference standard exams charted by two dentists.

Results: Of the four systems, only two allowed both hard tissue and periodontal charting via speech. All interfaces required specific commands, directly comparable to using a mouse. The average time to chart the nine hard tissue findings was 2:48 (min:sec) and the nine periodontal findings 2:06, with an average of 7.5 errors per exam. We created a speech-to-chart prototype that supports natural dictation with no structured commands. On manually transcribed exams, the system performed with an average accuracy of 80%. The average time to chart a single hard tissue finding with the prototype was 7.3 seconds. An improved discourse processor would greatly enhance the prototype's accuracy.

Conclusions: The speech interfaces of existing PMSs are cumbersome, require specific speech commands, and produce several errors per exam. We successfully created a speech-to-chart prototype that charts hard tissue findings from naturally spoken dental exams.
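The prototype's architecture (speech recognizer, post-processor, NLP application, chart generator) can be sketched as a minimal pipeline. Everything below is illustrative only: the correction dictionary, the regex extractor standing in for the ONYX NLP component, and the output record structure are hypothetical, not taken from the dissertation.

```python
import re

# Hypothetical post-processor: dictionary-based correction of common
# recognizer confusions (these example entries are illustrative).
CORRECTIONS = {
    "too ": "tooth ",
    "a collusion": "occlusion",
}

def post_process(transcript: str) -> str:
    """Apply dictionary-based error correction to a raw transcript."""
    for wrong, right in CORRECTIONS.items():
        transcript = transcript.replace(wrong, right)
    return transcript

# Stand-in for the NLP component (the dissertation uses ONYX; this
# regex extractor only plays the same role in the sketch).
FINDING_RE = re.compile(
    r"tooth (?P<tooth>\d{1,2}) (?:has |with )?(?:an |a )?"
    r"(?P<finding>[a-z ]+?)"
    r"(?: on the (?P<surface>\w+))?(?:[.,]|$)"
)

def extract_findings(text: str):
    """Map a dictated sentence to structured (tooth, finding, surface) records."""
    return [
        {
            "tooth": int(m.group("tooth")),
            "finding": m.group("finding").strip(),
            "surface": m.group("surface"),
        }
        for m in FINDING_RE.finditer(text)
    ]

def speech_to_chart(transcript: str):
    """Full pipeline: error-correct the transcript, then extract chart entries.

    A real system would pass the structured findings on to a graphical
    chart generator; here they are simply returned.
    """
    return extract_findings(post_process(transcript))

if __name__ == "__main__":
    dictation = "too 3 has an amalgam restoration on the occlusal."
    print(speech_to_chart(dictation))
```

The design point this illustrates is the separation the dissertation describes: recognition errors are repaired before language understanding runs, so the NLP stage can assume cleaner input, and the chart generator consumes only structured findings rather than raw speech.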




Item Type: University of Pittsburgh ETD
Status: Unpublished
Creator: Irwin, Regina (Jeannie) Yuhaniak (rey3@pitt.edu, Pitt username: REY3)
ETD Committee:
Committee Chair: Chapman, Wendy Webber (WEC6@PITT.EDU, Pitt username: WEC6)
Committee Member: Chapman, Brian (CHAPBE@PITT.EDU, Pitt username: CHAPBE)
Committee Member: Spallek, Heiko (hspallek@pitt.edu, Pitt username: HSPALLEK)
Committee Member: Haug,
Committee Member: Schleyer, Titus (titus@pitt.edu, Pitt username: TITUS)
Date: 11 January 2010
Date Type: Completion
Defense Date: 28 September 2009
Approval Date: 11 January 2010
Submission Date: 1 December 2009
Access Restriction: 5 year -- Restrict access to University of Pittsburgh for a period of 5 years.
Institution: University of Pittsburgh
Schools and Programs: School of Medicine > Biomedical Informatics
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: Dental Informatics; Medical Informatics Applications; Natural Language Processing; Speech Recognition Software; User-Computer Interface
Other ID: etd-12012009-163519
Date Deposited: 10 Nov 2011 20:07
Last Modified: 15 Nov 2016 13:52

