
The noisy channel model and sentence processing in individuals with simulated broadened auditory filters

Nunn, Kristen (2018) The noisy channel model and sentence processing in individuals with simulated broadened auditory filters. Master's Thesis, University of Pittsburgh. (Unpublished)


Abstract

Noise is abundant in everyday communication. This high prevalence of noise means we need a language processing mechanism that can recover intended meanings from noisy input. Research suggests that we do this by maintaining uncertainty about linguistic input and interpreting sentences in ways that are unfaithful to the literal syntax (Gibson, Bergen, & Piantadosi, 2013; Levy, 2011; Levy, Bicknell, Slattery, & Rayner, 2009). People with communication disorders such as aphasia or hearing loss experience an even higher prevalence of noise, and research has shown that both groups exhibit higher degrees of uncertainty than controls (Gibson, Sandberg, Fedorenko, Bergen, & Kiran, 2015; Nunn, 2016; Warren, Dickey, & Liburd, 2017). The present study examines how different aspects of cochlear hearing loss influence certainty about linguistic information. Forty individuals completed the Gibson Task, with sound files simulating broadened auditory filters (BAF), while their eye movements were tracked. The Gibson Task is a forced-choice picture task that requires participants to select the image that best represents a sentence they heard: one illustration depicts the literal syntax, and the other depicts an alternate interpretation obtainable through edits to the literal syntax. Sentences of different structures (double object/prepositional object, active/passive) require different types and amounts of edits to switch between interpretations, and sentences of different plausibility are more or less likely to be interpreted literally. Using data collected by Nunn (2016), comparisons were made among groups with simulated BAF, simulated reduced audibility of high-frequency information (low-pass filtering; LPF), and no hearing loss (NoHL). The BAF and LPF groups were less accurate and showed higher degrees of uncertainty than the NoHL group, and the BAF group was more faithful to the literal syntax than the LPF group in the double object/prepositional object condition.
When people with aphasia (PWA) were compared to the LPF and BAF groups, the BAF group outperformed PWA on all structures, but the LPF group outperformed PWA only on actives/passives. Finally, groups with high accuracy scores sometimes showed covert signs of uncertainty in the eye-tracking data. This variability between groups implies a complex relationship between noise, syntactic structure, and fidelity to a perceived linguistic signal.
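The noisy-channel account described above can be sketched as a simple Bayesian computation: a comprehender weighs the prior plausibility of each candidate intended sentence against the probability that noise corrupted it into the perceived string, so an implausible literal parse can lose out to a plausible near-neighbor reachable by a small edit. The sketch below is illustrative only; the sentence labels and probability values are invented placeholders, not figures from the thesis or from Gibson et al. (2013).

```python
# Toy noisy-channel interpretation: P(intended | perceived) is proportional to
# P(intended) * P(perceived | intended). All numbers below are made-up
# placeholders for illustration, not estimates from the study.

def noisy_channel_posterior(priors, likelihoods):
    """Normalize prior * likelihood over the candidate intended sentences."""
    unnorm = {s: priors[s] * likelihoods[s] for s in priors}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# An implausible perceived sentence has two candidate sources: the literal
# parse, or a plausible alternative reachable by one edit (e.g., a deleted "to").
priors = {"literal (implausible)": 0.05, "edited (plausible)": 0.95}

# Faithful transmission is assumed more likely than a corrupting edit,
# so the literal candidate has the higher likelihood of producing the input.
likelihoods = {"literal (implausible)": 0.9, "edited (plausible)": 0.2}

post = noisy_channel_posterior(priors, likelihoods)
```

Under these toy numbers the plausible edited reading wins despite its lower likelihood, mirroring the abstract's point that comprehenders sometimes interpret sentences unfaithfully to the literal syntax when the literal reading is implausible.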



Details

Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators/Authors:
    Name: Nunn, Kristen
    Email: kmn44@pitt.edu
    Pitt Username: kmn44
    ORCID: 0000-0002-9562-7185
ETD Committee:
    Committee Chair: Dickey, Michael Walsh (mdickey@pitt.edu)
    Committee Member: Brown, Christopher (cbrown1@pitt.edu)
    Committee Member: Warren, Tessa (tessa@pitt.edu)
Date: 25 May 2018
Date Type: Publication
Defense Date: 2 March 2018
Approval Date: 25 May 2018
Submission Date: 29 March 2018
Access Restriction: No restriction; Release the ETD for access worldwide immediately.
Number of Pages: 137
Institution: University of Pittsburgh
Schools and Programs: School of Health and Rehabilitation Sciences > Communication Science and Disorders
Degree: MS - Master of Science
Thesis Type: Master's Thesis
Refereed: Yes
Uncontrolled Keywords: uncertainty, noisy channel model, rational sentence processing, low pass filter, vocoded, simulated hearing loss, eye-tracking
Date Deposited: 25 May 2018 13:22
Last Modified: 25 May 2018 13:22
URI: http://d-scholarship.pitt.edu/id/eprint/33985


