We believe that the auditory system, like the visual system, may be sensitive to abrupt stimulus changes, and that the transient component of speech may be particularly critical to speech perception. If this component can be identified and selectively amplified, improved speech perception in background noise may be possible.

This project describes a method to decompose speech into tonal, transient, and residual components. The modified discrete cosine transform (MDCT) and the wavelet transform were used to capture the tonal and transient features in speech, respectively. The tonal and transient components were identified from a small number of significant MDCT and wavelet coefficients. In previous studies, all of the MDCT and wavelet coefficients were assumed to be independent, and the significant coefficients were identified by thresholding. However, an appropriate threshold is not known, and the MDCT and wavelet coefficients exhibit statistical dependencies, described by the clustering and persistence properties.

In this work, the hidden Markov chain (HMC) model and the hidden Markov tree (HMT) model were applied to describe the clustering and persistence properties among the MDCT coefficients and among the wavelet coefficients. The MDCT coefficients at each frequency index, and the wavelet coefficients at each scale of each tree, were modeled as a two-state mixture of two univariate Gaussian distributions. The initial parameters of the Gaussian mixtures were estimated with the greedy EM algorithm. By using the Viterbi and MAP algorithms to find the optimal hidden states, the significant MDCT and wavelet coefficients were determined without relying on a threshold.

The transient component isolated by our method was selectively amplified and recombined with the original speech to generate enhanced speech, with the energy adjusted to equal that of the original speech. The intelligibility of the original and enhanced speech was evaluated in eleven human subjects using the modified rhyme protocol. Word recognition rates show that the enhanced speech improves intelligibility at low SNR levels (by 8% at -15 dB, 14% at -20 dB, and 18% at -25 dB).
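
The following is a minimal sketch (not the thesis code) of the HMC idea applied to one frequency index: the MDCT coefficients across frames are modeled as a two-state mixture of univariate Gaussians (state 0 = insignificant, small variance; state 1 = significant, large variance), and Viterbi decoding over the frame sequence labels each coefficient without a hand-picked threshold. The transition matrix `A` and the use of sklearn's EM for the mixture fit are illustrative assumptions; the thesis initializes the mixture with a greedy EM algorithm.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def significant_mask(coeffs, A=np.array([[0.9, 0.1], [0.2, 0.8]])):
    """Return a boolean mask marking the 'significant' MDCT coefficients
    at one frequency index, decoded with a two-state Gaussian HMC."""
    x = np.asarray(coeffs, dtype=float).reshape(-1, 1)

    # Two-component univariate Gaussian mixture for the emission densities.
    gmm = GaussianMixture(n_components=2, covariance_type="diag").fit(x)
    means = gmm.means_.ravel()
    stds = np.sqrt(gmm.covariances_.ravel())
    order = np.argsort(stds)          # state 1 = larger variance = significant
    means, stds = means[order], stds[order]

    # Log emission probabilities, shape (n_frames, 2).
    logB = norm.logpdf(x, loc=means, scale=stds)
    logA = np.log(A)

    # Viterbi recursion in the log domain.
    n = len(x)
    delta = np.zeros((n, 2))
    psi = np.zeros((n, 2), dtype=int)
    delta[0] = np.log(gmm.weights_[order]) + logB[0]
    for t in range(1, n):
        trans = delta[t - 1][:, None] + logA      # (prev_state, state)
        psi[t] = trans.argmax(axis=0)
        delta[t] = trans.max(axis=0) + logB[t]

    # Backtrack the most likely state sequence.
    states = np.zeros(n, dtype=int)
    states[-1] = delta[-1].argmax()
    for t in range(n - 2, -1, -1):
        states[t] = psi[t + 1, states[t + 1]]
    return states == 1
```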
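
A similarly hedged sketch, assuming PyWavelets, of how the transient component could be reconstructed: keep only the wavelet coefficients flagged as significant (for example by the HMT/Viterbi-style labeling above) and zero the rest before the inverse transform. The wavelet family and decomposition depth are illustrative assumptions, not the settings used in the thesis.

```python
import numpy as np
import pywt

def transient_from_wavelet(speech, masks, wavelet="db8", level=5):
    """masks[i] is a boolean array for the i-th coefficient band of wavedec."""
    coeffs = pywt.wavedec(speech, wavelet, level=level)
    kept = [c * m for c, m in zip(coeffs, masks)]
    transient = pywt.waverec(kept, wavelet)
    return transient[: len(speech)]   # waverec may pad by one sample
```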
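
Finally, a minimal sketch of the enhancement step described in the last paragraph, assuming the transient component has already been isolated: scale it by a gain factor, add it back to the original speech, and rescale the sum so its total energy equals that of the original signal. The gain value here is an illustrative assumption, not the value used in the listening tests.

```python
import numpy as np

def enhance(original, transient, gain=6.0):
    """Amplify the transient component and energy-normalize the result."""
    enhanced = original + gain * transient
    # Match the enhanced-speech energy to the original-speech energy.
    scale = np.sqrt(np.sum(original ** 2) / np.sum(enhanced ** 2))
    return scale * enhanced
```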