Liu, Jingyu (2007) Comparing Multi-dimensional and Uni-dimensional Computer Adaptive Strategies in Psychological and Health Assessment. Doctoral Dissertation, University of Pittsburgh. (Unpublished)
Abstract
This study compared the efficiency of multi-dimensional CAT with that of uni-dimensional CAT based on the multi-dimensional graded response model, and provided information about the optimal item pool size. Item selection and ability estimation methods based on multi-dimensional graded response models were developed, and two studies were conducted: one based on simulated data, the other on real data. Five design factors were manipulated: correlation between dimensions, item pool size, test length, ability level, and number of estimated dimensions. A modest effect of the correlation between dimensions on the outcome measures was observed, although the effect was found primarily for correlations of 0 versus 0.4. Based on a comparison of the zero-correlation condition with conditions where the correlation was greater than zero, the multi-dimensional CAT was more efficient than the uni-dimensional CAT. As expected, ability level had an impact on the outcome measures: the multi-dimensional CAT provided more accurate estimates for examinees with average true ability values than for those with true ability values in the extreme range. It over-estimated ability for examinees with negative true ability values and under-estimated ability for examinees with positive true ability values, which is consistent with Bayesian estimation methods that "shrink" estimates toward the mean of the prior distribution. As the number of estimated dimensions increased, more accurate estimates were achieved, supporting the idea that ability on one dimension can be used to augment the information available to estimate ability on another dimension. Finally, larger item pools and longer tests yielded more accurate and reliable ability estimates, although the greater difference in efficiency was realized when comparing shorter tests and smaller item pools.

Information on the optimal item pool size was provided by plotting the outcome measures against item pool size. The plots indicated that, for short tests, the optimal item pool size was 20 items; for longer tests, it was 50 items. However, if item exposure control or content balancing were an issue, a larger item pool would be needed to achieve the same efficiency in ability estimates.
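The "shrinkage" the abstract attributes to Bayesian estimation is easy to see in a minimal sketch. The Python fragment below is illustrative only: it is unidimensional, the item parameters are made up, and the grid-based EAP routine is a generic implementation, not the dissertation's item selection or estimation code. It simulates graded responses under a graded response model and shows that EAP estimates under a N(0,1) prior pull extreme true abilities toward the prior mean.

```python
import numpy as np

rng = np.random.default_rng(0)

def grm_category_probs(theta, a, b):
    """Graded response model: P(X = k | theta) for ordered categories.
    a: discrimination; b: increasing category thresholds (length K-1)."""
    # Cumulative probabilities P(X >= k), padded with boundaries 1 and 0.
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    p_star = np.concatenate(([1.0], p_star, [0.0]))
    return p_star[:-1] - p_star[1:]          # category probabilities, sum to 1

def eap_estimate(responses, items, grid=np.linspace(-4, 4, 161)):
    """Expected a posteriori ability estimate under a N(0,1) prior,
    computed by numerical quadrature on a fixed grid."""
    log_prior = -0.5 * grid**2
    log_like = np.zeros_like(grid)
    for x, (a, b) in zip(responses, items):
        probs = np.array([grm_category_probs(t, a, b) for t in grid])
        log_like += np.log(probs[:, x] + 1e-12)
    post = np.exp(log_prior + log_like)
    post /= post.sum()
    return float(np.sum(grid * post))

# Ten illustrative 5-category items with equal discriminations (hypothetical values).
items = [(1.5, np.array([-1.5, -0.5, 0.5, 1.5])) for _ in range(10)]

for true_theta in (-2.5, 0.0, 2.5):
    resp = [rng.choice(5, p=grm_category_probs(true_theta, a, b))
            for a, b in items]
    print(f"true theta {true_theta:+.1f} -> EAP {eap_estimate(resp, items):+.2f}")
```

Running the sketch shows EAP estimates near zero for an average examinee but noticeably pulled toward 0 for true abilities of ±2.5, mirroring the over- and under-estimation pattern described in the abstract.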
Details
Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators/Authors: Liu, Jingyu
Date: 27 September 2007
Date Type: Completion
Defense Date: 4 June 2007
Approval Date: 27 September 2007
Submission Date: 13 June 2007
Access Restriction: No restriction; release the ETD for access worldwide immediately.
Institution: University of Pittsburgh
Schools and Programs: School of Education > Psychology in Education
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: CAT; multidimensional IRT
Other ID: http://etd.library.pitt.edu/ETD/available/etd-06132007-134603/, etd-06132007-134603
Date Deposited: 10 Nov 2011 19:47
Last Modified: 15 Nov 2016 13:44
URI: http://d-scholarship.pitt.edu/id/eprint/8092