
Unidimensional Vertical Scaling of Mixed Format Tests in the Presence of Item Format Effect

Moore, Debra White (2015) Unidimensional Vertical Scaling of Mixed Format Tests in the Presence of Item Format Effect. Doctoral Dissertation, University of Pittsburgh. (Unpublished)

Primary Text: PDF (2MB)

Abstract

The purpose of this study was to add to the existing body of evidence on vertically scaling mixed-format tests by examining the impact of item format effects, in conjunction with specific configurations of common item sets, on two of the most popular calibration methods under test specification and scaling scenarios likely to exist in practice. In addition to offering advice for practical application, the study explored the impact of explicitly modeling the vertical scale factor when simulating data, compared to a traditional model in which the underlying vertical scale is implied.
Using a CINEG data collection design, six grade-level tests, each consisting of 61 items, were created with a 9:1 ratio of multiple-choice to constructed-response items and two different sets of 14 mixed-format items designated as common items. Ability distributions for 2,000 students per grade level were generated, with the mean ability of successive grade levels increasing at varying increments to simulate grade-level separation, along with four covariance structures reflecting varying degrees of correlation to simulate item format effects.
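To make the data-generation design concrete, the following is a minimal sketch of how correlated multiple-choice (MC) and constructed-response (CR) abilities could be drawn per grade level. The grade means, growth increment, and correlation value shown are illustrative assumptions, not the study's actual design values.

    import numpy as np

    # Minimal sketch: draw bivariate (MC, CR) abilities per grade level.
    # GRADE_MEANS and FORMAT_CORR are illustrative assumptions, not the
    # study's actual design values.
    rng = np.random.default_rng(2015)

    N_PER_GRADE = 2000
    GRADE_MEANS = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]  # assumed growth increments
    FORMAT_CORR = 0.8  # assumed MC/CR correlation (item format effect)

    def simulate_grade_abilities(mean, corr, n=N_PER_GRADE):
        """Draw correlated MC and CR abilities for one grade level."""
        cov = np.array([[1.0, corr],
                        [corr, 1.0]])
        return rng.multivariate_normal([mean, mean], cov, size=n)

    abilities = [simulate_grade_abilities(m, FORMAT_CORR) for m in GRADE_MEANS]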
Under a 3PL-GRM model combination, expected scores were calculated, with recovery of expected score used as the evaluation criterion. Ability and item parameters were estimated using the MLE proficiency estimator in MULTILOG, and transformation constants were calculated in STUIRT using the Stocking-Lord linking method. The performance of separate and pairwise concurrent calibration was then examined by calculating average BIAS and average RMSE values across 100 replications.
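As a rough illustration of the evaluation criteria, the sketch below computes expected test scores under a 3PL/GRM combination and the BIAS and RMSE statistics used to judge recovery. The scaling constant D = 1.7 and the parameter layout are assumptions; the study's actual estimation and linking were carried out in MULTILOG and STUIRT, not in code like this.

    import numpy as np

    D = 1.7  # common IRT scaling constant; an assumption here

    def p_3pl(theta, a, b, c):
        """3PL probability of a correct multiple-choice response."""
        return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

    def expected_grm_score(theta, a, bs):
        """Expected score on a GRM item with ordered thresholds bs.

        Uses E[X] = sum_k P(X >= k) for integer scores 0..K.
        """
        p_star = 1.0 / (1.0 + np.exp(-D * a * (theta - np.asarray(bs))))
        return p_star.sum()

    def expected_test_score(theta, mc_items, cr_items):
        """Expected total score: MC probabilities plus CR expected scores."""
        mc = sum(p_3pl(theta, a, b, c) for a, b, c in mc_items)
        cr = sum(expected_grm_score(theta, a, bs) for a, bs in cr_items)
        return mc + cr

    def bias_and_rmse(true_scores, est_scores):
        """Recovery criteria: average BIAS and RMSE over replications."""
        err = np.asarray(est_scores) - np.asarray(true_scores)
        return err.mean(), np.sqrt((err ** 2).mean())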
While the results of the study provided evidence that item format effects, vertical scaling method, and separation between grade levels significantly impacted the vertical scales, these variables often exerted their influence in combination with one another. The major findings were that (1) pairwise concurrent calibration performed better overall than separate calibration; (2) moderate to large item format effects were more likely to bias the resulting vertical scales; (3) a large separation between grade levels resulted in a more biased vertical scale; and (4) explicitly modeling the vertical scaling factor during data generation influenced mean RMSE values more strongly than mean BIAS values.


Details

Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators/Authors:
Moore, Debra White (Email: dwm9@pitt.edu; Pitt Username: DWM9)
ETD Committee:
Committee Chair: Lane, Suzanne (sl@pitt.edu; Pitt Username: SL)
Committee Member: Stone, Clement A. (cas@pitt.edu; Pitt Username: CAS)
Committee Member: Ye, Feifei (feifeiye@pitt.edu; Pitt Username: FEIFEIYE)
Committee Member: Kirisci, Levent (levent@pitt.edu; Pitt Username: LEVENT)
Date: 27 August 2015
Date Type: Publication
Defense Date: 8 July 2015
Approval Date: 27 August 2015
Submission Date: 5 August 2015
Access Restriction: No restriction; Release the ETD for access worldwide immediately.
Number of Pages: 241
Institution: University of Pittsburgh
Schools and Programs: School of Education > Psychology in Education
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: vertical scaling, mixed format tests, item format effect
Date Deposited: 27 Aug 2015 19:58
Last Modified: 15 Nov 2016 14:29
URI: http://d-scholarship.pitt.edu/id/eprint/25904
