THE EXAMINATION OF THE PSYCHOMETRIC QUALITY OF THE COMMON EDUCATIONAL PROFICIENCY ASSESSMENT (CEPA)-ENGLISH TEST

Daiban, Salma Ali (2009) THE EXAMINATION OF THE PSYCHOMETRIC QUALITY OF THE COMMON EDUCATIONAL PROFICIENCY ASSESSMENT (CEPA)-ENGLISH TEST. Doctoral Dissertation, University of Pittsburgh. (Unpublished)

Abstract

The CEPA-English test is used for achievement, selection, and placement purposes. Because this test heavily influences students' academic futures, it is imperative to ensure that it functions as intended and provides meaningful results. Therefore, the purpose of this study was to examine the technical quality of the CEPA-English test with respect to Forms A and B. The study evaluated 1) the psychometric properties of the CEPA-English test, 2) the extent to which differential item functioning (DIF) occurs, 3) the comparability of Forms A and B, and 4) the amount of information provided at the cutoff score of 150, the mean of the test in the NAPO study. The study sample, taken from the 2007 administration, included 9,496 students for Form A and 9,296 for Form B. For both forms, the results revealed that the unidimensional three-parameter logistic (3PL) item response theory (IRT) model provided a better fit at both the item and test levels than the 1PL or 2PL models, and the assumptions of the 3PL model were met. However, invariance of item parameters did not strictly hold for Form A and held only to some extent for Form B. Overall, the analyses revealed that the CEPA-English test demonstrated good psychometric properties: in both forms, the majority of the items were of moderate difficulty, items discriminated moderately between high-performing and low-performing students, and both forms showed high internal reliability. Even so, the test could be improved by eliminating items with negative discrimination and adding easier items to gain more precise information at the cutoff score of 150. In addition, the test developer may want to evaluate items that misfit the 3PL model. Finally, although DIF items were detected between males and females and between Arts and Sciences students, a substantial proportion of DIF items were flagged by school type, which may indicate curriculum differences across private, public, and home schools. The test developer could therefore evaluate items exhibiting medium or large DIF to determine whether to revise them or eliminate them from Forms A and B of the CEPA-English test.
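
For context, the models and statistics named above have standard textbook forms; the following sketch uses conventional IRT notation (the symbols $a_i$, $b_i$, $c_i$, $\theta$, and the group counts below are the usual conventions, not values or notation taken from the dissertation). Under the 3PL model, the probability that an examinee of ability $\theta$ answers item $i$ correctly is

$$P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-D a_i (\theta - b_i)}},$$

where $a_i$ is the discrimination, $b_i$ the difficulty, $c_i$ the pseudo-guessing lower asymptote, and $D \approx 1.7$ the usual scaling constant; the 2PL sets $c_i = 0$, and the 1PL additionally constrains all $a_i$ to a common value. The precision available at a cutoff is read off the test information function, the sum of the item information functions:

$$I(\theta) = \sum_i D^2 a_i^2 \,\frac{1 - P_i(\theta)}{P_i(\theta)} \left(\frac{P_i(\theta) - c_i}{1 - c_i}\right)^2.$$

Adding easier items raises $I(\theta)$ near the lower ability range, which is the logic behind the recommendation about the scaled cutoff of 150. For DIF, the Mantel-Haenszel procedure listed in the keywords compares reference and focal groups matched on total score level $k$:

$$\hat{\alpha}_{MH} = \frac{\sum_k A_k D_k / N_k}{\sum_k B_k C_k / N_k}, \qquad \Delta_{MH} = -2.35 \ln \hat{\alpha}_{MH},$$

where $A_k$ and $B_k$ are the reference group's correct and incorrect counts at level $k$, $C_k$ and $D_k$ the focal group's, and $N_k$ the level total. In the common ETS classification, $|\Delta_{MH}| < 1$ is treated as negligible and $|\Delta_{MH}| \geq 1.5$ (and significantly above 1) as large, with medium in between.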


Details

Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators/Authors: Daiban, Salma Ali (SDaiban@uaeu.ac.ae)
ETD Committee:
Committee Chair: Lane, Suzanne (sl@pitt.edu)
Committee Member: Stone, Clement A (cas@pitt.edu)
Committee Member: Ye, Feifei (feifeiye@pitt.edu)
Committee Member: Terhorst, Lauren (lat15@pitt.edu)
Date: 17 December 2009
Date Type: Completion
Defense Date: 16 November 2009
Approval Date: 17 December 2009
Submission Date: 10 December 2009
Access Restriction: 5 years (access restricted to the University of Pittsburgh for a period of 5 years)
Institution: University of Pittsburgh
Schools and Programs: School of Education > Psychology in Education
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: DIF; Equating; Validating IRT Models; Dichotomous IRT Models; Equipercentile Equating; Mantel-Haenszel Procedure; CEPA
Other ID: http://etd.library.pitt.edu/ETD/available/etd-12102009-190539/, etd-12102009-190539
Date Deposited: 10 Nov 2011 20:10
Last Modified: 15 Nov 2016 13:54
URI: http://d-scholarship.pitt.edu/id/eprint/10300
