Matsumura, Lindsay Clare, Junker, Brian, Weisberg, Yanna, and Crosson, Amy (2006) Overview of the Instructional Quality Assessment. Technical Report. National Center for Research on Evaluation, Standards, and Student Testing (CRESST), University of California, Graduate School of Education & Information Studies.
Abstract
Educators, policy-makers, and researchers need to be able to assess the efficacy of specific interventions in schools and school districts. While student achievement is unquestionably the bottom line, it is essential to open up the educational process so that each major factor influencing student achievement can be examined; indeed, as a proverb often quoted in industrial quality control goes, “That which cannot be measured, cannot be improved.” Instructional practice is certainly a central factor: if student achievement is not improving, is it because instructional practice is not changing, or because changes in instructional practice are not affecting achievement? A tool is needed to provide snapshots of instructional practice itself, before and after implementing new professional development or other interventions, and at other regular intervals, to help monitor and focus efforts to improve instructional practice. In this paper we review our research program building and piloting the Instructional Quality Assessment (IQA), a formal toolkit for rating instructional quality based primarily on classroom observation and student assignments. In the first part of the paper we review the need for, and some other efforts to provide, direct assessments of instructional practice. In the second part we briefly summarize the development of the IQA in reading comprehension and in mathematics at the elementary school level. In the third part we report on a large pilot study of the IQA, conducted in Spring 2003 in two moderately large urban school districts. We conclude with some ideas about future directions for the IQA.
Details
Item Type: Monograph (Technical Report)
Status: Published
Creators/Authors:

Creators | Email | Pitt Username | ORCID
---|---|---|---
Matsumura, Lindsay Clare | lclare@pitt.edu | LCLARE |
Junker, Brian | | |
Weisberg, Yanna | | |
Crosson, Amy | | |

Monograph Type: Technical Report
Date: 2006
Date Type: Publication
Access Restriction: No restriction; release the ETD for access worldwide immediately.
Publisher: National Center for Research on Evaluation, Standards, and Student Testing (CRESST)
Place of Publication: University of California, Graduate School of Education & Information Studies
Institution: National Center for Research on Evaluation, Standards, and Student Testing
Schools and Programs: School of Education > Learning Sciences and Policy
Refereed: Yes
Official URL: http://www.cse.ucla.edu/products/reports/r671.pdf
Additional Information: Copyright holders: The Regents of the University of California
Date Deposited: 09 Oct 2015 16:38
Last Modified: 01 Nov 2017 14:00
URI: http://d-scholarship.pitt.edu/id/eprint/26209