
Quantifying learning in medical students during a critical care medicine elective: A comparison of three evaluation instruments

Rogers, PL and Jacob, H and Rashwan, AS and Pinsky, MR (2001) Quantifying learning in medical students during a critical care medicine elective: A comparison of three evaluation instruments. Critical Care Medicine, 29 (6). 1268 - 1273. ISSN 0090-3493


Abstract

Objective: To compare three evaluative instruments and determine which can measure different aspects of medical student learning.

Design: Crossover design. Student learning was evaluated before and after a structured critical care elective by using written examinations, an objective structured clinical examination (OSCE), and a patient simulator running two clinical scenarios.

Participants: Twenty-four 4th-yr students enrolled in the critical care medicine elective.

Interventions: All students took a multiple-choice written examination; evaluated a live simulated critically ill patient, requested data from a nurse, and intervened as appropriate at different stations (OSCE); and evaluated the computer-controlled patient simulator and intervened as appropriate.

Measurements and Main Results: Students' knowledge was assessed by using a multiple-choice examination containing the same data incorporated into the other examinations. Student performance on the OSCE was evaluated at five stations. Both OSCE and simulator tests were videotaped for subsequent scoring of responses, quality of responses, and response time. The videotapes were reviewed for specific behaviors by faculty masked to the time of examination. Students were expected to perform the following: a) assess airway, breathing, and circulation; b) prepare a mannequin for intubation; c) provide appropriate ventilator settings; d) manage hypotension; and e) request, interpret, and provide appropriate intervention for pulmonary artery catheter data. Students were expected to perform identical behaviors during the simulator examination; however, that entire examination was performed on the whole-body computer-controlled mannequin. The primary outcome measure was the difference in examination scores before and after the rotation.

The mean preelective scores were 77 ± 16%, 47 ± 15%, and 41 ± 14% for the written examination, OSCE, and simulator, respectively, compared with 89 ± 11%, 76 ± 12%, and 62 ± 15% after the elective (p < .0001). Prerotation scores for the written examination were significantly higher than those for the OSCE or the simulator; postrotation scores were highest for the written examination and lowest for the simulator.

Conclusion: Written examinations measure acquisition of knowledge but fail to predict whether students can apply that knowledge to problem solving, whereas both the OSCE and the computer-controlled patient simulator can serve as effective performance evaluation tools.



Details

Item Type: Article
Status: Published
Creators/Authors:
Creators              Email             Pitt Username
Rogers, PL
Jacob, H
Rashwan, AS
Pinsky, MR            pinsky@pitt.edu   PINSKY
Date: 1 January 2001
Date Type: Publication
Journal or Publication Title: Critical Care Medicine
Volume: 29
Number: 6
Page Range: 1268 - 1273
DOI or Unique Handle: 10.1097/00003246-200106000-00039
Schools and Programs: School of Medicine > Critical Care Medicine
Refereed: Yes
ISSN: 0090-3493
PubMed ID: 11395619
Date Deposited: 20 Mar 2012 16:00
Last Modified: 04 Feb 2019 15:56
URI: http://d-scholarship.pitt.edu/id/eprint/11468
