About the Client:
The client is a globally recognized university system specializing in postgraduate medical and nursing education. It operates one of the largest clinical simulation centers in North America, conducting thousands of high-stakes Objective Structured Clinical Examinations (OSCEs) and Simulated Patient Encounters (SPEs) every year.
Background:
In these simulations, students engage with trained actors portraying patients or, in some cases, with medical mannequins for hands-on procedures, while faculty evaluators assess their performance across both technical and interpersonal skills, from clinical accuracy and diagnosis to empathy and communication.
However, as enrollment grew and simulation scenarios became more complex, evaluation became increasingly burdensome. Faculty had to manually review hours of video, cross-check multiple criteria, and provide detailed feedback. With thousands of sessions conducted annually, maintaining consistency and standardization across evaluators became a significant challenge.
Challenge:
The university’s leadership sought a way to scale the evaluation process without compromising fairness or educational quality. They faced three key pain points:
- Manual, Time-Intensive Evaluation: Reviewing and scoring each session took 45–60 minutes, making it difficult to keep pace with growing simulation volume.
- Lack of Standardization: Evaluation methods varied across one-on-one, multi-student, and multi-room simulations, leading to discrepancies in scoring.
- Subjectivity in Assessing Soft Skills: Attributes like empathy, confidence, and patient engagement were prone to evaluator bias despite standardized rubrics.
The goal was clear: reduce instructor burden, enhance consistency, and deliver faster, more actionable feedback to students.
