Korean J Fam Med 2018; 39(2): 96-100  https://doi.org/10.4082/kjfm.2018.39.2.96
Comparison of Patient-Physician Interaction Scores of Clinical Practice Examination between Checklists and Rating Scale
Nam Eun Kim, Hoon Ki Park*, Kyong Min Park, Bong Kyung Seo, Kye Yeung Park, Hwan Sik Hwang
Department of Family Medicine, Hanyang University College of Medicine, Seoul, Korea
Hoon Ki Park https://orcid.org/0000-0002-8242-0943
Tel: +82-2-2290-8738, Fax: +82-2-2281-7279, E-mail: hoonkp@hanyang.ac.kr
Received: June 30, 2016; Revised: October 7, 2016; Accepted: October 14, 2016; Published online: March 20, 2018.
© Korean Academy of Family Medicine. All rights reserved.

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Background: The clinical practice examination (CPX) was introduced in 2010, and the Seoul-Gyeonggi CPX Consortium developed its patient-physician interaction (PPI) assessment tool in 2004. Both institutions use rating scales on classified sections of the PPI but differ in how they score key components. This study investigated the accuracy of standardized patients' rating-scale scores by comparing them with checklist-based scores, and verified the concurrent validity of the two comparable PPI rating tools.
Methods: A dyspepsia case from an educational CPX module was administered to 116 fourth-year medical students at Hanyang University College of Medicine. One experienced standardized patient rated the examinations using the two different PPI rating scales. She then scored checklists composed of 43 items, derived from the two original PPI scales, while reviewing video recordings of the same students. Pearson's correlation coefficients were calculated between the checklist and rating-scale scores.
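The statistic underlying this comparison can be sketched as follows. This is a minimal illustration of Pearson's correlation coefficient as used in the Methods; the per-student totals below are hypothetical values for demonstration, not data from the study.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical per-student totals: checklist vs. rating scale
checklist_totals = [30, 25, 35, 28, 32, 27]
rating_totals = [3.2, 2.8, 3.6, 3.0, 3.1, 2.9]
r = pearson_r(checklist_totals, rating_totals)
```

In practice such analyses are usually run with a statistics package (e.g., `scipy.stats.pearsonr`, which also returns a p-value for significance testing of the kind reported in the Results).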
Results: The correlations of total PPI scores between the checklist and rating-scale methods were 0.29 for the Korean Medical Licensing Examination (KMLE) tool and 0.30 for the consortium tool. The correlations between the KMLE and consortium tools were 0.74 for the checklists and 0.83 for the rating scales. In terms of section scores, only three of the seven sections in the consortium tool, and two of the five sections in the KMLE tool, showed statistically significant correlations between the two methods.
Conclusion: The rating-scale and checklist methods showed only a weak relationship in the PPI assessment, whereas the two assessment tools correlated highly when the same method was used. The current rating scale therefore requires modification, with its key scoring components reorganized through factor analysis.
Keywords: Physician-Patient Relations; Medical Education; Educational Measurement; Behavior Rating Scale; Checklist
