The clinical practice examination (CPX) was introduced into the Korean Medical Licensing Examination (KMLE) in 2010, and the Seoul-Gyeonggi CPX Consortium developed its patient-physician interaction (PPI) assessment tool in 2004. Both institutions use rating scales over classified PPI sections but differ in how they score key components. This study investigated the accuracy of standardized patient scores across rating scales by comparing them with checklist methods and verified the concurrent validity of the two comparable PPI rating tools.
A dyspepsia case from an educational CPX module was administered to 116 fourth-year medical students at Hanyang University College of Medicine. One experienced standardized patient rated the examinations using the two different PPI rating scales. She also scored a 43-item checklist derived from the two original PPI scales while reviewing video recordings of the same students. Pearson's correlation coefficients were then calculated between the checklist and rating scale scores.
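The correlation step described above can be sketched in a few lines; the per-student totals below are hypothetical, not data from the study.

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-student totals: checklist score vs. rating scale score
checklist_totals = [30, 25, 35, 28, 33, 27]
rating_totals = [4, 3, 5, 3, 4, 3]
r = pearson_r(checklist_totals, rating_totals)
```

Values of r near 0 (as reported for the method comparison) indicate a weak linear relationship; values near 1 (as reported between the two tools under the same method) indicate strong agreement.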
The correlations of total PPI score between the checklist and rating scale methods were 0.29 for the KMLE tool and 0.30 for the consortium tool. The correlations between the KMLE and consortium tools were 0.74 for checklists and 0.83 for rating scales. At the section level, the consortium tool showed statistically significant correlations between the two methods in only three of seven sections, and the KMLE tool in only two of five sections.
The rating scale and checklist methods showed only a weak relationship in PPI assessment, whereas the two assessment tools correlated highly when the same method was used. The current rating scales therefore require modification, with their key scoring components reorganized through factor analysis.
Given emerging evidence of the association between stress and disease, practitioners need a tool for measuring stress. Several instruments exist to measure perceived stress; however, none is suited to population surveys, because the conceptualization of stress can differ across populations. The aim of this study was to develop and validate the Perceived Stress Inventory (PSI) and its short version for use in population surveys and clinical practice in Korea.
From a pool of perceived stress items collected from three widely used instruments, 20 items were selected for the new measurement tool. Nine of these items were selected for the short version. We evaluated the validity of the items using exploratory factor analysis of the preliminary data. To evaluate the convergent validity of the PSI, 387 healthy people were recruited and stratified on the basis of age and sex. Confirmatory analyses and examination of structural stability were also carried out. To evaluate discriminatory validity, the PSI score of a group with depressive symptoms was compared with that of a healthy group. A similar comparison was also done for persons with anxious mood.
Exploratory factor analysis supported a three-factor construct (tension, depression, and anger) for the PSI. Reliability values were satisfactory, ranging from 0.67 to 0.87. Convergent validity was confirmed through correlation with the Perceived Stress Scale, Center for Epidemiologic Studies Depression Scale, and State-Trait Anxiety Inventory. People with depressive or anxious mood had higher scores than the healthy group on the total PSI, all three dimensions, and the short version.
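The abstract does not name the reliability statistic behind the 0.67 to 0.87 range; internal consistency of multi-item scales is most commonly reported as Cronbach's alpha, which can be sketched as follows (the item responses are hypothetical):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale, given one list of scores per item."""
    k = len(items)        # number of items in the scale
    n = len(items[0])     # number of respondents

    def var(xs):          # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var_sum = sum(var(col) for col in items)
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Hypothetical responses: three items rated by five respondents
items = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 3, 5, 3],
]
alpha = cronbach_alpha(items)
```

By the usual rule of thumb, alpha above roughly 0.7 is considered acceptable internal consistency, which is consistent with the range the abstract describes as satisfactory.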
The long and short versions of the PSI are valid and reliable tools for measuring perceived stress. These instruments offer benefits for stress research using population-based surveys.