interrater reliability



in·ter·judge re·li·a·bil·i·ty

in psychology, the consistency of measurement obtained when different judges or examiners independently administer the same test to the same subject.
Farlex Partner Medical Dictionary © Farlex 2012

interrater reliability

The extent to which two independent parties, each using the same tool or examining the same data, arrive at matching conclusions. Many health care investigators analyze graduated data, not binary data. In an analysis of anxiety, for example, a graduated scale may rate research subjects as “very anxious,” “somewhat anxious,” “mildly anxious,” or “not at all anxious,” whereas a binary method of rating anxiety might include just the two categories “anxious” and “not anxious.” If the study is carried out and coded by more than one psychologist, the coders may not agree on the implementation of the graduated scale: one may interview a patient and find him or her “somewhat” anxious, while another might assess the same patient as “very” anxious. The congruence in the application of the rating scale by more than one psychologist constitutes its interrater reliability.
Synonym: interobserver reliability
See also: reliability
Medical Dictionary, © 2009 Farlex and Partners
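The agreement between two coders described above can be quantified. One common statistic is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. The following is a minimal sketch; the two raters, their labels, and the anxiety ratings are hypothetical and chosen only to illustrate the calculation.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters scoring the same subjects."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of subjects on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical psychologists rating the same six subjects on a graduated scale.
rater_1 = ["very", "somewhat", "mildly", "not at all", "somewhat", "very"]
rater_2 = ["very", "very", "mildly", "not at all", "somewhat", "somewhat"]

print(round(cohen_kappa(rater_1, rater_2), 3))
```

Here the raters agree on four of six subjects (67% raw agreement), but kappa is lower because some of that agreement would occur by chance alone.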
References in periodicals archive
Interrater Reliability of the Raters

Raters                           ICC*    Lower** Upper** F-test  df1   df2   p-value
ODU 11/12 Explorer & Thin UI     0.782   0.749   0.810   4.852   1439  2878  <0.05
ODU 11/12 Explorer               0.768   0.725   0.803   4.577   719   1438  <0.05
Thin UI                          0.790   0.750   0.820   5.011   719   1438  <0.05
ODU 11/12 Explorer & Curved UI   0.714   0.684   0.741   3.579   1439  2878  <0.05
ODU 11/12 Explorer               0.737   0.701   0.769   3.858   719   1438  <0.05
Curved UIs                       0.691   0.644   0.732   3.357   719   1438  <0.05

* Intraclass coefficient: <0.50 poor reliability, 0.50-0.75 moderate reliability, 0.75-0.90 good reliability, >0.90 excellent reliability
** Confidence interval: 95%
Table II.
Interrater reliability estimates of the composite scores are presented in Table 4.
These issues are usually addressed through ongoing training and regular interrater reliability checks.
In fact, the intrarater and interrater reliability of US measurement at the basal point were very high, and the intrarater reliability was higher for measurement at the basal point than that at a distance of 5 cm from the basal point.
Interrater Reliability. Two trained oncology healthcare professionals rated the two users during the performance of five exergames for 10 assessments of each item.
Intraclass Correlation Coefficient (ICC) model 2 was used to examine interrater reliability, and ICC model 3 was used to examine intrarater reliability.
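Both ICC models mentioned above derive from a two-way ANOVA on a subjects × raters table of scores: model 2 treats raters as random and measures absolute agreement, while model 3 treats raters as fixed and measures consistency. A minimal single-rating sketch, assuming the Shrout-Fleiss ICC(2,1) and ICC(3,1) forms (the measurement data below are invented for illustration):

```python
def icc(ratings, model=2):
    """ICC(2,1) or ICC(3,1) from a subjects x raters table of scores."""
    n = len(ratings)      # subjects
    k = len(ratings[0])   # raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    # Two-way ANOVA mean squares: subjects (rows), raters (columns), residual.
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum(
        (ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
        for i in range(n) for j in range(k)
    )
    mse = sse / ((n - 1) * (k - 1))
    if model == 3:
        # Raters fixed: consistency; systematic rater offsets are ignored.
        return (msr - mse) / (msr + (k - 1) * mse)
    # Model 2, raters random: absolute agreement; rater variance penalizes the ICC.
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical joint-angle measurements (degrees): 4 subjects x 2 raters.
angles = [[42.0, 44.0], [55.0, 55.5], [60.0, 61.0], [48.0, 47.5]]
print(round(icc(angles, model=2), 3), round(icc(angles, model=3), 3))
```

Note the practical difference: if one rater scores every subject exactly 2 degrees higher than the other, ICC(3,1) is still 1.0 (perfect consistency) while ICC(2,1) drops, because absolute agreement counts the systematic offset as disagreement.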
TABLE 1: Intraclass correlation coefficients (ICC) for studying the intra- and interrater reliability of raters in measuring the hip, knee, and ankle joint angles using the designed MGR.
Two evaluators were trained in the correct scoring of the FLACC (Face, Legs, Activity, Cry, Consolability [16]) scale and underwent an interrater reliability evaluation.
In the structured interview, applicants may be consecutively interviewed by several interviewers who are all trained in interview administration and have established interrater reliability in the use of a standardized EI rating scale.
Their report describes interrater reliability using intraclass correlations (ICC [2,1] = .889), percent agreement (ranging from 92 percent to 96 percent), and levels of agreement (ranging from 57 percent to 100 percent).
reported good intrarater and interrater reliability in a small sample of people with stroke, but they found that 20 percent of the subjects were unable to complete the test [15].