Determining Inter-rater Reliability Of An Innovation Implementation Checklist
Journal of Nursing & Care
Inter-rater reliability is an important consideration in instrument development, as well as in the ongoing fidelity of measurements
that can be somewhat subjective. Cohen's kappa statistic takes chance agreement into account and thus provides a more robust
measure of inter-rater reliability than simple percent agreement. This analysis was an important step in a program evaluation of an innovative,
multi-faceted professional nursing framework that incorporated a newly developed instrument. In order to evaluate the implementation
and diffusion of the innovation, site visits were conducted by a team of two investigators using the instrument, which comprises six
unit-level components. The two investigators met separately with nursing staff and leaders on all study units in 50% of the military
hospitals included in the program evaluation. Using the “Optimized Performance Checklist,” each rated the implementation as met,
not met, or partially met. Each of the 34 units was rated separately on 20 data elements, or items, in the checklist, generating 675
pairs of data elements for the observers. The formula for the kappa statistic, (observed agreement − expected agreement) / (1 − expected agreement),
was applied. The observers agreed on 652 of the 675 ratings, resulting in 97% agreement. However, when taking chance
agreements and disagreements into consideration, the Cohen's kappa statistic was .91. This kappa indicates a very high level of agreement
even when chance is considered. The kappa is an easy-to-calculate statistic that provides a more conservative and realistic estimate of
inter-rater reliability. It should be used when attempting to verify observer fidelity.
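The calculation described above can be sketched in a few lines of code. The function below is a generic two-rater Cohen's kappa for categorical ratings (such as met / partially met / not met): observed agreement is the proportion of paired ratings that match, and expected agreement is computed from each rater's marginal category proportions. The example ratings are purely illustrative and are not the study's data.

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same set of items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each rater's marginal proportions.
    """
    if len(ratings_a) != len(ratings_b) or not ratings_a:
        raise ValueError("Raters must score the same non-empty item set")
    n = len(ratings_a)

    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Expected (chance) agreement from the marginal distributions.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(counts_a) | set(counts_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)

    return (p_o - p_e) / (1 - p_e)

# Illustrative (hypothetical) checklist ratings for two observers.
observer_1 = ["met", "met", "partially met", "not met"]
observer_2 = ["met", "not met", "partially met", "not met"]
print(round(cohen_kappa(observer_1, observer_2), 2))
```

Note that kappa is always at or below percent agreement: with high observed agreement but skewed category marginals, chance agreement rises and kappa falls, which is why the study's 97% raw agreement yields a kappa of .91.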
Patricia A Patrician, PhD, RN, FAAN, is the Donna Brown Banton Endowed Professor at the University of Alabama at Birmingham (UAB). She joined the UAB faculty in 2008 after a 26-year career in the US Army Nurse Corps. She teaches in the PhD Program and conducts research on nurse staffing, the nursing practice environment, and patient and nurse quality and safety outcomes. She is a Senior Nurse Faculty/Scholar in the Veterans Administration Quality Scholars fellowship program, which focuses on the science of quality improvement, and a national Consultant for the Quality and Safety Education for Nurses program.