ISSN: 2167-1168

Journal of Nursing & Care
Open Access



Determining inter-rater reliability of an innovation implementation checklist

6th World Nursing and Healthcare Conference

Patricia A Patrician, Lori A Loan, Pauline A Swiger, Sara Breckenridge, Mary S McCarthy, Julie J Freeman and Donna L Belew

University of Alabama at Birmingham, USA; European Regional Medical Command, USA; Madigan Army Medical Center, USA; Fort Belvoir Community Hospital, USA; The Geneva Foundation, USA

Scientific Tracks Abstracts: J Nurs Care

DOI: 10.4172/2167-1168.C1.019

Abstract
Inter-rater reliability is an important consideration in instrument development as well as in maintaining the fidelity of measurements that can be somewhat subjective. Cohen's kappa statistic takes chance into account and therefore provides a more robust measure of agreement than simple percent agreement between raters. This analysis was an important step in a program evaluation of an innovative, multi-faceted professional nursing framework that incorporated a newly developed instrument. To evaluate the implementation and diffusion of the innovation, site visits were conducted by a team of two investigators using the instrument, which comprises six unit-level components. The two investigators met separately with nursing staff and leaders on all study units in 50% of the military hospitals included in the program evaluation. Using the "Optimized Performance Checklist," each investigator rated the implementation of each element as met, not met, or partially met. Each of the 34 units was rated separately on 20 data elements, or items, in the checklist, generating 675 pairs of ratings for the two observers. The formula for the kappa statistic, kappa = (observed agreement - expected agreement) / (1 - expected agreement), was then applied. The observers agreed on 652 of the 675 ratings, for 97% observed agreement; after accounting for chance agreements and disagreements, Cohen's kappa was 0.91, indicating a very high level of agreement even when chance is considered. Kappa is an easy-to-calculate statistic that provides a more conservative and realistic estimate of inter-rater reliability than percent agreement alone, and it should be used when attempting to verify observer fidelity.
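
For illustration, the kappa calculation described in the abstract can be reproduced in a few lines. The sketch below (in Python, using hypothetical ratings rather than the study's data) computes observed agreement, chance-expected agreement from each rater's marginal proportions, and kappa = (observed agreement - expected agreement) / (1 - expected agreement).

    # Minimal sketch of the Cohen's kappa calculation described in the abstract.
    # The ratings below are hypothetical examples, not data from the study; the
    # three categories mirror the checklist's "met", "partially met", "not met".
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Kappa = (observed agreement - expected agreement) / (1 - expected agreement)."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)

        # Observed agreement: proportion of items on which both raters agree.
        p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

        # Expected (chance) agreement: for each category, the product of the two
        # raters' marginal proportions, summed over categories.
        counts_a = Counter(rater_a)
        counts_b = Counter(rater_b)
        p_expected = sum(
            (counts_a[cat] / n) * (counts_b[cat] / n)
            for cat in set(rater_a) | set(rater_b)
        )

        return (p_observed - p_expected) / (1 - p_expected)

    # Hypothetical example with the three checklist categories.
    rater_1 = ["met", "met", "partially met", "not met", "met", "met"]
    rater_2 = ["met", "met", "partially met", "met", "met", "met"]
    print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.6

Applied to the figures reported in the abstract (652 agreements out of 675 paired ratings, or about 97% observed agreement), the same formula yields the kappa of 0.91 once the agreement expected by chance, derived from the raters' marginal distributions, is subtracted out.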
Biography

Patricia A Patrician, PhD, RN, FAAN, is the Donna Brown Banton Endowed Professor at the University of Alabama at Birmingham (UAB). She joined the UAB faculty in 2008 after a 26-year career in the US Army Nurse Corps. She teaches in the PhD program and conducts research on nurse staffing, the nursing practice environment, and patient and nurse quality and safety outcomes. She is a Senior Nurse Faculty/Scholar in the Veterans Administration Quality Scholars fellowship program, which focuses on the science of quality improvement, and a national consultant for the Quality and Safety Education for Nurses program.

Email: [email protected]
