ISSN-2155-9929
Journal of Molecular Biomarkers & Diagnosis


Study of Quality Assurance Programs in Anatomic Pathology that Drive Improved Proficiency, Reduce Cost and Enhance Positive Patient Outcomes

Mark Priebe MT*

Quality Star LLC, 17117 Oak Drive, Omaha, Nebraska, USA

*Corresponding Author:
Mark Priebe MT
Managing Director, Quality Star LLC
17117 Oak Drive, Omaha
Nebraska, USA
E-mail: [email protected]

Received date: August 01, 2016; Accepted date: August 28, 2016; Published date: August 30, 2016

Citation: Priebe MTM (2016) Study of Quality Assurance Programs in Anatomic Pathology that Drive Improved Proficiency, Reduce Cost and Enhance Positive Patient Outcomes. J Mol Biomark Diagn 7: 297. doi:10.4172/2155-9929.1000297

Copyright: © 2016 Priebe M. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


Abstract

Objective: To review the frequency and related impact of interpretive errors in anatomic pathology, and to assess how well quality assurance (QA) programs help reduce diagnostic interpretive error in surgical pathology.

Design: Across an extensive number of published studies, the rate of major discrepancies identified for cancer patients referred to another institution ranges from 4.6% to 14.7%, depending on tissue type. However, published data indicate that current intra-laboratory QA programs detect these discrepancies at a rate of only 0.8% to 1.7%. To help understand the cause of this gap, four formal QA case review programs, both inter- and intra-laboratory, were evaluated against a set of selected design attributes known to help identify interpretive error. Peer-reviewed literature was searched to support each program's percent compliance with the attributes, and strengths, drawbacks, and best demonstrated practices were identified.

Results: No program met 100% of the selected attributes; compliance ranged from 29% (2 of 7 attributes met) to 86% (6 of 7 attributes met).

Conclusion: Laboratories should be aware of the limitations of each QA program and, taking into consideration their case and pathologist mix and current on-site concerns, select a program whose attributes best match their QA needs. In general, programs that are standardized, include external review by subspecialists, and are performed close to the final sign-out date may offer the greatest amount of error discovery and the greatest potential to positively influence patient outcomes and continuous improvement. Although this study focused on discordance in cancer-related surgical pathology, case review can also be an effective tool for discovering all histology/cytology diagnostic and clerical discrepancies.

Keywords

Quality assurance; Interpretive error; Surgical pathology; Diagnostic error

Introduction

Two significant publications have emerged over the past 18 months calling attention to the need for an enhanced focus on diagnostic quality and the laboratory/pathologist contribution to diagnostic discordance. The first, the Institute of Medicine report "Improving Diagnosis in Health Care" (November 2015), concluded that "improving the diagnostic process is not only possible, but also represents a moral, professional and public health imperative". A supporting article from Johns Hopkins estimates that medical errors may result in 250,000 deaths per year, making medical errors the third most common cause of death in the US [1]. When it comes to anatomic pathology (AP), getting the diagnosis right the first time is imperative, especially in the diagnosis of cancer: the appropriate treatment plan and therapy are critical to successful patient outcomes.

The second, "Interpretive Diagnostic Error Reduction in Surgical Pathology and Cytology", is an expert panel review of over 100 published studies on diagnostic discrepancy in AP. The findings document a median discrepancy rate of 18% and a major discrepancy rate of 7.4% for surgical pathology. When the studies are examined more closely, external case review proves roughly five-fold more sensitive in detecting discrepancies than internal review [2] (Table 1).

Study type (no. of studies) | Median discrepancy rate, % (range) | Median major discrepancy rate, % (range)
Surgical pathology (147) | 18.3 (7.5-37.4) | 6.3 (1.9-10.6)
External case review (135) | 23.0 (10.6-40.2) | 7.4 (4.6-14.7)
Internal case review (57) | 10.9 (3.8-17.6) | 1.2 (0.3-3.1)

Table 1: Summary of studies on the frequency of interpretive errors.

Every year, 60 million surgical biopsies are performed and 1.6 million Americans are diagnosed with cancer [3]. As pathology and radiology both play a significant role in the diagnostic process, it is worth noting that radiology has targeted a major discrepancy rate of <2% as its quality goal [4].

Current quality tools in AP are no longer acceptable. Over the past 20 years, laboratories have made significant investments in quality initiatives. On the clinical side, with increasing adoption of automation and standardized sample handling, quality has improved proportionally. On the anatomic side, with more subjectivity and far less automation, those investments have had a less significant impact and quality has improved only marginally. Plotting Raab's work on major discrepancies by year of study shows only a slight negative slope in our progress. Clearly, next-generation quality tools need to be implemented if we want any significant reduction in diagnostic discrepancies. The evidence indicates a compelling gap in our current quality practices and an opportunity to improve quality assurance initiatives (Figure 1).


Figure 1: Major discrepancies in surgical pathology identified by year of published study.

Marginal quality progress has a high cost

Major medical institutions are focusing on quality metrics of diagnostic accuracy, publishing their error rates and their efforts to reduce them. The University of Pennsylvania Medical Center estimates that treatment costs due to interpretive errors in anatomic pathology average $21,444 ($10,803-$26,661) per occurrence, with approximately 281 such cases annually within their institution [5,6]. MD Anderson Cancer Center reported that, in an interinstitutional review of 2,718 patient cases referred to them during September 2011, 18.7% presented with minor discrepancies and 6.2% (169 patients) with major discrepancies. A financial review of 8 major breast discrepancies identified an average cost impact of $70,000 ($18,560-$115,800) per case [5,6].
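As a rough check on the institutional burden these figures imply, the sketch below simply multiplies the per-occurrence cost by the annual case count quoted above; the variable names are illustrative, and the product is not a figure reported in the cited studies:

```python
# Back-of-the-envelope annual burden from the figures quoted above (illustrative).
cost_per_error_usd = 21_444   # average treatment cost per interpretive error [5,6]
errors_per_year = 281         # estimated annual occurrences at one institution [5,6]

annual_burden = cost_per_error_usd * errors_per_year
print(f"Estimated annual institutional cost: ${annual_burden:,}")
# -> Estimated annual institutional cost: $6,025,764
```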

Outcomes of an external QA program implementation

In actual practice, implementing an external QA case review program using subspecialists as reviewers produced a significant reduction in deferral rates over time. The QA program spanned 51 months and totaled 354 QA cases reviewed across 10 subspecialties. Deferral rates improved from an initial assessment rate of 10% to 3% by the end of the 51-month study. The greatest reduction came in the first two years of program implementation, and rates remained relatively stable for the remaining two years of the study (Figure 2) [7].


Figure 2: Overall deferral rate (%) over time (1-month intervals).

Cost of readmission

Measuring 30- and 60-day readmission rates is a quality metric required by CMS. A recent study examined 30-day readmissions at the Ohio State University Wexner Medical Center comprehensive cancer hospital: of 2,531 inpatient admissions of CMS patients over 6 months, 11% of patients experienced at least one readmission.

The most common causes for first readmission were new diagnoses not present at first admission (n=43, 23%), new or worsening symptoms due to cancer progression (n=40, 21%) and complications of procedures (n=25, 13%). There were 38 (21%) initial readmissions classified as potentially preventable.

The study did not attempt to quantify the impact of diagnostic pathology discrepancies but did note the contributing impact of misdiagnosis [8]. As to the cost of readmissions, the Cleveland Clinic found that each readmission in general medical oncology cost $18,365 on average [9].

Need for change

When things go wrong, 46% of diagnostic errors originate in pathology and radiology [10], while 97% of cancer diagnoses are based on the pathology specimen [11,12]. As noted above, hundreds of studies have shown that the rate of major discrepancies identified for cancer patients referred to another institution ranges from 4.6% to 14.7%; yet published data indicate that current intra-laboratory quality assurance (QA) programs detect these discrepancies at a rate of only 0.8% to 1.7% [5,13].

We will review the current and next-generation quality assurance programs to understand their strengths, limitations, and potential to help close this quality measurement gap.

Methods

For most laboratories, the quality strategy comprises multiple QA/QC programs that best fit the institution's patient mix, staff experience, and specialty status. QA programs can be formal (scheduled, predictable in volume and timing, and under the laboratory's control) or informal (serving a QA function, but lacking a formal schedule or frequency and outside the laboratory's control) (Table 2).

Formal quality assurance programs | Informal quality assurance programs
Retrospective case review (intra- and inter-laboratory) | Autopsy
Proficiency testing | Diagnostic consult (internal or external)
Prospective case review | Patient referral

Table 2: QA programs.

For this discussion we will focus on the formal QA programs. Although the informal programs can offer a wealth of quality information and should be tracked and documented as part of the overall quality program, they cannot be fairly applied or routinely scheduled. In addition, such programs apply only to known positive cases, missing the opportunity for discovery in false-negative cases. Although CLIA mandates a 10% QA slide review in gynecologic cytology, no such mandate exists for surgical pathology. In a CAP Q-Probes survey (May 2012) with 73 laboratories responding, of the 56 that answered this question, 45% reported using post-sign-out (retrospective) case review as the means to detect defects, followed by "don't know" (29%), clinician request (21%), and tumor conference (5%) (Table 3).

Attribute | Proficiency testing | Internal case review (retrospective) | Internal case review (prospective) | External peer case review by subspecialist (retrospective)
Standardized | * | - | - | *
Benchmarking | * | - | - | *
Subspecialty review | * | ? | ? | *
Detects false-negative and false-positive cases | - | * | * | *
QA of the total process | - | * | * | *
Influences the diagnosis in real/near-real time | - | ? | * | ?
Does not add to the pathologist's workload | - | - | - | *
Key positive feature(s) | Established minimum quality tool | Most common QA practice | Real time | External subspecialist review; does not use pathologist time
Negative considerations | Does not QA the full case detail from gross to report | Demanding on pathologist and technologist time; limited subspecialty coverage; bias and conflict | Most demanding on pathologist and technologist time; requires a significant depth of on-site subspecialty | Program needs to be double-blinded for confidentiality
Best demonstrated practice | CAP and ASCP proficiency programs | ADASP guidelines on QC and QA in AP | UPMC | QualityStar™ external QA case review by subspecialist

Table 3: Current formal quality assurance programs for AP.

Results

Proficiency testing (PT) or external quality assurance (EQA)

Proficiency testing compares a laboratory's test results on unknown specimens (usually digital images) with results from other laboratories. It is the most established QA program and should be considered the minimum requirement for AP laboratory quality assurance. Clinical feedback and referral to subspecialists are provided, and standardization allows for national benchmarking. PT programs from CAP, ASCP, and others are approved by the American Board of Pathology (ABP) and meet Part IV requirements for Maintenance of Certification (MOC); see the ABP website for a complete listing of PT programs that are Part IV compliant.

Drawbacks: it adds to pathologist workload, does not offer full case review from gross specimen to clinical report, and is not representative of the pathologist's or laboratory's caseload. Gaining the added value of subspecialist review requires a significant volume and depth of pathology specialty.

Internal case review (Retrospective)

A random selection of 1% to 10% of cases, or more, undergoes secondary QA case review, also referred to as peer review. This is the most common QA case review practice today; it allows complete case review and is representative of the pathologist's workload. If performed prior to final sign-out, it may influence the diagnosis. This program can also be used for MOC Part IV.

Drawbacks: it is subject to on-site biases and personnel conflicts, and it is not standardized, so benchmarking between institutions is difficult. Most laboratories also lack true peer review by subspecialists in all tissue types and pathology specialties.
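As a minimal illustration of the random selection step described above, the sketch below draws a reproducible sample of signed-out cases for secondary review; the function name, accession-number format, and 10% rate are hypothetical, not taken from any of the programs cited:

```python
import random

def select_qa_cases(case_ids, review_rate=0.10, seed=None):
    """Randomly select a fraction of signed-out cases for secondary QA review."""
    rng = random.Random(seed)                          # seeded for reproducibility
    n_review = max(1, round(len(case_ids) * review_rate))
    return rng.sample(case_ids, n_review)

# Example: pick 10% of a month's surgical pathology cases for peer review.
cases = [f"S16-{i:05d}" for i in range(1, 501)]        # 500 hypothetical accession numbers
qa_batch = select_qa_cases(cases, review_rate=0.10, seed=42)
print(len(qa_batch), "cases selected for QA review")   # -> 50 cases
```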

Internal case review (Prospective)

Case review as above, but performed prior to sign-out, in real time, allowing the findings to influence the final diagnosis and add comments that may contribute to enhanced patient care. An elegant example was presented in [13], demonstrating similar error rates pre- and post-sign-out with no effect on case turnaround time. This program can also be used for MOC Part IV.

Drawbacks: the program requires a depth of pathology subspecialty, software (AP/LIS), and development support not found in most AP laboratories. Because the program is not standardized, it is difficult to benchmark against similar programs nationally. It is also subject to on-site biases and personnel conflicts.

External (Peer) case review by sub-specialist (Retrospective)

This is a comprehensive AP QA program built around case review outside the institution (inter-laboratory), representing a next generation of quality intelligence. It offers a significant enhancement (roughly five-fold) in the ability to provide quality feedback for guidance and continuous improvement. If performed prior to final sign-out, it may influence the diagnosis. Two characteristics stand out when comparing the sensitivity of error detection between intra- and inter-laboratory case review: (1) the incremental case scrutiny gained by using subspecialists rather than generalist pathologists for review, and (2) the reduction of on-site bias and feedback confrontation achieved by moving the review outside the institution.

In this program, cases can be submitted as glass slides or digital images (cases are de-identified prior to submission, and digital images are uploaded to a secure cloud). Academic medical centers that are also National Cancer Institute (NCI) sites provide blinded subspecialist case review. The benefit is a standardized program that allows benchmarking at an increased level of granularity without adding to the pathologist's workload. The program is also ABP-approved for MOC Part IV and is the only such patient safety organization recognized by the Agency for Healthcare Research and Quality (AHRQ).

Drawbacks: blinding each case prior to submission requires additional effort, and uploading multiple whole-slide images (WSI) takes time and may need to be coordinated within the lab. Laboratories without digital imaging must mail case slides to a secure, confidential site for digitizing.

Discussion

The strongest impact on reducing interpretive diagnostic error in AP would come from truly transforming quality assurance for better outcomes. The data support external peer review, by subspecialists, close to sign-out, as the primary benchmark for measuring diagnostic accuracy; however, most QA programs lack one or all of these attributes. If we want to make a meaningful change in quality, we need to raise the bar on our quality metrics and challenge whether a 1% improvement over 15+ years is acceptable.

It is very difficult for a general pathologist to stay current in all organ systems and cancer types. As with all disciplines, frequency of interactions builds confidence and skills, and helps keep practitioners current with evolving diagnostic tools such as molecular assays, biomarkers and immunohistochemical stains. Having subspecialists on site is rare in the average hospital setting (3-4 pathologists), and having multiple subspecialists to provide quality assurance peer review is extremely rare. Laboratories should feel comfortable in going outside of their institution to seek benchmarking and learning opportunities.

Quality intelligence can influence current interpretive diagnostic behavior; however, by itself it has no value unless it is reviewed, corrective action is implemented, follow-up monitoring confirms improvement, and surveillance tracks the adoption of the corrective action. A good review of managing this process can be found in [14].

Diagnostic accuracy is often claimed, but less often measured. If you don't measure, you don't know. Today, broader, next-generation quality measurement is voluntary, but it is approaching compulsory. Treat your quality intelligence with confidentiality and re-establish best practices to raise the bar for diagnostic accuracy and better patient care. The original laboratory is in the best position to:

• Determine whether a discordant diagnosis has already been identified through other quality or clinical review mechanisms.

• Assess whether clinical follow-up is needed, and whether an opportunity exists for improved care for a particular patient.

• Set goals for best practices. Knowing the clinically meaningful diagnostic discordance frequency of the lab or pathologist gives you the ability to accept the current quality metric or establish a new improvement goal.

• Implement corrective action in the form of training, policies, and/or procedures.

• Establish longitudinal tracking with external benchmarking of related cases to measure the effectiveness and tuning needs of the corrective action process.

As professionals in the healthcare system, we realize that focusing on quality is imperative. The fact that you are reading this article is attestation that you did not enter the healthcare field simply to maintain the status quo. Each of the programs reviewed in this article can contribute to your quality initiatives. The goal is to build a quality tool set with the most effective and cost-saving programs that will most rapidly close the gap on diagnostic errors in anatomic pathology. Removing five percentage points of diagnostic error, by moving the major diagnostic discrepancy rate from 7% to 2%, may impact 80,000 patients and save $1.7 billion annually in healthcare costs; the arithmetic is sketched below. With all the things on our desks, closing this gap is worthy of our attention.
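That closing estimate can be reproduced from figures cited earlier in the article (1.6 million annual US cancer diagnoses [3] and an average treatment cost of $21,444 per interpretive error [5,6]); pairing these two inputs is this sketch's assumption, not a calculation published in the cited studies:

```python
# Reproducing the closing estimate from figures cited earlier in the article.
annual_cancer_diagnoses = 1_600_000   # new US cancer diagnoses per year [3]
rate_reduction = 0.07 - 0.02          # major discrepancy rate moved from 7% to 2%
cost_per_error_usd = 21_444           # average treatment cost per error [5,6]

patients_impacted = annual_cancer_diagnoses * rate_reduction
savings = patients_impacted * cost_per_error_usd
print(f"Patients impacted: {patients_impacted:,.0f}")    # -> 80,000
print(f"Estimated savings: ${savings / 1e9:.1f} billion")  # -> $1.7 billion
```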

Acknowledgment

Thank you to Monique Spence for her editorial review and contribution.

References
