
Quality assurance and patient safety protocols for breast and gynecologic pathology in an Academic Women’s Hospital

Abstract

Background

Quality assurance and peer-review practices in surgical pathology have been well described in the literature, but the majority of these reports apply to the realm of general surgical pathology. We focused on the peer-review reporting system of a specialty women’s health pathology practice consisting exclusively of breast and gynecologic pathology, with the specific aim of identifying diagnostic discrepancies that affected patient care.

Methods

Quality measures in this specialty practice are continuously monitored, and the Medical Director reviews all amended/corrected reports. Error types are qualitative and are categorized according to their impact on patient care. We reviewed QA data on all amended breast and gynecologic pathology reports from 2012 to 2014 as a measure of error type and frequency.

Results

Of all specimens during this period, 343 (0.54% of all reports) required amendment due to a discrepancy discovered by a QA metric. Breast specimens demonstrated a higher amendment rate than GYN specimens (1.14% of breast specimens versus 0.27% of GYN specimens). The most common error type requiring an amendment for both breast and GYN specimens was type A, or Minor Disagreement (reports amended for a type A discrepancy: 78.7% of total; 81.9% of breast; 72.6% of GYN). Type B, or Moderate Disagreement, discrepancies accounted for 21.3% of all amended cases (reports amended for a type B discrepancy: 18.1% of breast; 27.3% of GYN). Of all breast and GYN reports reviewed during the QA evaluation, no cases were categorized as type C, or Major Disagreement, which would significantly alter patient treatment.

Conclusion

When surgical pathology is practiced in a laboratory utilizing comprehensive quality assurance protocols, major diagnostic interpretation errors are infrequent. The practice minimizes error, maximizes patient safety, and maximizes real-time educational opportunities for practicing pathologists.

Background

Surgical pathologic diagnoses direct patient treatment; correct diagnostic interpretation is therefore essential for proper patient management. Recently published data on diagnostic disagreements among pathologists evaluating breast biopsy specimens alarmed the public by reporting an almost 25% discordance rate among participating pathologists, particularly for diagnoses of atypia [4]. Unfortunately, that study misrepresented the true practice of pathology, generating misleading data in a non-CLIA laboratory research environment.

In response to that study, we evaluated our own QA data to determine the frequency of diagnostic discrepancies (as measured by examination of amended reports) in breast and GYN pathology at our institution, an academic Women’s Hospital where comprehensive quality assurance protocols are practiced to detect and remedy significant diagnostic error and maximize patient safety.

Laboratory medicine is a highly structured field whose accuracy and safety have been continuously evaluated and regulated for the last several decades. To provide diagnostic information to other clinicians, pathologists draw on an abundance of diagnostic tools and consultation in forming a diagnostic judgment: access to the patient’s electronic medical record, access to radiographic images, submission of additional tissue levels, specialized immunohistochemical stains, access to prior related specimen slides, and, in some cases, submission of additional tissue. Importantly, after a diagnosis is rendered using these tools, case material is frequently re-evaluated through various QA measures. Practicing laboratories employ these QA strategies not only to decrease diagnostic error but also to meet federal regulatory guidelines for accreditation. Within this setting, diagnostic errors may still occur, but the rate of diagnostic errors that could result in major harm to a patient is low [2, 13, 17, 19, 25].

The goal of this study was to review our quality metrics to determine whether our QA protocols minimize serious events that might alter patient management.

Design and methods

In this study, we retrospectively assessed error frequency and severity occurring in breast and gynecologic (GYN) pathology specimens by reviewing QA data on intradepartmental reports requiring an amendment.

The quality assurance/peer review protocols practiced and monitored at our institution include 10% intradepartmental random case review, frozen section/permanent section diagnosis correlation, intradepartmental consensus conference review, review of cases presented at multidisciplinary tumor boards, double independent reads of all breast core biopsies and all new malignancies prior to sign-out, review of prior biopsy materials concurrently with surgical resections, and real-time cytohistologic correlations. In addition, for all cases sent to outside institutions at patient request, the incoming outside reports are reviewed by the Chief of service, and pathologists are required to issue addendum reports stating that their report was reviewed by an outside institution and that the diagnoses were in agreement. If a diagnosis does not agree with that of the outside institution, the case is reviewed by the Chief of service to adjudicate the diagnosis, with further external consultation if necessary.

The peer review processes are all monitored by the Chief of service, who also serves as the head of the departmental quality assurance committee. The Chief, in concert with the QA manager, initiates an amended report when the criteria described below are met, and assigns an error severity based on the report defect and its impact on patient care. Impact on patient care is determined individually for each case, taking into account the pathology report result, the clinical information supplied, electronic medical record information, and clinician input.

Error severity in amended reports is designated as follows. Type A, Minor Disagreement: a spelling or formatting error with no bearing on patient care. Type B, Moderate Disagreement: a report deficit or omission, possibly including incorrect information, that would not result in a change in patient care at our institution, for example an error of omission, missed lymphovascular invasion, or incorrect grading of a breast carcinoma. Type C, Major Disagreement: a major diagnostic discrepancy that would be considered a serious event warranting a change in the patient’s treatment plan [Table 1]. Type C errors are by definition major report defects that would immediately impact patient care, such as misinterpretation of the biological nature of a tumor (benign versus malignant). If there is any doubt whether an error is type B or C, the Chief of service discusses the case with the treating physician to properly assign the error type.

Table 1 Amendment error severity categories
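
To make the taxonomy concrete, the sketch below models the three severity categories and the triage logic described above. It is an illustration only, not software used by the laboratory; the class, field, and function names are our own invention.

```python
from dataclasses import dataclass
from enum import Enum


class ErrorSeverity(Enum):
    """Amendment error severity categories (Table 1)."""
    A = "Minor Disagreement: spelling/formatting, no bearing on patient care"
    B = "Moderate Disagreement: report deficit, no change in management"
    C = "Major Disagreement: serious event warranting a treatment change"


@dataclass
class AmendedReport:
    # Hypothetical summary of the reviewer's findings for one report.
    clerical_only: bool       # defect limited to spelling/formatting
    changes_management: bool  # defect would alter the treatment plan


def assign_severity(report: AmendedReport) -> ErrorSeverity:
    """Triage an amended report into the categories above.

    Per the Methods, any doubt between B and C is resolved by the Chief
    of service in consultation with the treating physician; that human
    step is not modeled here.
    """
    if report.clerical_only:
        return ErrorSeverity.A
    if report.changes_management:
        return ErrorSeverity.C  # triggers MCARE reporting within 24 h
    return ErrorSeverity.B
```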

A type C amended report mandates reporting to the medical staff patient safety office within 24 h of error discovery, per Act 13 of 2002, the Pennsylvania Medical Care Availability and Reduction of Error (MCARE) Act. The treating physician is immediately notified of the impending amendment, and the case is discussed with that physician to verify the degree of patient care impact. The MCARE Act also mandates written notification to the patient within 7 days of discovery.

Pathology slides from amended reports are reviewed by QA committee members. The QA committee has three members, including the Chief, who meet quarterly to review the details of all amended reports, which are pre-reviewed and assigned by the Chief of service. The committee determines the final error category for each case based on clinical information from the medical record and the impact of the amended report on treatment decisions. Feedback on error assignment is distributed to pathologists, who have the opportunity to review the case slides. Cases deemed to have teaching value are reviewed as anonymous unknowns at consensus conference.

All breast and GYN surgical pathology reports amended for any reason from 2012 to 2014 were included in this study.

All surgical pathology reports have two parts: the gross description, and the final diagnosis with microscopic description and correlative comments, both dictated by pathologists and transcribed by secretarial staff. There are a limited number of templates that pathologists may use for specific report types, such as breast cancer biomarkers; templates are also typed by secretaries. The assignment of errors applies not only to the main diagnostic header but also to supplementary studies such as immunohistochemistry results and prognostic/predictive biomarkers. If biomarker results are reported erroneously, an amended report must be issued and the error impact (B or C) assigned in consultation with the treating physician.

Fifteen pathologists worked in the surgical pathology laboratory during the study period, with post-training experience ranging from 4 to 34 years.

The QA process described here was introduced in the laboratory in mid-2011. Prior to 2011, pre-signout second review of benign breast core biopsies was not performed. While discrepancy rates were low prior to 2011, they were not quantified.

The QA program did not require a significant learning curve, because the data being recorded were part of the normal case reviews that pathologists already performed. The flatness of the learning curve was borne out by the three new clinical fellows who entered our fellowship program each year: they had no difficulty following the program and contributing to the collected data, and typically reached a plateau in the QA program within their first 3 months of practice.

The rate of error and the frequency of each error type were evaluated for each specimen group. The top five type B errors for breast and gynecologic cases are shown in Tables 2 and 3.

Table 2 Top Five Type B Breast Pathology Report Errors
Table 3 Top Five Type B Gynecologic Pathology Report Errors

Results

In total, 63,665 breast and GYN specimen reports were created from 2012 to 2014: 44,005 GYN resection specimens and biopsies and 19,660 breast resection specimens and biopsies. Of all specimens during this period, 343 (0.54% of all combined breast and GYN reports) required amendment due to a discrepancy discovered by a QA metric. All amended reports were reviewed by the Chief of service and assigned an error designation. Breast specimens demonstrated a higher amendment rate than GYN specimens (1.14% of breast specimens versus 0.27% of GYN specimens). The most common error type requiring an amendment for both breast and GYN specimens was type A, or Minor Disagreement (reports amended for a type A discrepancy: 78.7% of combined breast and GYN; 81.9% of breast; 72.6% of GYN). Type B, or Moderate Disagreement, discrepancies accounted for 21.3% of all amended cases combined (reports amended for a type B discrepancy: 18.1% of breast; 27.3% of GYN). Of all breast and GYN reports reviewed during the QA evaluation, no amended cases were categorized as type C, or Major Disagreement, which would significantly alter patient treatment [Table 4].

Table 4 Frequency of Amended Reports by Specimen and Error Type
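
As a sanity check on the figures above, the snippet below reproduces the reported rates from the published denominators. The per-type amended counts are back-calculated from the reported percentages (they are not given directly in the text), so treat them as approximate.

```python
# Reported denominators for 2012-2014.
gyn_total = 44_005
breast_total = 19_660
all_total = gyn_total + breast_total  # 63,665 combined reports

amended_total = 343
print(f"overall amendment rate: {amended_total / all_total:.2%}")  # ~0.54%

# Approximate per-type counts, back-calculated from the reported rates.
breast_amended = round(0.0114 * breast_total)  # ~224 breast amendments
gyn_amended = round(0.0027 * gyn_total)        # ~119 GYN amendments
print(breast_amended + gyn_amended)            # ~343, consistent with the total
```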

Discussion

Retrospective review of our combined QA data for GYN and breast specimen reports from 2012 to 2014 demonstrates a low diagnostic discrepancy rate (0.54%), with the most common error being type A, or Minor Disagreement: a spelling or formatting error within the report. When a diagnostic error did occur, the effect on patient care was minimal, and there were no instances of major diagnostic discrepancy. We credit this low discrepancy rate, in part, to the comprehensive QA measures in place at our institution.

One may question the lack of serious events in this study as a weakness, citing the scenario in which everyone misinterprets a case and the error is therefore never perceived. While such a scenario is possible, it is extremely unlikely given the multi-faceted, comprehensive approach of the described peer review process. The peer review redundancies described herein provide a system of checks and balances that is difficult to circumvent, and they appear to have prevented serious events.

Discrepancies in surgical pathology (as in all other medical fields) exist even when the utmost care is put into rendering a diagnosis. Because the interpretation of a histologic specimen is “more subjective” than a standard clinical laboratory test, factors such as the pathologist’s experience, the clinical information provided with a case, and the use of ancillary studies can contribute to variation in the accuracy of a diagnosis [13]. Diagnostic error has been extensively studied and categorized in various ways in the literature, and studies of discrepancies in surgical pathology reports demonstrate a range of error rates, with certain organ systems, such as skin lesions, breast, and bone and soft tissue, showing an overall higher rate of disagreement than others [16, 17, 25]. In 2014, the CAP published data from its 2011 Q-Probes study, which prospectively examined post-signout changes to surgical pathology reports from 73 institutions over a 3-month span to establish benchmarks for error rates in surgical pathology. Defects were classified using the error taxonomy suggested by Meier et al. [11]. In that study, 1,688 report defects were discovered among the 360,218 reports reviewed, an overall defect rate of 0.47% [25]. While over half of these report errors were classified as “other defects,” mainly typographical or dictation errors, misinterpretation errors accounted for 14.6% of the overall report errors and were found most commonly in skin and breast specimens [25]. More recently, a large CAP review of 137 published articles on interpretive errors in surgical pathology and cytology demonstrated a median major discrepancy rate in surgical pathology of 6.3%, with significant error rates ranging from 0.1 to 10% [13]. The seemingly wide range of error rates in surgical pathology reports can be attributed to variation among institutions in how error rates are determined and errors classified, as well as to specimen type and the construction and accuracy of the study itself [5, 13, 15].

To assist in error reduction and report accuracy, and to maintain institutional accreditation, pathologists employ auditing systems through various QA measures, which have been evaluated in numerous published studies of QA in surgical pathology. In order to operate, modern-day laboratories must adhere to a QA program compliant with federal regulation, in particular the Clinical Laboratory Improvement Amendments of 1988 (CLIA’88), under the direction of a physician laboratory director. Under CLIA’88, which established standards for all national laboratories to ensure the safety and reliability of laboratory testing, laboratories must create and abide by QA protocols and undergo inspections by accreditation agencies, such as the CAP, to ensure that protocols are followed and major deficiencies are remedied [3]. The goal of these programs is to enhance patient safety by identifying and correcting errors in the diagnostic process that would lead to patient mismanagement. In surgical pathology, there are no standard QA protocols that apply to all practices; however, pathologists employ common methods such as prospective and retrospective second reviews of cases, expert opinion on difficult cases, random or focused review of a selected percentage of cases, frozen section/permanent section correlation, cytology-histology correlation, and multidisciplinary tumor board and pathology consensus conferences. The majority of these QA measures are founded on the concept of a “second opinion” from a peer pathologist or subspecialty expert [10, 17, 21, 23, 25]. Although each method has its own benefits, with some methods detecting errors better than others [16, 17], these and other QA methods have been shown to effectively detect and reduce major diagnostic errors: the serious events that adversely affect patient care and increase medical costs.

Second opinion pathology reviews, whether pre- or post-signout, by intradepartmental or outside consultation, are commonly employed by pathology practices and are generally accepted to have a positive impact on diagnostic accuracy and concordance. Numerous studies across organ systems demonstrate their benefit in identifying errors or reaching consensus on difficult diagnoses, particularly before patient care begins. Pre-signout reviews hold the added benefit of identifying and correcting errors before pathology information is reported to clinicians. An early, large prospective study on pre-signout peer review by Whitehead et al. examined 3,000 surgical pathology cases double-read by a separate pathologist pre-signout and demonstrated a 7.8% discrepancy rate, with 12.4% of the discrepant cases classified as “significant” discrepancies [26]. A later prospective study of intra-institutional peer review of diagnostic biopsies discovered a major diagnostic error that would affect patient care in 1.2% of 2,694 biopsy specimens reviewed by a second pathologist before sign-out [8]. A 2005 study by Novis [15] retrospectively and prospectively examined intradepartmental surgical pathology error rates in a community hospital before and after implementation of a policy requiring second review of all histologic material by a separate pathologist. Reviewing all amended reports for 1 year before and 1 year after implementation, he found that the misdiagnosis rate fell from 1.3 per 1,000 reports (10 of 7,909 reports reviewed) before the pre-signout review to 0.6 per 1,000 (5 of 8,469 reports) afterward [15]. These findings are reaffirmed by recent data from the 2014 CAP Q-Probes study, which found that second review of all malignancies as a pre-signout strategy was significantly associated with a lower misinterpretation rate, as well as with fewer less significant errors such as protocol defects or labeling errors [25].
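
As a quick arithmetic check, the per-1,000 rates quoted from Novis follow directly from the reported counts:

\[
\frac{10}{7{,}909} \times 1000 \approx 1.3,
\qquad
\frac{5}{8{,}469} \times 1000 \approx 0.6 .
\]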

Studies of inter-institutional second review of outside (post-signout) pathology by expert subspecialty pathologists have yielded similar results, and mandatory second review of outside referral pathology before surgical intervention has been employed and studied at various institutions [7, 9, 22, 24]. Through this QA strategy, discrepancies in outside pathology with major diagnostic and prognostic implications are remedied before the initiation of treatment, thus preventing inappropriate therapy and reducing unnecessary medical costs [6, 7, 9, 12, 20, 22, 24]. Many breast pathology-specific studies of the benefits of inter-institutional review have been published. A recent study from Mount Sinai Medical Center looked specifically at discrepancies in breast pathology from excisional and needle core biopsies submitted as part of surgical referrals from outside facilities, with all specimens reviewed by a pathologist specializing in breast pathology. The authors found that, across 430 biopsy specimens from 306 patients, second review by an expert in breast pathology changed the diagnosis in 17% of cases, the majority being a change from one benign condition to another; in 10% of cases, however, the change in diagnosis altered the patient’s surgical management [20]. In a somewhat similar recent study from MD Anderson Cancer Center, all consultation breast pathology referral cases from a 1-year period (1,970 total cases) were examined for discrepancies between the original outside institution report and the newly issued expert report. The authors discovered a significant discrepancy, defined as a disagreement affecting patient care, in 226 (11.47%) of the cases [6]. These and other similar studies demonstrate the value of a second, expert opinion in breast and other surgical pathology cases in avoiding wrong or unnecessary treatment and in saving healthcare costs.

Finally, studies of other surgical pathology QA measures have reported similar effects on diagnostic accuracy and patient management and have found them to be useful additions to pathology QA protocols. One such method, review of pathology during multidisciplinary conferences, has been shown in various studies to identify discrepancies in breast pathology, particularly owing to the benefit of additional clinical information [1, 14]. Raab et al. studied the benefit of monitoring frozen section/permanent section discrepancies over time, utilizing CAP Q-Tracks data on 174 participating institutions from three Q-Probes studies conducted from 1999 to 2003, and found that institutions practicing long-term frozen/permanent section correlation had significantly lower discordance rates, deferral rates, and microscopic sampling errors [18].

Our overall discrepancy rate, as measured by report amendment, was 0.54% for breast and GYN specimens combined, and no serious events were catalogued. The goal of our QA program, to minimize serious events (type C errors), was accomplished through a comprehensive peer review process that also enhanced pathologist education and active participation in all facets of the program.

In summary, surgical pathology is a complex practice requiring a high level of training, expertise, and oversight to provide accurate diagnostic interpretation. Surgical pathology employs QA strategies not only to comply with federal law, but also to provide “boundaries” of diagnostic standardization that minimize sweeping variation in diagnostic accuracy, decrease diagnostic discordance, and maximize patient safety by minimizing the occurrence of serious events.

When surgical pathology is practiced in an environment of QA oversight and assistance, and not in a vacuum, as in some studies whose published error rates derive from misrepresentative study models [4], discrepancy rates are reduced, patient safety is heightened, and major diagnostic disagreements that could affect patient management for breast or gynecologic pathology diagnoses are distinctly uncommon. When surgical pathology is practiced in a laboratory utilizing comprehensive quality assurance protocols, major diagnostic interpretation errors are infrequent. Such practice minimizes error, maximizes patient safety, and maximizes educational opportunities for pathologists.

Conclusion

This study describes quality assurance and peer review practices in an academic women’s hospital that maximize patient safety and minimize serious diagnostic errors. These processes help ensure that pathologic diagnoses are accurate for proper patient care.

References

  1. Chang J, et al. The impact of a multidisciplinary breast cancer center on recommendations for patient management: the University of Pennsylvania experience. Cancer. 2001;91(7):1231–7.

  2. Chaudhary S, Kahn L, Bhuiya T. Retrospective blinded review of interpretational diagnostic discrepancies in surgical pathology: 18 years of experience at a tertiary care facility. Ann Clin Lab Sci. 2014;44(4):469–75.

  3. Clinical Laboratory Improvement Amendments of 1988. October 31, 1988. 102 Stat. 2903, Public Law 100–578.

  4. Elmore J, et al. Diagnostic concordance among pathologists interpreting breast biopsy specimens. JAMA. 2015;313(11):1122–32.

  5. Frable W. Surgical pathology – second reviews, institutional reviews, audits, and correlations: What’s out there? Error or diagnostic variation? Arch Pathol Lab Med. 2006;130:620–5.

  6. Khazai L, et al. Breast pathology second review identifies clinically significant discrepancies in over 10% of patients. J Surg Oncol. 2015;111:197.

  7. Kronz J, Westra W, Epstein J. Mandatory second opinion surgical pathology at a large referral hospital. Cancer. 1999;86(11):2426–35.

  8. Lind A, et al. Prospective peer review in surgical pathology. Am J Clin Pathol. 1995;104(5):560–6.

  9. Manion E, Cohen M, Weydert J. Mandatory second opinion in surgical pathology referral material: clinical consequences of major disagreements. Am J Surg Pathol. 2008;32(5):732–7.

  10. Maxwell S, Raab S. Directed peer review in surgical pathology. Adv Anat Pathol. 2012;19(5):331–7.

  11. Meier F, et al. Development and validation of a taxonomy of defects. Am J Clin Pathol. 2008;130:238–46.

  12. Middleton L, et al. Second-opinion pathologic review is a patient safety mechanism that helps reduce error and decrease waste. J Oncol Pract. 2014;10(4):1–7.

  13. Nakhleh R, Nosé V, Colasacco C, et al. Interpretive diagnostic error reduction in surgical pathology and cytology. Arch Pathol Lab Med. 2016;140:29–40.

  14. Newman E, et al. Changes in surgical management resulting from case review at a breast cancer multidisciplinary tumor board. Cancer. 2006;107(10):2346–51.

  15. Novis D. Routine review of surgical pathology cases as a method by which to reduce diagnostic error in a community hospital. Pathol Case Rev. 2005;10(2):63–7.

  16. Raab S, et al. Effectiveness of random and focused review in detecting surgical pathology error. Am J Clin Pathol. 2008;130:905–12.

  17. Raab S, Nakhleh R, Ruby S. Patient safety in anatomic pathology: measuring discrepancy frequencies and causes. Arch Pathol Lab Med. 2005;129(4):459–66.

  18. Raab S, et al. The value of monitoring frozen section-permanent section correlation data over time. Arch Pathol Lab Med. 2006;130:337–42.

  19. Renshaw A, Gould E. Measuring the value of review of pathology material by a second pathologist. Am J Clin Pathol. 2006;125:737–9.

  20. Romanoff A, et al. Breast pathology review: does it make a difference? Ann Surg Oncol. 2014;21:3504–8.

  21. Roy J, Hunt J. Detection and classification of diagnostic discrepancies (errors) in surgical pathology. Adv Anat Pathol. 2010;17:359–65.

  22. Soofi Y, Khoury T. Inter-institutional pathology consultation: the importance of breast pathology subspecialization in a setting of tertiary cancer center. Breast J. 2015;21(4):334–44.

  23. Tomaszewski J, et al. Consensus conference on second opinion in diagnostic anatomic pathology: who, what, and when. Am J Clin Pathol. 2000;114:329–35.

  24. Tsung J. Institutional pathology consultation. Am J Surg Pathol. 2004;28(3):388–402.

  25. Volmar K, et al. Surgical pathology report defects: a College of American Pathologists Q-Probes study of 73 institutions. Arch Pathol Lab Med. 2014;138:602–12.

  26. Whitehead M, et al. Quality assurance of histopathologic diagnoses: a prospective audit of three thousand cases. Am J Clin Pathol. 1984;81:478–91.


Acknowledgements

Not applicable.

Funding

There was no funding source for this study. This quality study was approved by the Quality Committee of the University of Pittsburgh Medical Center.

Availability of data and material

Quality data were collated by author Abbie Mallon.

Authors’ contributions

DD (concept, data acquisition, data analysis, writing manuscript); CS (data analysis, writing manuscript); AM (data collation/acquisition, final review, writing manuscript). All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not Applicable.

Ethics approval and consent to participate

Not Applicable.

Author information

Corresponding author

Correspondence to David J. Dabbs.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Dabbs, D.J., Stoos, C.T. & Mallon, A. Quality assurance and patient safety protocols for breast and gynecologic pathology in an Academic Women’s Hospital. Appl Cancer Res 36, 3 (2016). https://doi.org/10.1186/s41241-016-0002-8
