Retrospective review of our combined QA data for GYN and breast specimen reports from 2012 to 2014 demonstrates a low diagnostic discrepancy rate (0.54%), with the most common error being a Type A, or Minor Disagreement, a spelling or formatting error within the report. When a diagnostic error did occur, the effect on patient care was minimal, and there were no instances of major diagnostic discrepancies. We credit this low discrepancy rate, in part, to the comprehensive QA measures in place at our institution.
One may question the lack of serious events in this study as a weakness, citing the possibility that if every reviewer misinterprets a case, the misinterpretation would never be perceived as an error. While such a scenario is possible, it is extremely unlikely given the multi-faceted, comprehensive approach of the described peer review process. The peer review redundancies described herein provide a system of checks and balances that is difficult to circumvent, and they appear to have prevented serious events.
Discrepancies in surgical pathology (as in all other medical fields) exist, even when the utmost care is put into rendering a diagnosis. Because the interpretation of a histologic specimen is “more subjective” than a standard clinical laboratory test, factors such as the pathologist’s experience, the clinical information provided with a case, and the use of ancillary studies can contribute to variation in the accuracy of a diagnosis [13]. Diagnostic error has been extensively studied and categorized in various ways in the literature, and studies of discrepancies in surgical pathology reports demonstrate a range of error rates, with certain organ systems, such as skin, breast, and bone and soft tissue, having an overall higher rate of disagreement than others [16, 17, 25]. In 2014, the CAP published its 2011 Q-probes study data, which prospectively examined post-signout changes to surgical pathology reports from 73 institutions over a 3-month span to establish benchmarks for error rates in surgical pathology. Defects were classified using the error taxonomy suggested by Meier et al. [11]. In this study, 1,688 report defects were discovered among the 360,218 reports reviewed, yielding an overall defect rate of 0.47% [25]. While over half of these report errors were classified as “other defects,” which mainly included typographical or dictation errors, misinterpretation errors accounted for 14.6% of the overall report errors and were found most commonly in skin and breast specimens [25]. More recently, a large literature review of 137 published articles on interpretive errors in surgical pathology and cytology, conducted by the CAP, demonstrated a median major discrepancy rate in surgical pathology of 6.3%, with significant error rates ranging from 0.1 to 10% [13]. The seemingly wide range of error rates in surgical pathology reports can be attributed to variation among institutions in the determination of error rates and the classification of errors, as well as to the specimen type and the construction and accuracy of the study itself [5, 13, 15].
To assist in error reduction and report accuracy, and to maintain institutional accreditation, pathologists employ auditing systems through various QA measures, which have been evaluated in numerous published studies of QA in surgical pathology. In order to operate, modern-day laboratories must adhere to a QA program compliant with federal regulation, in particular the Clinical Laboratory Improvement Amendments of 1988 (CLIA’88), under the direction of a physician laboratory director. Under CLIA’88, which established standards for all national laboratories to ensure the safety and reliability of laboratory testing, laboratories must create and abide by QA protocols and undergo inspections by accreditation agencies, such as the CAP, to ensure that protocols are followed and major deficiencies are remedied [3]. The goal of these programs is to enhance patient safety by identifying and correcting errors in the diagnostic process that could lead to patient mismanagement. In surgical pathology, no standard QA protocol exists for all practices; however, pathologists commonly employ methods such as prospective and retrospective second review of cases, expert opinion on difficult cases, random or focused review of a selected percentage of cases, frozen section/permanent section correlation, cytology-histology correlation, and multi-discipline tumor board and pathology consensus conferences. The majority of these QA measures are founded on the concept of a “second opinion” by a peer pathologist or subspecialty expert when assessing a diagnosis [10, 17, 21, 23, 25]. Although each method has its own benefits, with error detection by some methods being superior to others [16, 17], these and other QA methods have been studied and shown to effectively detect and reduce major diagnostic errors, the serious events that adversely affect patient care and increase medical care costs.
Second opinion pathology reviews, whether pre- or post-signout and whether by intradepartmental or outside consultation, are commonly employed by pathology practices and are generally accepted to have a positive impact on diagnostic accuracy and concordance. Numerous studies across various organ systems demonstrate benefit through the identification of errors or the achievement of consensus on difficult diagnoses, particularly before patient care begins. Pre-signout reviews offer the added benefit of identifying and correcting errors before pathology information is reported to clinicians. An early, large prospective study on pre-signout peer review by Whitehead et al. examined 3,000 surgical pathology cases that were double-read by a separate pathologist before sign-out and demonstrated a 7.8% discrepancy rate, with 12.4% of the discrepant cases classified as “significant” discrepancies [26]. A later prospective study of intra-institutional peer review of diagnostic biopsies discovered a major diagnostic error that would have affected patient care in 1.2% of the 2,694 biopsy specimens reviewed by a second pathologist before sign-out [8]. A 2005 study by Novis [15] retrospectively and prospectively examined surgical pathology intra-departmental error rates in a community hospital setting before and after implementation of a policy requiring a second review of all histologic material by a separate pathologist. By reviewing all amended reports for 1 year before and 1 year after implementation of this policy, he found that the misdiagnosis rate decreased from 1.3 per 1,000 reports (10 of 7,909 reports reviewed) before implementation of the pre-signout review to 0.6 per 1,000 reports (5 of 8,469 reports) after implementation of the policy [15]. These findings are reaffirmed by the recent data from the 2014 CAP Q-probes study, which found that second review of all malignancies as a pre-signout strategy was significantly associated with a lower misinterpretation rate and was also associated with lower rates of less significant errors, such as protocol defects and labeling errors [25].
Studies of the benefit of inter-institutional second review of outside (post-signout) pathology by expert subspecialty pathologists have yielded similar results, and mandatory second review of outside referral pathology cases before surgical intervention has been employed and studied by various institutions [7, 9, 22, 24]. Through this QA strategy, discrepancies in outside pathology with major diagnostic and prognostic implications are remedied before the initiation of treatment, thus preventing inappropriate therapy and reducing unnecessary medical costs [6, 7, 9, 12, 20, 22, 24]. Many breast pathology-specific studies on the benefits of inter-institutional review have been published. A recent study from Mount Sinai Medical Center looked specifically at discrepancies in breast pathology from excisional and needle core biopsies submitted as part of a surgical referral from an outside facility, with all specimens reviewed by a pathologist specializing in breast pathology. The authors found that, among 430 biopsy specimens from 306 patients, second review by an expert in breast pathology led to a change in diagnosis in 17% of cases, the majority of which were a change from one benign condition to another; however, in 10% of cases, the change in diagnosis altered the surgical management of the patient [20]. In a recent, somewhat similar study from MD Anderson Cancer Center, all consultation breast pathology referral cases from a 1-year period (1,970 total cases) were examined for discrepancies between the original outside institution report and the newly issued expert report. The authors discovered a significant discrepancy, defined as a disagreement affecting patient care, in 226 (11.47%) of the cases [6]. These and other similar studies demonstrate the value of a second, expert opinion in breast and other surgical pathology cases in avoiding wrong or unnecessary treatment and in reducing healthcare costs.
Finally, studies of other surgical pathology QA measures have reported similar effects on diagnostic accuracy and patient management, and these measures have been found to be useful additions to pathology QA protocols. One such method, review of pathology during multi-discipline conferences, was shown in various studies to identify discrepancies in breast pathology, particularly owing to the benefit of additional clinical information [1, 14]. Raab et al. studied the benefit of monitoring frozen section/permanent section discrepancies over time by utilizing CAP Q-Tracks data on 174 participating institutions from 3 Q-probes studies conducted between 1999 and 2003, and found that institutions practicing long-term frozen/permanent section correlation had significantly lower discordance rates, deferral rates, and microscopic sampling error rates [18].
Our overall discrepancy rate, as measured by report amendment, was 0.54% for breast and GYN specimens combined, and no serious events were catalogued. The goal of our QA program, to minimize serious events (Type C errors), was accomplished through a comprehensive peer review process that also enhanced pathologist education and fostered active participation in all facets of the program.
In summary, surgical pathology is a complex practice requiring a high level of training, expertise, and oversight to provide accurate diagnostic interpretation. Surgical pathology employs QA strategies not only to comply with federal law, but also to provide “boundaries” of diagnostic standardization that help minimize sweeping variation in diagnostic accuracy, decrease diagnostic discordance, and maximize patient safety by minimizing the occurrence of serious events.
When surgical pathology is practiced in an environment of QA oversight and assistance, and not in a vacuum, as in some studies in which published error rates are derived from unrepresentative study models [4], discrepancy rates are reduced, patient safety is heightened, and major diagnostic disagreements that could affect patient management for breast or gynecologic diagnoses are distinctly uncommon. In a laboratory utilizing comprehensive quality assurance protocols, major diagnostic interpretation errors are infrequent; such practice minimizes error, maximizes patient safety, and maximizes educational opportunities for pathologists.