Physicians and advanced practice clinicians may order comparable amounts of guideline-discordant and low-value tests and treatments for 3 conditions commonly seen in primary care, a recent study found.

Researchers used data from the National Ambulatory Medical Care Survey and the National Hospital Ambulatory Medical Care Survey from 1997 to 2011 to compare the management of upper respiratory infections, back pain, and headache by physicians and advanced practice clinicians (nurse practitioners and physician assistants). Results were published online on June 20 by Annals of Internal Medicine.

The study included 28,949 primary care visits for the 3 conditions (25,529 physician visits and 3,420 advanced practice clinician visits). About 90% of the data came from physicians' office visits, and the rest came from clinicians practicing in hospital-based outpatient clinics. Office-based physicians and advanced practice clinicians saw similar patients, whereas hospital-based advanced practice clinicians, compared with their physician counterparts, treated younger patients (mean age, 42.6 vs. 45.0 years; P<0.001) and delivered care in an urban setting less frequently (49.7% vs. 81.7% of visits; P<0.001).

In both settings, advanced practice clinicians were not significantly more likely than physicians to order antibiotics, CT/MRI, or radiography, or to refer patients to other physicians for the 3 conditions, according to unadjusted analyses. Practice patterns in both settings remained consistent after researchers adjusted for multiple variables. Hospital-based advanced practice clinicians did order more antibiotics (52.8% vs. 46.0%; P=0.043) and made more referrals (11.8% vs. 8.3%; P=0.018) than hospital-based physicians, although only the referral difference remained significant in a sensitivity analysis.

The study authors noted limitations to their analysis, such as how the data samples may exclude visits to more independent advanced practice clinicians (thereby underrepresenting their care) and how some "low-value" care may have been justified. They also lacked longitudinal patient data and could not account for variations in state-level scope-of-practice laws.

The study raises questions in light of previous research, which has suggested that advanced practice clinicians are more likely than primary care physicians to order tests or refer to specialists, according to an accompanying editorial. "Of course, it is plausible that any greater clinical expertise conferred by the physician's additional training and clinical experience is not relevant to efficient and appropriate care of these common health concerns," the editorialist wrote. "However, it also may be the case that the numerous other sources of variation in use of low-value services are sufficient to render underlying differences due to training and experience undetectable in these data."

The U.S. Preventive Services Task Force (USPSTF) issued an updated recommendation last week on screening for colorectal cancer in average-risk asymptomatic adults.

To update its 2008 recommendation on this topic, the USPSTF reviewed the evidence on the effectiveness of several available methods (including colonoscopy, flexible sigmoidoscopy, computed tomography colonography, the guaiac-based fecal occult blood test, the fecal immunochemical test, the multitargeted stool DNA test, and the methylated SEPT9 DNA test) and screening strategies in reducing the incidence of and mortality from colorectal cancer or all-cause mortality. It also looked at evidence on possible harms of screening tests and their ability to detect adenomatous polyps, advanced adenomas based on size, or both, as well as colorectal cancer. The USPSTF recommendation statement was published online June 15 by JAMA.

The USPSTF noted convincing evidence that screening for colorectal cancer with several different methods can accurately detect early-stage colorectal cancer and adenomatous polyps. Although single test performance was an important issue, the sensitivity of the test over time is more important in an ongoing screening program, the Task Force stated.

The USPSTF also found convincing evidence that screening for colorectal cancer in adults ages 50 to 75 years reduces colorectal cancer mortality and recommended screening for this age group. The USPSTF found no head-to-head studies demonstrating that any of the screening strategies it considered are more effective than others, although the tests had varying levels of evidence supporting their effectiveness, as well as different strengths and limitations. No method of screening for colorectal cancer was shown to reduce all-cause mortality in any age group.

The benefit of early detection of and intervention for colorectal cancer declines after age 75, the USPSTF found. It noted that older adults who have been previously screened for colorectal cancer will receive at best a moderate benefit from continued screening from ages 76 to 85, but those in this age group who have never been screened are more likely to benefit than those who have. Therefore, the USPSTF recommended that the decision to screen for colorectal cancer in adults ages 76 to 85 years should be made on an individual basis, taking into account whether the patient is healthy enough to be treated if colorectal cancer is detected and any comorbid conditions that would significantly limit life expectancy.

In addition, the Task Force commissioned a microsimulation modeling study, also published online June 15, of a previously unscreened population undergoing colorectal cancer screening. The goal of the modeling study was to provide information on optimal starting and stopping ages and screening intervals across screening methods. The model, which assumed 100% adherence, concluded that the strategies of colonoscopy every 10 years, annual fecal immunochemical testing, sigmoidoscopy every 10 years with annual fecal immunochemical testing, and computed tomographic colonography every 5 years from ages 50 to 75 years provided similar life-years gained and a comparable balance of benefit and screening burden.

The USPSTF recommendation statement concluded that nearly 1 in 3 adults does not get screened for colorectal cancer. "For colorectal cancer screening programs to be successful in reducing mortality, they need to involve more than just the screening method in isolation," the report stated. "Screening is a cascade of activities that must occur in concert, cohesively, and in an organized way for benefits to be realized, from the point of the initial screening examination (including related interventions or services that are required for successful administration of the screening test, such as bowel preparation or sedation with endoscopy) to the timely receipt of any necessary diagnostic follow-up and treatment."

An accompanying editorial noted that the USPSTF appears to be saying that some tests are better than others but does not specify a preference. "How can tests differ and yet be the same in the eyes of the task force?" the editorial asked. "In the Recommendation Statement, the task force states a principle that may explain this paradox: 'the best screening test is the one that gets performed.' A test can rank low when tested on a representative population but still be better aligned with an individual patient's preferences and, therefore, be most likely to get done. Thus, to choose among the screening strategies, the USPSTF recommends shared decision making, a process in which physician and patient share information and reach a consensus about what screening test is best for the patient."

Earlier this year, the Canadian Task Force on Preventive Health Care suggested colorectal cancer screening with a stool test every 2 years or sigmoidoscopy every decade among average-risk adults ages 50 to 74. Read more about comparisons and contrasts between guidelines in ACP Internist's June issue.

Ga Eun Park, MD; Jae-Hoon Ko, MD; Kyong Ran Peck, MD, PhD; Ji Yeon Lee, MD; Ji Yong Lee, MD; Sun Young Cho, MD; Young Eun Ha, MD; Cheol-In Kang, MD, PhD; Ji-Man Kang, MD; Yae-Jean Kim, MD, PhD; Hee Jae Huh, MD, PhD; Chang-Seok Ki, MD, PhD; Nam Yong Lee, MD, PhD; Jun Haeng Lee, MD, PhD; Ik Joon Jo, MD, PhD; Byeong-Ho Jeong, MD; Gee Young Suh, MD, PhD; Jinkyeong Park, MD; Chi Ryang Chung, MD, PhD; Jae-Hoon Song, MD, PhD; and Doo Ryeon Chung, MD, PhD

Background: In 2015, a large outbreak of Middle East respiratory syndrome (MERS) occurred in the Republic of Korea. Half of the cases were associated with a tertiary care university hospital.

Objective: To document the outbreak and successful control measures.

Design: Descriptive study.

Setting: A 1950-bed tertiary care university hospital.

Patients: 92 patients with laboratory-confirmed MERS and 9793 exposed persons.

Measurements: Description of the outbreak, including a timeline, and evaluation of the effectiveness of the control measures.

Results: During the outbreak, 92 laboratory-confirmed MERS cases were associated with a large tertiary care hospital, 82 of which originated from unprotected exposure to 1 secondary patient. Contact tracing and monitoring of exposed patients and assigned health care workers were at the core of the control measures in the outbreak. Nontargeted screening measures, including body temperature screening among employees and visitors at hospital gates, monitoring patients for MERS-related symptoms, chest radiographic screening, and employee symptom monitoring, did not detect additional patients with MERS without existing transmission links. All in-hospital transmissions originated from 3 patients with MERS who also had pneumonia and productive cough.

Limitations: This was a retrospective single-center study. Statistical analysis could not be done. Because this MERS outbreak originated from a superspreader, effective control measures could differ in endemic areas or in other settings.

Conclusion: Control strategies for MERS outbreaks should focus on tracing contacts of persons with epidemiologic links. Adjusting levels of quarantine and personal protective equipment according to the assumed infectivity of each patient with MERS may be appropriate.

Primary Funding Source: Samsung Biomedical Research Institute.

Adrian Reuben, MBBS; Holly Tillman, MS; Robert J. Fontana, MD; Timothy Davern, MD; Brendan McGuire, MD; R. Todd Stravitz, MD; Valerie Durkalski, PhD; Anne M. Larson, MD; Iris Liou, MD; Oren Fix, MD; Michael Schilsky, MD; Timothy McCashland, MD; J. Eileen Hay, MBBS; Natalie Murray, MD; Obaid S. Shaikh, MD; Daniel Ganger, MD; Atif Zaman, MD; Steven B. Han, MD; Raymond T. Chung, MD; Alastair Smith, MB, ChB; Robert Brown, MD; Jeffrey Crippin, MD; M. Edwyn Harrison, MD; David Koch, MD; Santiago Munoz, MD; K. Rajender Reddy, MD; Lorenzo Rossaro, MD; Raj Satyanarayana, MD; Tarek Hassanein, MD; A. James Hanje, MD; Jody Olson, MD; Ram Subramanian, MD; Constantine Karvellas, MD; Bilal Hameed, MD; Averell H. Sherker, MD; Patricia Robuck, PhD; and William M. Lee, MD

Background: Acute liver failure (ALF) is a rare syndrome of severe, rapid-onset hepatic dysfunction—without prior advanced liver disease—that is associated with high morbidity and mortality. Intensive care and liver transplantation provide support and rescue, respectively.

Objective: To determine whether changes in causes, disease severity, treatment, or 21-day outcomes have occurred in recent years among adult patients with ALF referred to U.S. tertiary care centers.

Design: Prospective observational cohort study. (ClinicalTrials.gov: NCT00518440)

Setting: 31 liver disease and transplant centers in the United States.

Patients: Consecutively enrolled patients—without prior advanced liver disease—with ALF (n = 2070).

Measurements: Clinical features, treatment, and 21-day outcomes were compared over time annually for trends and were also stratified into two 8-year periods (1998 to 2005 and 2006 to 2013).

Results: Overall clinical characteristics, disease severity, and distribution of causes remained similar throughout the study period. The 21-day survival rates increased between the two 8-year periods (overall, 67.1% vs. 75.3%; transplant-free survival [TFS], 45.1% vs. 56.2%; posttransplantation survival, 88.3% vs. 96.3% [P < 0.010 for each]). Reductions in red blood cell infusions (44.3% vs. 27.6%), plasma infusions (65.2% vs. 47.1%), mechanical ventilation (65.7% vs. 56.1%), and vasopressors (34.9% vs. 27.8%) were observed, as well as increased use of N-acetylcysteine (48.9% vs. 69.3% overall; 15.8% vs. 49.4% [P < 0.001] in patients with ALF not due to acetaminophen toxicity). When examined longitudinally, overall survival and TFS increased throughout the 16-year period.

Limitations: The duration of enrollment, the number of patients enrolled, and possibly the approaches to care varied among participating sites. The results may not be generalizable beyond such specialized centers.

Conclusion: Although characteristics and severity of ALF changed little over 16 years, overall survival and TFS improved significantly. The effects of specific changes in intensive care practice on survival warrant further study.

Primary Funding Source: National Institutes of Health.