AAOS Bulletin - June, 2006

Benchmarking in academic orthopaedic surgery

Initial results of groundbreaking study now available

By James J. Hamilton, MD

Last fall, the AAOS agreed to support the Faculty Practice Solutions Center (FPSC) in developing subspecialty benchmarks for academic orthopaedic surgery. This cooperative effort resulted in a quantum leap forward in the analysis of academic clinical work benchmarks.

Background

Previous efforts by various groups—including the Medical Group Management Association (MGMA)—to develop orthopaedic subspecialty benchmarks were problematic for two reasons. First, the cell sizes for many of the subspecialties were quite small, making extrapolation of the data to a larger population difficult. Second, these efforts did not rely on “hard data,” but on surveys.

After researching available resources, the Academy’s Academic Practice and Business Management Committee realized that developing reliable, ongoing benchmarks would require a partner that had a system in place to annually collect and process the data. The decision to partner with FPSC—a joint initiative of the University Health System Consortium (UHC) and the Association of American Medical Colleges—was based on the extent of participation (59 participating academic orthopaedic departments representing more than 1,200 orthopaedic surgeons) and the rigorous methodology used by participants to develop benchmarks.

The first set of benchmarks—for 2004—is now available. Benchmarks for 2005 will be released this fall. Because these benchmarks are limited to data from academic practices, they are not necessarily applicable to private practices. They are, however, the best and most reliable benchmarks for academic orthopaedic subspecialties currently available.

Methodology

The AAOS first identified 10 subspecialties for benchmark calculations: general orthopaedics, spine, sports, joint, hand, pediatrics, foot and ankle, shoulder and elbow, oncology, and trauma. Each FPSC orthopaedic surgeon was then assigned to a subspecialty, and the percentage of his or her time spent in clinical activity was determined.

To develop meaningful benchmarks, the time spent to produce a volume of work must be factored into the calculation. In calculating an average, for example, we cannot simply add the RVUs of a physician who spends 100 percent of his time in clinic to the RVUs of a physician who spends 75 percent of her time in clinic and divide by two. To calculate an accurate average, we must extrapolate the RVUs of the part-time clinician to the level they would reach if that physician were in clinic full-time. Only after the RVUs of all participating physicians are extrapolated to 100 percent clinical effort is it possible to establish an average for a full-time clinician.
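A minimal sketch of this normalization step, written in Python with hypothetical field names and figures (it is not the FPSC’s actual code), illustrates the arithmetic:

```python
# A minimal sketch of the normalization described above.
# Field names and RVU figures are illustrative, not FPSC data.

def normalize_rvus(raw_rvus: float, clinical_fte: float) -> float:
    """Extrapolate a surgeon's RVUs to 100 percent clinical effort."""
    if not 0.0 < clinical_fte <= 1.0:
        raise ValueError("clinical FTE must be in (0, 1]")
    return raw_rvus / clinical_fte

# Two surgeons: one full-time in clinic, one at 75 percent clinical effort.
surgeons = [
    {"raw_rvus": 6000.0, "clinical_fte": 1.00},
    {"raw_rvus": 4500.0, "clinical_fte": 0.75},
]

normalized = [normalize_rvus(s["raw_rvus"], s["clinical_fte"]) for s in surgeons]
# Naive average of raw RVUs: (6000 + 4500) / 2 = 5250 -- understates output.
# Average of normalized RVUs: (6000 + 6000) / 2 = 6000 -- a full-time figure.
mean_full_time = sum(normalized) / len(normalized)
```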

The FPSC had already determined that benchmark data should be based on surgeons who spent more than 60 percent of their time in clinical activity. Data from clinicians with less clinical activity tend to skew the benchmarks to a higher level, because extrapolating a small clinical fraction to the 100 percent level greatly amplifies that surgeon’s numbers. This study, therefore, is based on RVU data from physicians who spent more than 60 percent of their time in clinic, normalized to the 100 percent level.

Responses were tabulated for 51 of the 59 participating orthopaedic departments, representing a total of 758 orthopaedic surgeons. This effectively represents a response rate of 100 percent, because the remaining eight institutions either had not collected data for the entire year or had not maintained their information in a form that the FPSC could aggregate with information from other institutions. The FPSC then calculated mean and median subspecialty benchmarks using the RVU information submitted by participating institutions.
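Taken together, the threshold, normalization and tabulation steps might look like the following sketch. The record layout and figures are hypothetical; the FPSC’s actual processing system is not reproduced here.

```python
# A sketch of the benchmark tabulation: keep surgeons above the 60 percent
# clinical-effort threshold, normalize to full-time, then take the mean and
# median by subspecialty. Record layout and figures are hypothetical.
from collections import defaultdict
from statistics import mean, median

records = [
    {"subspecialty": "Spine", "raw_rvus": 7200.0, "clinical_fte": 0.80},
    {"subspecialty": "Spine", "raw_rvus": 9800.0, "clinical_fte": 1.00},
    {"subspecialty": "Hand",  "raw_rvus": 4300.0, "clinical_fte": 0.50},  # excluded
]

normalized_by_spec = defaultdict(list)
for r in records:
    if r["clinical_fte"] > 0.60:  # FPSC threshold: more than 60 percent clinical
        normalized_by_spec[r["subspecialty"]].append(r["raw_rvus"] / r["clinical_fte"])

benchmarks = {
    spec: {"n": len(rvus), "mean_rvu": mean(rvus), "median_rvu": median(rvus)}
    for spec, rvus in normalized_by_spec.items()
}
# benchmarks["Spine"] -> {"n": 2, "mean_rvu": 9400.0, "median_rvu": 9400.0}
```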

The data were downloaded directly from the institutions’ accounting, insurance billing and/or practice management systems. Thus, they are “hard data” rather than estimates. These data, after all, form the basis for the institutions’ insurance billings and statement generation.

Results

As previously noted, other academic orthopaedic subspecialty benchmarks have been based on very low total numbers. The FPSC total of 553 orthopaedic surgeons who spend “greater than 60 percent” of their time in clinic is nearly four times the number represented in the 2004 MGMA survey. Furthermore, each subspecialty includes at least 23 physicians, and a few include more than 75. Never before has subspecialty information been based on such a large and diverse sample.

Table 1 contains the mean and median RVUs for each of the 10 subspecialties. The ranges are striking: a mean of 5,906 RVUs for general orthopaedics versus a mean of 9,683 RVUs for spine specialists. Some significant changes from previous benchmarks—especially in general orthopaedics, hand and trauma—are obvious. Although additional research needs to be conducted, two factors quickly come to mind to explain these changes. First, the rigorous data collection protocols and large sample sizes used in this study can make a significant difference in the final levels seen for each of the subspecialties. Second, previous studies may have aggregated several operative subspecialties under “general orthopaedics,” artificially raising the production of the “generalist.”

Another analysis reveals the unique “clinical fingerprint,” based on the CPT codes used, of each subspecialty. These data indicate it is possible to identify a doctor as belonging to a given subspecialty by virtue of the procedures that he or she performs, even in the absence of a designation on the part of the department chair. To view the table documenting this clinical fingerprint, visit the “Academic Focus” section of the AAOS online Practice Management Center.
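One simple way to operationalize such a fingerprint, offered purely as an illustration (the article does not describe the FPSC’s actual method), is to treat each subspecialty’s CPT-code frequencies as a profile vector and assign a surgeon to the most similar profile:

```python
# An illustrative "clinical fingerprint" classifier: compare a surgeon's
# CPT-code counts to per-subspecialty frequency profiles by cosine similarity.
# The profile values are made up; this is not the FPSC's method.
from math import sqrt

def cosine(a: dict, b: dict) -> float:
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

profiles = {  # hypothetical CPT-frequency profiles per subspecialty
    "Joint": {"27447": 0.6, "27130": 0.3, "99213": 0.1},
    "Spine": {"22612": 0.5, "63047": 0.4, "99213": 0.1},
}

surgeon_codes = {"27447": 40, "27130": 22, "99213": 90}  # one surgeon's counts
best_match = max(profiles, key=lambda s: cosine(profiles[s], surgeon_codes))
print(best_match)  # -> "Joint"
```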

Future benchmarks will be published as soon as they are completed, beginning with the 2005 results this fall. In addition, the AAOS-UHC agreement requires the FPSC to undertake 100 hours of analysis between now and December 31, 2008. The subjects of these analyses will be determined by the new Practice Management Committee.

Qualifications

The data used in this project warrant discussion. Each surgeon’s RVU productivity was derived from the actual CPT codes submitted by that surgeon for billing and electronically transmitted to the FPSC. Each institution was responsible for the accuracy and completeness of its code submissions.

The FPSC then tabulated work RVUs for each CPT code. Because many CPT codes have no assigned work RVUs, a methodology was established to assign an RVU value particular to each individual institution.

Specifically, each institution’s charges for individual CPT codes were analyzed across a range of codes in relation to the work RVUs attached to those codes. This yielded a charge/RVU ratio specific to the individual institution. Submitted charges for services without designated RVUs were then divided by that charge/RVU ratio to create an RVU value for each such code.
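A minimal sketch of this imputation, using hypothetical charge and RVU figures, might read:

```python
# A sketch of the institution-specific RVU imputation described above.
# Charges and RVU values are hypothetical.

# CPT codes with assigned work RVUs establish the institution's charge/RVU ratio.
valued_services = [
    {"cpt": "27447", "charge": 2400.0, "work_rvu": 20.0},
    {"cpt": "99213", "charge": 120.0,  "work_rvu": 1.0},
]
charge_per_rvu = sum(s["charge"] for s in valued_services) / sum(
    s["work_rvu"] for s in valued_services
)  # 2520 / 21 = 120 dollars of charges per work RVU

# A code with no assigned work RVU gets one imputed from its submitted charge.
unvalued_charge = 300.0
imputed_rvu = unvalued_charge / charge_per_rvu  # 300 / 120 = 2.5 work RVUs
```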

This ensured that all billed services had a work RVU included in the total work effort of a physician. Many analyses and benchmark resources do not take this step, resulting in lower productivity or, at a minimum, skewed results.

The FPSC methodology also took modifiers for multiple procedures into account and adjusted the attached work RVUs accordingly. Because many analyses and benchmarking resources do not take this step, variations occur between the data in this study and that in other sources.
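As an illustration only: a common payment convention, assumed here because the article does not specify the FPSC’s exact adjustment factors, reduces the work RVUs of secondary procedures billed with modifier -51 by 50 percent.

```python
# An illustrative multiple-procedure adjustment. The 50 percent reduction for
# modifier -51 procedures is a common payment convention assumed here; the
# FPSC's actual adjustment factors are not described in the article.

def adjusted_work_rvus(procedures: list[dict]) -> float:
    total = 0.0
    for p in procedures:
        factor = 0.5 if "51" in p.get("modifiers", []) else 1.0
        total += p["work_rvu"] * factor
    return total

case = [
    {"cpt": "27447", "work_rvu": 20.0, "modifiers": []},      # primary procedure
    {"cpt": "27570", "work_rvu": 4.0,  "modifiers": ["51"]},  # secondary, reduced
]
print(adjusted_work_rvus(case))  # 20.0 + 2.0 = 22.0
```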

The AAOS believes that the FPSC methodology provides significantly more accurate data than did previous studies. In reviewing the data, however, readers should recognize that differences in methodology between studies are likely and appreciate that accurate comparisons can only be made between data contained within this study. Furthermore, there is no allowance for any activities (lectures, research, medical student education) that do not result in billings. Comparing any of these benchmarks to the production of an individual surgeon is only valid if the surgeon is involved in clinical activity 100 percent of the time and all clinical activity is accurately billed.

As can be seen from Table 1, our data show significant variations in work RVUs among subspecialties. This variation demonstrates why accurate, reliable subspecialty benchmarks are needed. From an academic practice/business management viewpoint, it is critical to know the expected work output of a physician to develop budgets, establish work levels and make projections.

Perspective

The benchmarks established by this study are specific to billed work and each is based on a single subspecialty area. What any individual practitioner would reasonably produce depends on many factors not considered by these benchmarks.

For example, the physical place where the work is performed could influence production. A sports medicine surgeon working in a single-specialty boutique hospital will probably have a higher productivity than one working in a city or county hospital. Having a dedicated scrub nurse or operating room team will alter case and turnover times. The degree of resident and fellow involvement in a case also alters a surgeon’s productivity.

Likewise, a patient’s socioeconomic status and education level will affect the time needed for office visits. Finally, the number of ancillary staff (such as nurse practitioners, physician assistants and clinical nurses) and examination rooms available for an individual physician’s use during clinic will significantly affect the clinic’s throughput.

Most important, however, the “value” of an individual physician should not be measured solely by RVU production. In the academic setting, teaching, administrative responsibilities, research, publications and national leadership are all significant additional factors.

Summary

The AAOS and the FPSC have established what we feel are the best and most reliable benchmarks for academic orthopaedic subspecialties currently available. These benchmarks should only be used in comparison with other benchmarks using the same methodology. Care should be exercised in the application of these data to individual situations.

In the future, orthopaedic billing data will be automatically downloaded from FPSC participants and analyzed according to this study’s methodology. This will result in a continuing, annual update of the FPSC orthopaedic subspecialty benchmarks. Academic orthopaedic departments that are not members of the FPSC should consider joining so that their data can be included and their faculty members’ production can be compared directly to the benchmarks.

The AAOS and the FPSC extend our appreciation to the chairs of the orthopaedic departments of all the participating institutions for their willingness to provide data for this landmark initiative.

James J. Hamilton, MD, previously chaired the AAOS Academic Practice and Business Management Committee and now serves on the AAOS Practice Management Committee. He can be reached at James.Hamilton@tmcmed.org

Table 1

2004 Academic Orthopaedic Subspecialty RVU Production

(Surgeons with greater than 0.60 clinical FTE)

Subspecialty        N   Mean RVU   Median RVU
General            37      5,906        5,375
Oncology           25      6,046        6,334
Pediatrics         60      7,605        7,064
Foot/Ankle         42      7,850        7,099
Trauma             73      7,982        7,647
Sports Medicine    81      8,320        7,359
Hand               66      8,568        8,785
Joint              69      8,645        8,390
Shoulder/Elbow     23      8,709        8,598
Spine              77      9,683        9,348
TOTAL             553          -            -

