February 2003 Bulletin

Levels of evidence

A step forward on the road to better practice?

By James G. Wright, MD, MPH, FRCSC

Surgeons use evidence to make decisions with patients about clinical care. For most clinical questions, however, the amount of available information is overwhelming and the conclusions often contradictory.

One approach to making clinical decisions is evidence-based practice (EBP). EBP means surgeons need to (i) define the clinical question, (ii) assemble the evidence, (iii) appraise the evidence, and (iv) apply the evidence while considering their experience and patients’ individual circumstances.1

Although evidence-based practice has prompted considerable controversy and criticism,2 most surgeons would agree that to make the best decisions, they need the best evidence easily available in a comprehensive and timely fashion.

Levels of evidence, based on the rigor of study design, are a way to sort through and rate the quality of the surgical literature. Studies have different purposes and methods. Although the most common study purpose is the evaluation of therapy (i.e., does the treatment work?), other purposes include studies of prognosis and diagnosis.

Within each type of study are different levels of evidence. For example, for therapeutic studies, Level I evidence is from high-quality randomized clinical trials (e.g., a randomized trial comparing revision rates in patients treated with cemented and uncemented total hip arthroplasty).

Level II evidence is from cohort studies (e.g., revision rates in patients treated with uncemented THA compared with a control group of patients treated with cemented THA at the same time and institution).

Level III evidence is from case-control studies (e.g., the rates of cemented and uncemented THA in patients with a particular outcome, called "cases" (revised THA), are compared with the rates in patients who did not have the outcome, called "controls" (non-revised THA)).

Level IV evidence is from an uncontrolled case series (e.g., a case series of patients treated with uncemented THA). Level V evidence is from expert opinion. The table is relatively simple, with four types of studies and five levels. The actual criteria for assignment are a little more complicated and, for interested readers, are contained in the cells of the table.

Levels of evidence have several potential uses

First, levels of evidence can be used by orthopaedic journals. In January 2003, the Journal of Bone and Joint Surgery (JBJS) began including a level of evidence rating for all published clinical articles. Authors of submitted articles will be required to clearly specify the primary research question and to provide a level of evidence rating for their approach to it.

Second, levels of evidence can be used to grade and choose abstracts for scientific meetings. The Pediatric Orthopaedic Society of North America has used levels of evidence to select abstracts for its annual meeting for the past three years.

Third, levels of evidence may be used to develop practice guidelines. Virtually all organizations involved in developing practice guidelines use levels of evidence to appraise the literature and grade the strength of their recommendations.

Fourth, levels of evidence can also be used by practicing surgeons to sort through multiple types of evidence. For example, having searched the literature, surgeons may choose to restrict their attention to Level I or II evidence.

Using levels of evidence may have several effects

Surgeons will become more familiar with different research designs. Research quality may improve as surgeons undertake more rigorous studies. Orthopaedic surgeons will also be able to monitor publication trends in the quality of orthopaedic clinical research. Most important, level of evidence ratings will place clinical studies into context for practicing surgeons. Higher levels of evidence should be more convincing and helpful to surgeons in clinical decision-making.

In using levels of evidence, however, surgeons need to consider several caveats. Levels of evidence provide only a rough guide to study quality; a more in-depth assessment requires a thorough critical appraisal of the specific study methods and design. Additionally, randomized clinical trials are not possible for all clinical questions.3 Thus, Level I evidence may not be available for all clinical situations, and Level II or III evidence can still have great value to the practicing orthopaedic surgeon. An answer to a clinical question must be based on a composite assessment of all the available evidence of all types; a single study rarely provides a definitive answer.

Finally, although levels of evidence may be an incremental step in improving surgical practice, further work is needed to provide comprehensive and easily available evidence to surgeons in a timely fashion. This final step may require professional societies to take responsibility for summarizing the evidence in practice guidelines, along with innovative technological solutions such as personal digital assistants.4

References:

  1. Sackett DL, Richardson WS, Rosenberg W, Haynes RB, Straus S. Evidence-Based Medicine: How to Practice and Teach EBM. Churchill Livingstone, 2000:280.
  2. Straus SE, McAlister FA. Evidence-based medicine: a commentary on common criticisms. Can Med Assoc J 2000;163:837-841.
  3. McLeod RS, Wright JG, Solomon MJ, Hu X, Walters BC, Lossing A. Randomized controlled trials in surgery: Issues and problems. Surgery 1996;119:483-486.
  4. Greiver M. Practice tips. Putting the "palm" into practice. Can Fam Physician 2002;48:43-44.

James G. Wright, MD, MPH, FRCSC, is a professor of surgery, University of Toronto, and member of the AAOS Evidence-Based Practice Committee. He can be reached at (416) 813-6433 or via e-mail at jim.wright@sickkids.ca.

