Hierarchy of evidence

Evidence hierarchies reflect the relative authority of different types of biomedical research, creating levels of evidence, or at least levels of the methodologies that produce evidence. There is broad agreement on the relative strength of the principal types of epidemiological studies, but there is no single, universally accepted hierarchy of evidence. More than 80 different hierarchies have been proposed for assessing medical evidence.[1] Typically, randomized controlled trials (RCTs) rank above observational studies, while expert opinion and anecdotal experience rank at the bottom. Some evidence hierarchies place systematic reviews and meta-analyses above RCTs.

Evidence hierarchies are integral to evidence-based medicine (EBM).

Definition

In 2014, Stegenga defined a hierarchy of evidence as a "rank-ordering of kinds of methods according to the potential for that method to suffer from systematic bias". At the top of the hierarchy is the method with the greatest freedom from systematic bias, or the best internal validity, relative to the hypothesized efficacy of the medical intervention being tested.[2]:313 In 1997, Greenhalgh suggested it was "the relative weight carried by the different types of primary study when making decisions about clinical interventions".[3]

Examples

In 1995, Guyatt and Sackett published the first such hierarchy.[4]

Greenhalgh put the different types of primary study in the following order:[3]

  1. Systematic reviews and meta-analyses of "RCTs with definitive results".
  2. RCTs with definitive results (confidence intervals that do not overlap the threshold for a clinically significant effect)
  3. RCTs with non-definitive results (a point estimate that suggests a clinically significant effect but with confidence intervals overlapping the threshold for this effect); the distinction is illustrated in the sketch after this list
  4. Cohort studies
  5. Case-control studies
  6. Cross-sectional surveys
  7. Case reports
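
As an illustration of the distinction between "definitive" and "non-definitive" results in items 2 and 3, here is a minimal sketch in Python (not from the article; the threshold and trial numbers are hypothetical). It checks whether the 95% confidence interval for a treatment effect crosses an assumed threshold of clinical significance, assuming larger effects are better.

    # Sketch: Greenhalgh's definitive / non-definitive distinction for an RCT.
    # The 95% confidence interval for the effect estimate is compared against a
    # hypothetical threshold of clinical significance (larger effect = better).
    def classify_rct(effect, std_error, threshold):
        z = 1.96  # critical value for a two-sided 95% confidence interval
        lower = effect - z * std_error  # lower bound of the confidence interval
        if lower > threshold:
            return "definitive: the whole CI lies above the threshold"
        if effect > threshold:
            return "non-definitive: point estimate above the threshold, but the CI overlaps it"
        return "no clinically significant effect suggested"

    # Illustrative numbers only: suppose a 5-point improvement counts as clinically significant.
    print(classify_rct(effect=8.0, std_error=1.0, threshold=5.0))  # -> definitive
    print(classify_rct(effect=6.0, std_error=2.0, threshold=5.0))  # -> non-definitive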

Criticism

In the 21st century, more than a decade after the first hierarchies were established, their use attracted increasing criticism. In 2011, a systematic review of the critical literature found three kinds of criticism: criticism of procedural aspects of EBM (especially from Cartwright, Worrall and Howick), of the greater-than-expected fallibility of EBM (Ioannidis and others), and of EBM as an incomplete philosophy of science (Ashcroft and others).[5] Many critics have published in philosophy journals that are largely ignored by the clinician proponents of EBM. Rawlins[6] and Bluhm note that EBM limits the ability of research results to inform the care of individual patients, and that understanding the causes of diseases requires both population-level and laboratory research. The EBM hierarchy of evidence does not take into account research on the safety and efficacy of medical interventions. Bluhm argues that RCTs should be designed "to elucidate within-group variability, which can only be done if the hierarchy of evidence is replaced by a network that takes into account the relationship between epidemiological and laboratory research".[7]

In 2005, Ross Upshur noted that EBM claims to be a normative guide to being a better physician but is not a philosophical doctrine. He pointed out that EBM supporters displayed a "near-evangelical fervor", convinced of its superiority, while ignoring critics who seek to expand the borders of EBM from a philosophical point of view.[8]

In 2009, Borgerson wrote that the justifications for the hierarchy's levels are not absolute and do not epistemically justify them, and that "medical researchers should pay closer attention to social mechanisms for managing pervasive biases".[9] La Caze noted that basic science resides on the lower tiers of EBM even though it "plays a role in specifying experiments, but also analysing and interpreting the data".[10]

In 2004, Concato argued that the hierarchy gave RCTs too much authority and that not all research questions can be answered through RCTs, whether because of practical or ethical issues. Even when evidence is available from high-quality RCTs, evidence from other study types may still be relevant.[11] Stegenga opined that such evidence-assessment schemes are unreasonably constraining and less informative than other schemes now available.[2]

References

  1. Siegfried T (2017-11-13). "Philosophical critique exposes flaws in medical evidence hierarchies". Science News. Retrieved 2018-05-16.
  2. Stegenga J (October 2014). "Down with the hierarchies". Topoi. 33 (2): 313–22. doi:10.1007/s11245-013-9189-4.
  3. Greenhalgh T (July 1997). "How to read a paper. Getting your bearings (deciding what the paper is about)". BMJ. 315 (7102): 243–6. doi:10.1136/bmj.315.7102.243. PMC 2127173. PMID 9253275.
  4. Guyatt GH, Sackett DL, Sinclair JC, Hayward R, Cook DJ, Cook RJ (December 1995). "Users' guides to the medical literature. IX. A method for grading health care recommendations. Evidence-Based Medicine Working Group". JAMA. 274 (22): 1800–4. doi:10.1001/jama.1995.03530220066035. PMID 7500513.
  5. Solomon M (October 2011). "Just a paradigm: evidence-based medicine in epistemological context". European Journal for Philosophy of Science. Springer. 1 (3): 451–466. doi:10.1007/s13194-011-0034-6.
  6. Rawlins M (December 2008). "De Testimonio: on the evidence for decisions about the use of therapeutic interventions". Clinical Medicine. Royal College of Physicians. 8 (6): 579–88. doi:10.7861/clinmedicine.8-6-579. PMID 19149278.
  7. Bluhm R (Autumn 2005). "From hierarchy to network: a richer view of evidence for evidence-based medicine". Perspectives in Biology and Medicine. Johns Hopkins University Press. 48 (4): 535–47. doi:10.1353/pbm.2005.0082. PMID 16227665.
  8. Upshur RE (Autumn 2005). "Looking for rules in a world of exceptions: reflections on evidence-based practice". Perspectives in Biology and Medicine. Johns Hopkins University Press. 48 (4): 477–89. doi:10.1353/pbm.2005.0098. PMID 16227661.
  9. Borgerson K (Spring 2009). "Valuing evidence: bias and the evidence hierarchy of evidence-based medicine". Perspectives in Biology and Medicine. Johns Hopkins University Press. 52 (2): 218–33. doi:10.1353/pbm.0.0086. PMID 19395821.
  10. La Caze A (January 2011). "The role of basic science in evidence-based medicine". Biology & Philosophy. Springer. 26 (1): 81–98. doi:10.1007/s10539-010-9231-5.
  11. Concato J (July 2004). "Observational versus experimental studies: what's the evidence for a hierarchy?". NeuroRx. Springer. 1 (3): 341–7. doi:10.1602/neurorx.1.3.341. PMC 534936. PMID 15717036.

Further reading

  • Smith GC, Pell JP (December 2003). "Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials". BMJ. 327 (7429): 1459–61. doi:10.1136/bmj.327.7429.1459. PMC 300808. PMID 14684649. – Classic paper on the limits of evidence