How Do You Know if Your Research Is Too Weak or Unreliable
CMAJ. 2009 Apr 28; 180(9): 942–945.
Managing evidence-based knowledge: the need for reliable, relevant and readable resources
Today, few would argue against the need to base clinical decisions on the best available evidence. In practice, however, clinicians face serious challenges when they seek such evidence.
Research-based evidence is generated at an exponential rate, yet it is not readily available to clinicians. When it is available, it is applied infrequently. A systematic review1 of studies examining the information-seeking behaviour of physicians found that the information resources most often consulted by physicians are textbooks, followed by advice from colleagues. The textbooks we consult are frequently out of date,2 and the advice we receive from colleagues is often inaccurate.3 Also, nurses and other health care professionals refer only infrequently to evidence from systematic reviews in clinical decision-making.4,5
The sheer volume of research-based evidence is one of the main barriers to better use of knowledge. Nearly 10 years ago, if general internists wanted to keep abreast of the primary clinical literature, they would have needed to read 17 articles daily.6 Today, with more than 1000 articles indexed daily by MEDLINE, that figure is likely double. The problem is compounded by the inability of clinicians to afford more than a few seconds at a time in their practices for finding and assimilating evidence.7 These challenges highlight the need for better infrastructure in the management of evidence-based knowledge.
Systematic reviews and primary studies
Some experts suggest that clinicians should seek systematic reviews first when trying to find answers to clinical questions.8 Research that is synthesized in this way provides a base of evidence for clinical practice guidelines. But there are many barriers to the direct use by clinicians of systematic reviews and primary studies. Clinical practitioners lack ready access to current research-based evidence,9,10 lack the time needed to search for it and lack the skills needed to identify it, appraise it and apply it in clinical decision-making.11,12 Until recently, training in the appraisal of evidence has not been a component of most educational curricula.11,12 In one study of the use of evidence, clinicians took more than 2 minutes to identify a Cochrane review and its clinical bottom line. This resource was therefore frequently abandoned in "real-time" clinical searches.7 In another study, Sekimoto and colleagues13 found that physicians in their survey believed a lack of evidence for the effectiveness of a treatment was equivalent to the treatment being ineffective.
Often, the content of systematic reviews and primary studies is not sufficient to meet the needs of clinicians. Although criteria have been developed to improve the reporting of systematic reviews,14 their focus has been on the validity of evidence rather than on its applicability. Glenton and colleagues15 described several factors hindering the effective use of systematic reviews for clinical decision-making. They found that reviews often lacked details about interventions and did not provide adequate data on the risks of adverse events, the availability of interventions and the context in which the interventions may or may not work. Glasziou and colleagues16 observed that, of 80 studies (55 single randomized trials and 25 systematic reviews) of therapies published over 1 year in Evidence-Based Medicine (a journal of secondary publication), elements of the intervention were missing in 41. Of the 25 systematic reviews, only 3 included a description of the intervention that was sufficient for clinical decision-making and implementation.
Potential solutions
Better knowledge tools and products
Those who publish and edit research-based evidence should focus on the "3 Rs" of evidence-based communication: reliability, relevance and readability. Evidence is reliable if it can be shown to be highly valid. The methods used to generate it must be explicit and rigorous, or at least the best available. To be clinically relevant, material should be distilled and indexed from the medical literature so that it consists of content that is specific to the distinct needs of well-defined groups of clinicians (e.g., primary care physicians, hospital practitioners or cardiologists). The tighter the fit between information and the needs of users, the better. To be readable, evidence must be presented by authors and editors in a format that is user-friendly and that goes into sufficient detail to allow implementation at the clinic or bedside.
When faced with the challenges inherent in balancing the 3 Rs, reliability should trump relevance, and both should trump readability.
More efficient search strategies
One method for finding useful evidence is the "5S approach"17 (Figure 1). This framework provides a model for the organization of evidence-based information services.

The "5S" approach to finding useful evidence. This framework provides a model for the organization of evidence-based information services. Ideally, resources become more reliable, relevant and readable as one moves up the pyramid. To optimize search efficiency, it is best to start at the top of the pyramid and work downward when trying to answer a clinical question.
Ideally, resources become more reliable, relevant and readable as we move up the 5S pyramid. At the bottom of the pyramid are all of the primary studies, such as those indexed in MEDLINE. At the next level are syntheses, which are systematic reviews of the evidence relevant to a particular clinical question. This level is followed by synopses, which provide brief critical appraisals of original articles and reviews. Examples of synopses appear in evidence-based journals such as ACP Journal Club (www.acpjc.org). Summaries provide comprehensive overviews of evidence related to a clinical problem (e.g., gout or asthma) by aggregating evidence from the lower levels of relevant synopses, syntheses and studies.
Given the challenges of doing a good MEDLINE search, it is best to start at the top of the pyramid and work down when trying to answer a clinical question. At the top of the pyramid are systems such as electronic health records. At this level, clinical data are linked electronically with relevant evidence to support evidence-based decision-making. Computerized decision-support systems such as these are still rare, so usually we start at the second level from the top of the pyramid when searching for evidence. Examples at the second level include online summary publications, such as Dynamed (www.ebscohost.com/dynamed) and Clinical Evidence (http://clinicalevidence.bmj.com/ceweb/index.jsp), which are evidence-based, frequently updated and available for a widening range of clinical topics. Online services such as EvidenceUpdates (http://plus.mcmaster.ca/evidenceupdates), which include studies and syntheses rated for quality and relevance with links to synopses and summaries, have recently become available with open access.
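The top-down search over the 5S pyramid can be sketched as a simple fallback loop. This is an illustrative sketch only: the level names follow Haynes's 5S framework, but the lookup functions and their return values are hypothetical placeholders, not real services or APIs.

```python
# Sketch of the 5S top-down search strategy: query the most distilled
# level first and fall back to lower levels only when no answer is
# found. Level names are from the 5S pyramid (top to bottom).
FIVE_S_LEVELS = ["systems", "summaries", "synopses", "syntheses", "studies"]

def answer_question(question, sources):
    """Query each 5S level in turn; return (level, answer) for the
    first level that yields an answer, or (None, None).

    `sources` maps a level name to a callable that returns an answer
    string for the question, or None if that level has nothing.
    """
    for level in FIVE_S_LEVELS:
        lookup = sources.get(level)
        if lookup is None:
            continue  # e.g., no computerized decision-support system available
        answer = lookup(question)
        if answer is not None:
            return level, answer
    return None, None

# Usage: a clinician without a decision-support system effectively
# starts at the summaries level and stops as soon as an answer appears.
sources = {
    "summaries": lambda q: "Dynamed entry on gout" if "gout" in q else None,
    "studies": lambda q: "MEDLINE search results",
}
level, answer = answer_question("management of acute gout", sources)
```

The design point the sketch captures is that a MEDLINE search ("studies") is the search of last resort, reached only when every more distilled level comes up empty.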
Evidence-based information resources are not created equal. Users at any of the levels just described must ensure that evidence is reliable by being aware of the methods used to generate, synthesize and summarize it. They should know that just because a resource has references does not mean that it is evidence-based. And just because a resource uses "evidence-based" in its title does not mean that it is so. One publisher stated that sales can be enhanced by placing the term "evidence-based" in the title of a book (Mary Banks, Senior Publisher, BMJ Books, London, UK: personal communication, 2009). Rating scales that we find useful for evidence summaries and research articles are provided in Box 1 and Table 1.
Table 1
Scale for rating individual studies

| Relevance | Newsworthiness |
|---|---|
| 7 Directly and highly relevant | 7 Useful information; most practitioners in my specialty definitely don't know this |
| 6 Definitely relevant | 6 Useful information; most practitioners in my specialty probably don't know this |
| 5 Probably relevant | 5 Useful information; most practitioners in my specialty possibly don't know this |
| 4 Possibly relevant; likely of indirect or peripheral relevance at best | 4 Useful information; most practitioners in my specialty possibly already know this |
| 3 Possibly not relevant | 3 Useful information; most practitioners in my specialty probably already know this |
| 2 Probably not relevant: content only remotely related | 2 It probably doesn't matter whether they know this or not |
| 1 Definitely not relevant: completely unrelated content area | 1 Not of direct clinical interest |
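The paired 7-point scales in Table 1 can be encoded as a small data structure for screening rated studies, in the spirit of services such as EvidenceUpdates that rate studies for quality and relevance. This is a hypothetical sketch: the `RatedStudy` type, the example titles and the threshold of 5 are assumptions for illustration, not taken from the article.

```python
# Illustrative sketch: encode the Table 1 relevance and newsworthiness
# scales (each 1..7) and screen studies for an alerting service.
from dataclasses import dataclass

@dataclass
class RatedStudy:
    title: str
    relevance: int       # 1 (definitely not relevant) .. 7 (directly and highly relevant)
    newsworthiness: int  # 1 (not of direct clinical interest) .. 7 (definitely new to most)

def worth_alerting(study, threshold=5):
    """Flag a study only when both ratings meet the threshold.
    The default threshold of 5 ("probably relevant" / "possibly don't
    know this") is an assumed cut-off, not specified in the article."""
    return study.relevance >= threshold and study.newsworthiness >= threshold

alerts = [s for s in [
    RatedStudy("ACE inhibitor titration trial", relevance=7, newsworthiness=6),
    RatedStudy("Peripheral methods paper", relevance=3, newsworthiness=2),
] if worth_alerting(s)]
```

Requiring both ratings to clear the bar reflects the article's point that relevance and newsworthiness are judged separately: a highly relevant study that everyone already knows about is not worth an alert.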
Promoting specialized search methods and making high-quality resources for evidence-based information available may lead to more correct answers being found by clinicians. In a small study of information retrieval by primary care physicians who were observed using their usual sources for clinical answers (most commonly Google and UpToDate), McKibbon and Fridsma18 found just a 1.9% increase in correct answers following searching. By contrast, others who have supplied information resources to clinicians have found that searching increased the rate of correct answers from 29% to 50%.19 Schaafsma and colleagues20 found that when clinicians asked peers for answers to clinical questions, the answers they received were correct only 47% of the time; if the colleague provided supportive evidence, the rate of correct answers increased to 83%.
Question-answering services by librarians may also enhance the search process. When tested in primary care settings, such a service was found to save time for clinicians, although its impact on decision-making and clinical care was not clear.21,22
What can journals do?
Journals must provide enough detail to allow clinicians to implement the intervention in practice. Glasziou and colleagues16 found that most study authors, when contacted for additional information, were willing to provide it. In some cases, this led to the provision of booklets or video clips that could be made available on a journal's website. This level of information is helpful regardless of the complexity of the intervention. For example, the need to titrate the dose of angiotensin-converting-enzyme inhibitors and confusion about monitoring the use of these drugs are considered barriers to their use by primary care physicians, and yet such information is frequently lacking in primary studies and systematic reviews.23
Finally, journal editors and researchers should work together to format research in ways that make it more readable for clinicians. There is some evidence that the use of more informative, structured abstracts has a positive impact on the ability of clinicians to apply evidence24 and that the way in which trial results are presented has an impact on the management decisions of clinicians.25 By contrast, there are no data showing that the way information is presented in a systematic review has a positive impact on clinicians' understanding of the evidence or on their ability to apply it to individual patients.
Conclusion
Evidence, whether strong or weak, is never sufficient to make clinical decisions. It must be balanced with the values and preferences of patients for optimal shared decision-making. To support evidence-based decision-making by clinicians, we must call for information resources that are reliable, relevant and readable. Hopefully, those who publish or fund research will find new and better ways to meet this demand.
Footnotes
This article has been peer reviewed.
Sharon Straus is the Section Editor of Reviews at CMAJ and was not involved in the editorial decision-making process for this article.
Competing interests: Sharon Straus is an associate editor for ACP Journal Club and Evidence-Based Medicine and is on the advisory board of BMJ Group. Brian Haynes is editor of ACP Journal Club and EvidenceUpdates, coeditor of Evidence-Based Medicine and contributes research-based evidence to Clinical Evidence.
Contributors: Both of the authors contributed to the development of the concepts in the manuscript, and both drafted, revised and approved the final version submitted for publication.
REFERENCES
1. Haynes RB. Where's the meat in clinical journals? [editorial]. ACP J Club. 1993;119:A22–3.
2. McKibbon A, Eady A, Marks S. PDQ evidence-based principles and practice. New York (NY): BC Decker; 2000.
3. Bero L, Rennie D; The Cochrane Collaboration. Preparing, maintaining and disseminating systematic reviews of the effects of health care. JAMA. 1995;274:1935–8.
4. Kiesler DJ, Auerbach SM. Optimal matches of patient preferences for information, decision-making and interpersonal behavior: evidence, models and interventions. Patient Educ Couns. 2006;61:319–41.
5. Dawes M, Sampson U. Knowledge management in clinical practice: a systematic review of information seeking behaviour in physicians. Int J Med Inform. 2003;71:9–15.
6. Antman EM, Lau J, Kupelnick B, et al. A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. JAMA. 1992;268:240–8.
7. Oxman AD, Guyatt GH. The science of reviewing research. Ann N Y Acad Sci. 1993;703:125–34.
8. Olade RA. Evidence-based practice and research utilization activities among rural nurses. J Nurs Scholarsh. 2004;36:220–5.
9. Kajermo KN, Nordstrom G, Krusebrant A, et al. Nurses' experiences of research utilization within the framework of an educational programme. J Clin Nurs. 2001;10:671–81.
10. Milner G, Estabrooks CA, Myrick F. Research utilization and clinical nurse educators: a systematic review. J Eval Clin Pract. 2006;12:639–55.
11. Lavis JN. Research, public policymaking, and knowledge-translation processes: Canadian efforts to build bridges. J Contin Educ Health Prof. 2006;26:37–45.
12. Straus SE, Sackett DL. Bringing evidence to the point of care. JAMA. 1999;281:1171–2.
13. Sekimoto M, Imanaka Y, Kitano N, et al. Why are physicians not persuaded by scientific evidence? BMC Health Serv Res. 2006;6:92.
14. Moher D, Cook DJ, Eastwood S, et al. Improving the quality of reports of meta-analyses of randomized controlled trials: the QUOROM statement. Lancet. 1999;354:1896–900.
15. Glenton C, Underland V, Kho M, et al. Summaries of findings, descriptions of interventions, and information about adverse effects would make reviews more informative. J Clin Epidemiol. 2006;59:770–8.
16. Glasziou P, Meats E, Heneghan C, et al. What is missing from descriptions of treatment in trials and reviews? BMJ. 2008;336:1472–4.
17. Haynes RB. Of studies, syntheses, synopses, summaries and systems: the "5S" evolution of information services for evidence-based health care decisions. ACP J Club. 2006;145:A8–9.
18. McKibbon KA, Fridsma DB. Effectiveness of clinician-selected electronic information resources for answering primary care physicians' information needs. J Am Med Inform Assoc. 2006;13:653–9.
19. Westbrook JI, Coiera EW, Gosling AS. Do online information retrieval systems help experienced clinicians answer clinical questions? J Am Med Inform Assoc. 2005;12:315–21.
20. Schaafsma F, Verbeek J, Hulshof C, et al. Caution required when relying on a colleague's advice; a comparison between professional advice and evidence from the literature. BMC Health Serv Res. 2005;5:59.
21. McGowan J, Hogg W, Campbell C, et al. Just-in-time information improved decision-making in primary care: a randomized controlled trial. PLoS One. 2008;3:e3785.
22. Brettle A, Hulme C, Ormandy P. The costs and effectiveness of information-skills training and mediated searching: quantitative results from the EMPIRIC project. Health Info Libr J. 2006;23:239–47.
23. Kasje WN, Denig P, de Graeff PA, et al. Perceived barriers for treatment of chronic heart failure in general practice: Are they affecting performance? BMC Fam Pract. 2005;6:19.
24. Hartley J. Clarifying the abstracts of systematic literature reviews. Bull Med Libr Assoc. 2000;88:332–7.
25. McGettigan P, Sly K, O'Connell D, et al. The effects of information framing on the practices of physicians. J Gen Intern Med. 1999;14:633–42.
Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2670907/