Evidence-based medicine – what is it?
The past two decades have seen a huge focus on “evidence-based medicine”. This concept has prompted a rapid spread of articles, how-to books, textbooks, etc. for providers, practitioners and researchers from health care to psychology to social work. Yet in those 20 years, has it become both a medical milestone and a meddling method? Is it what we wanted it to be?
Donald Berwick, a Harvard-based quality improvement “expert” who worked with “evidence-based” protocols, wrote in 2005: “we had overshot the mark” and turned evidence-based medicine into an “intellectual hegemony that can cost us dearly if we do not take stock and modify it”.
Again, in 2008 Berwick stated that “evidence-based medicine sometimes must take a back seat” to patient-centered care. But what are the colleges and associations going to do with this?
Early critics of “evidence-based medicine” claimed it would become a “one protocol fits all” approach and eliminate individuality. Others claimed that a clinician’s instinct and experience were indispensable to good care. Unfortunately, many colleges and associations still do not take this into consideration.
Subsequently, we have an evolving method called “pragmatic science”. With this method, more and more provider groups are monitoring their participants to track the “effects of treatment” and using the results to improve services. Unfortunately, as Berwick (2005) claimed, these QI (quality improvement) designs lack scientific rigor and therefore remain “largely trapped on the far side of the publication wall”.
In 2000, the Evidence Based Medicine Working Group recognized the importance of the consumer’s perspective when it published the following: “Whatever the evidence, value and preference judgments are implicit in every clinical decision” (Montori and Guyatt, 2008, p. 1815).
Since that time, there has been a great deal of controversy over whether the patient’s wants should override “professional judgment”, or whether those wants are more important than the “evidence-based” model currently available.
Let’s ask the following: in psychology, what if the patient connects more with a Jungian style of therapy than with a cognitive-behavioural style of therapy?
Or in alternative medicine, what if the patient connects more with homeopathic medicine than with Traditional Chinese Medicine?
Or in allopathic oncology, what if a client connects more with New German Medicine than with radiation and chemotherapy?
Now another huge component is who decides what gets into a given journal. It is well known, recognized and argued that a tremendous amount of research never gets published. Why? Too many editors of medical journals receive reimbursements from pharmaceutical companies. The same goes for psychology journals and the DSM (Diagnostic and Statistical Manual) used in psychiatry and psychology. Far too much political drive, connection, and reimbursement is involved in what gets published and utilized.
We know that the mind can be very strong; in fact, the placebo effect can account for up to 38% of outcomes. So why would we not utilize what makes sense to the client, and harness their tremendous “placebo effect” for their own benefit?
Further, in quantum research we find that both intent and observation itself can impact the outcomes; some say by up to 30-35%.
Well, if we were to do a simple (and admittedly naive) summary of the impact on research, i.e., 30 (intent) + 30 (observation) + 38 (placebo) = 98%, then only 2% would be left for empirical research.
Now, the challenge here is that we have no idea how much intent, observation, and placebo interact and/or overlap with each other. Do they compound? The likelihood is that they are not isolated, independent variables. But how do they interact?
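To see why the simple sum above overstates things, here is a small illustrative sketch. The figures (30%, 30%, 38%) are the hypothetical numbers quoted in this article, not established measurements, and the assumption that the three effects act independently is just one possible model of how they might interact:

```python
def combined_independent(*effects):
    """Probability that at least one effect influences the outcome,
    assuming the effects are statistically independent (an assumption)."""
    p_none = 1.0
    for e in effects:
        p_none *= (1.0 - e)  # chance this effect does NOT act
    return 1.0 - p_none

# Naive addition, as in the 30 + 30 + 38 = 98% summary above
naive_sum = 0.30 + 0.30 + 0.38

# Combination under an independence assumption (no overlap accounted for)
independent = combined_independent(0.30, 0.30, 0.38)

print(f"naive additive total:  {naive_sum:.0%}")      # 98%
print(f"assuming independence: {independent:.2%}")    # 69.62%
```

Under independence the combined influence is about 70%, not 98%, which leaves far more room for other factors; with overlap between the effects it would shrink further. The point is not these particular numbers but that the effects cannot simply be added.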
But regardless of how they may or may not interact, when we combine this with the following:
– 34% of tested conventional medical procedures have evidence to demonstrate they are either “beneficial” or “likely to be beneficial”.
– all other conventional medical treatments tested are classified as:
“unknown effectiveness” (51%),
“likely to be ineffective or harmful” (3%),
“unlikely to be beneficial” (5%), or
“trade-offs between benefits and harms” (7%).
Link to the data: http://clinicalevidence.bmj.co…
These numbers don’t even include most surgical procedures, since 90% of all surgical procedures have never been tested in randomized trials.
It leaves one wondering where the empirical evidence is.
To what extent is “evidence-based” research guided by what grants are available? Who determines where the grant money goes? By the variables we currently know how to define? By the outcomes we know how to measure? How does that reflect a scientific evolution of understanding if we are simply reinforcing what we knew yesterday? How often do we hear the same ol’ theories promoted, just using different terms to say the same ol’ thing?
When we are dealing with the more psychologically abstract sciences, whether in counseling or in their impact on physiological healing methods, the issue becomes even more confounded.
The older the practitioner, the more experience he/she brings to the table. Should he/she ignore that experience? What are his/her intentions versus those of the client versus those of the researcher? Is he/she drawing a particular kind of client? What if he/she finds that the “evidence-based model” has little effect for his/her particular clients?
Never mind the fact that each practitioner is unique; each client is unique; and the interaction between the two will be unique.
Do you want a practitioner who only works by the rules in a given book? Or do you want a practitioner who uses his/her intellectual and intuitive mind?
Or should becoming a practitioner be like learning to play the piano? He/she takes all the basic lessons, learns the scales and chord progressions, knows how to play the classics, etc. Then the practitioner learns how to effectively take what he/she knows a step further: he/she learns to compose.
The challenge with this way of doing things is the unfortunate confound of the practitioner who may do more harm than good while writing his/her own composition. How do we find a balance that protects the client and allows for the evolution of practice?
If research and implementation were more intertwined, both practitioners and researchers would benefit. Many programs are now implementing the RE-AIM framework (Reach, Effectiveness, Adoption, Implementation, and Maintenance) to encourage practitioners to get more involved (Glasgow, 2006; NCOA, 2008). Use of this framework empowers the end-users and holds researchers accountable to practitioners and consumers.
How does this affect you as the consumer? Do your homework: ascertain what model your practitioner is using with you and whether you agree with it. Arguments can be made for and against each component. It is your health and your dollar. Take responsibility. Do your research.
For more information, contact: Dr Holly at email@example.com
Copyright 2011 © Choices Unlimited for Health & Wellness
Disclaimer: This site is provided for general information only, and is not a substitute for the medical advice of your own doctor or other health care professional. This site is not responsible or liable for any diagnosis made by a user based on the content of this website. This site is not liable for the contents of any external internet sites listed, nor does it endorse any commercial product or service mentioned or advised on any of such sites. Always consult your own health care practitioner.