Choosing an Evidence-based Practice with Impact
A crucial consideration when choosing an evidence-based practice with impact is often overlooked by practice guides and clearinghouses. This post tackles a common misconception about "evidence-based" designations and offers an important lens through which to view evidence-based practice selection.
A Common Misconception About "Evidence-based" Designations
Evidence-based practice resources and clearinghouses, like the California Evidence-Based Clearinghouse for Child Welfare, often use designations like "well-supported," "supported," and "promising" to denote differences in the rigor of the research evidence supporting particular practices. It is easy to read these designations as grades of a practice's impact, but that assumption would be a mistake.
While the "evidence-based" distinction should and does denote a practice that is more impactful than a non-evidence-based one, distinctions among evidence-based practices generally do not. Popularly cited "evidence-based" gradations generally do not factor in the size of a practice's impact, only the experimental rigor with which a significant impact of any size was confirmed. In other words, a top-tier designation like "well-supported" doesn't necessarily mean an evidence-based practice will have a larger impact for your clients than a practice that is merely "supported" or "promising" (though positive outcomes may be more likely).
This may seem like a semantic argument. The rigor of the research evidence behind a practice does provide a lot of information, and some degree of assurance that positive outcomes will be achieved when the practice is implemented. Using a "well-supported" practice rather than a "supported" one may also involve less risk to the implementing organization, for several reasons, not the least of which is political.
What Evidence-based Practice Designations Do and Do Not Tell Us
A practice achieving a "well-supported" designation may have the additional risk-minimizing benefits of:
- a wide range of replications confirming positive outcomes;
- an active research and practice community;
- and wider recognition at state and federal agency levels.
But, crucially, higher "evidence-based" designations do NOT necessarily indicate:
- a better fit to an organization or community's needs;
- a greater impact on client outcomes;
- or a greater return on investment for the community.
If our goal is to improve outcomes for clients and achieve the greatest impact in our communities—a goal we can all agree on—we need to weigh practices with more information about the size of the impact we can expect. In research terms, we call this effect size—and effect size is not often summarized by these easy-to-digest designations.
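If you haven't encountered the term before, the most common effect size measures express the difference between an intervention group and a comparison group in standardized units. One widely used measure is Cohen's d; the numbers below are hypothetical, chosen only to illustrate the calculation.

```latex
% Cohen's d: the standardized difference between two group means
d = \frac{\bar{X}_{\text{intervention}} - \bar{X}_{\text{comparison}}}{SD_{\text{pooled}}}

% Hypothetical example: intervention mean 75, comparison mean 70, pooled SD 10
d = \frac{75 - 70}{10} = 0.5
```

By Cohen's rough and widely cited conventions, a d near 0.2 is a small effect, 0.5 is medium, and 0.8 is large, though what counts as a meaningful effect varies by field and by outcome.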
Consider Effect Size When Choosing an Evidence-based Practice
This British Educational Research Association article about effect size does such an excellent job of summarizing the value of the metric that it makes my job here easy. "[Effect size] is particularly valuable for quantifying the effectiveness of a particular intervention, relative to some comparison," writes author Robert Coe. "It allows us to move beyond the simplistic, 'Does it work or not?' to the far more sophisticated, 'How well does it work in a range of contexts?'"
It's a no-brainer—of course we want to know how well an intervention works, not just if it works. Evaluating effect size may require a deeper dive into research evidence than a cursory glance at evidence-based designations on a practice clearinghouse—but it's a crucial consideration when choosing an evidence-based practice.
Our blog previously covered advice on how to choose an evidence-based practice, including how to narrow your search. Once you've winnowed the noisy field, you can spend your time efficiently by gathering effect size information only for the practices you already expect to be a good fit for your organization.
Some practices with substantial research foundations or active research communities will have effect size meta-analyses available that will help you weigh comparative outcomes. In addition, the Washington State Institute for Public Policy publishes cost-benefit analyses of evidence-based practices that incorporate their own calculations of effect size based on a practice's research evidence.
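To make the return-on-investment idea concrete, here is the kind of per-participant benefit-cost arithmetic those analyses report. The dollar figures below are invented purely for illustration and do not come from any actual WSIPP report.

```latex
% Hypothetical per-participant figures, for illustration only
\text{benefit-cost ratio} = \frac{\text{monetized benefits}}{\text{cost}}
                          = \frac{\$6{,}000}{\$2{,}000} = 3.0
```

A ratio above 1.0 means the expected benefits exceed the cost of delivering the practice; in WSIPP's published analyses, the benefit side is derived from the effect sizes found in the underlying research.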
Our own Navigation Guide & Practice Matching Resource summarizes effect size and return-on-investment information, where available, for each of the practices we match to your organizational needs, circumstances and culture. We think our interactive tool is an invaluable resource for cutting through the noisy evidence-based practice "marketplace," narrowing your search, and providing individualized insight into particular practices. We even put together a sample of our organizational insights and practice matching report, so you can see first-hand the insight we offer leaders after they complete our interactive modules.
While there are no shortcuts to socially significant outcomes, we hope that our resources will empower organizational leaders to approach evidence-based practice implementation as more informed advocates for their communities and their clients. Our mission is to close the science-to-practice gap, and we want to do that by helping you succeed. Get access to individualized insights using this link.