In the previous post, I described the difference between efficacy and effectiveness, an increasingly important distinction in clinical research and healthcare. After stressing the importance of effectiveness research to health policy planning and patient decision-making, I summarized seven criteria for identifying effectiveness studies. Finally, I asked whether these criteria could be repurposed beyond medical interventions to inform how we measure the effectiveness of software systems used to conduct clinical trials.
Is it possible to assess clinical trial software through the lens of effectiveness, as opposed to just efficacy?
I believe that it’s not only possible, but crucial. Why? We all want to reduce the time and cost of delivering safe, effective drugs to those who need them. But if we don’t scrutinize our tools for doing so, we risk letting the status quo impede our progress. When lives are on the line, we can’t afford to let any inefficiency stand.
In this post, I adapt the criteria for effectiveness studies in clinical research into a methodology for evaluating the effectiveness of clinical research software. I limit the scope of adaptation to electronic data capture (EDC) systems, but I suspect that a similar methodology could be developed for clinical trial management systems (CTMS), interactive voice response (IVR), electronic trial master files (eTMF), and other complementary technologies. If I open a field of inquiry, or even just broaden one that exists, I’ll consider it time well spent.
For pure pathogen-killing power, it’s hard to beat a surgeon’s hand scrub. Ask any clinician, and she’ll tell you how thoroughly chlorhexidine disinfects skin. If she’s a microbiologist, she’ll even explain to you the biocide’s mechanism of action, provided you’re still listening. But how would the practice fare, say, as a method of cold and flu prevention on a college campus? Your skepticism here would seem justified. After all, it’s hard to sterilize a cough in the dining hall.
Efficacy and effectiveness. It’s unfortunate their phonetics are so close, because while the terms do refer to relative locations along a continuum, they’re the furthest thing from synonyms, as the ever-accumulating literature on the topic will attest.
In this post and the one that follows, I’d like to offer some clarity on efficacy vs. effectiveness and illustrate the value that each type of analysis offers. If nothing else, what emerges should provide an introduction to the concepts for those new to clinical research. But I have a more speculative aim, too. I’d like to propose standards for assessing trial technology through each of these lenses. Why? Because while we’ve been asking whether a particular technology does what it’s explicitly designed to do, as we should and must, we may have forgotten to ask a critical follow-up question: Does it improve the pace and reliability of our research?