To counter bias, counterbalance

On a scale from 1 to 10, with 10 representing utmost importance, how important is a healthy diet to you?

Do you have your answer? If so, I’ve got another question.

Which appeals to you more, a cozy bed or a luxurious meal?

Yeah, me too.

These are hardly clinical questions, but as far as survey items go, they’re instructive examples. The wording for each is clear. The response options are distinct. The structures, a scale and a multiple choice, are familiar. But if we want valid answers to these questions, we’ve got some work to do.

When designing a survey, it’s easy to overlook the effects its format could have on the responses. But those effects are potent sources of bias. In the example above, the first question primes you to think about diet and health. In doing so, it stacks the deck against the “luxurious meal” response in the following question. But the trouble doesn’t end there. Although “bed” and “meal” make for a short list, one of them appears before the other. The primacy effect–the tendency of a respondent to choose the first in a list of possible responses, regardless of question content–puts “luxurious meal” at a further disadvantage.

The good news is that surveyors (and data managers) have tools to mitigate these biases. Modern EDC allows you to systematically vary both question and response option order, either by randomly selecting from a set of all possible permutations, or rotating through a block of permutations one participant at a time. The practice, called counterbalancing, guards against unwanted order effects.
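To make the practice concrete, here's a minimal Python sketch of both approaches. The question labels, option labels, and the participant_index parameter are illustrative stand-ins, not the configuration of any particular EDC system:

```python
import itertools
import random

# Illustrative labels only; not tied to any particular EDC system.
questions = ["diet_importance", "bed_or_meal"]
options = ["cozy bed", "luxurious meal"]

# Every unique combination of question order and response-option order.
variants = list(itertools.product(
    itertools.permutations(questions),
    itertools.permutations(options),
))  # 2! x 2! = 4 variants

def random_variant():
    """Counterbalance by drawing a variant at random for each respondent."""
    return random.choice(variants)

def rotated_variant(participant_index):
    """Counterbalance by cycling through the variants, one participant at a time."""
    return variants[participant_index % len(variants)]

print(rotated_variant(6))  # the variant shown to the 7th enrolled participant
```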

But it isn’t a cure-all. Consider the practice of rotating through all permutations of your response options. No matter how a set of response options is ordered, one of them has to appear first. The primacy effect, then, isn’t so much diminished as distributed among all the response options. To illustrate, suppose we pose the “bed or meal” question to 1,000 respondents, alternating the order of the two options, and every one of them answers. In the end, you may discover that 82% of the “bed or meal” respondents chose “bed,” while only 16% of the “meal or bed” respondents chose “bed.” Results like these ought to make you suspicious. Pool the two cohorts and you’d see a roughly even split, but if there’s no reason to believe the cohorts differ (apart from the phrasing of the question posed to them), it’s premature to conclude that the population is split almost evenly in its preferences. The majority of respondents selected whichever option they encountered first, so it’s much more likely that you’ve confirmed the power of the primacy effect.

The same caveat applies to question order. Imagine that our example survey always posed the “bed or meal” question before the “healthy diet” question. Regardless of how the respondent answers the first question, she’s now in a state of mind that could influence her next response. (“Ooh, I love luxurious meals. I guess a healthy diet isn’t that important to me,” or “I need better sleep more than I need a rich entree. I guess a healthy diet is important to me.”) To counterbalance, we might alternate the order in which these questions appear. Still, priming may occur in both orderings.

So how do we know if order effects have influenced our results? (Perhaps the better question is: how do we determine the degree to which order effects have influenced our results?) First, it’s important to know which variant of the survey each respondent answered, where variant refers to a unique order of questions and response options. Our example survey comes in (or should come in) four variants:

  1. Rate the importance of diet, then choose between meal or bed
  2. Rate the importance of diet, then choose between bed or meal
  3. Choose meal or bed, then rate the importance of diet
  4. Choose bed or meal, then rate the importance of diet

All respondents, then, fall into exactly one of these four “variant cohorts.” Let’s assume further that these cohorts differ only in the survey variant they answered; that our experimenters randomly selected the respondents from the same target population, and administered variant 1 to respondent 1, variant 2 to respondent 2, and so on in a cycle.

If, when comparing these cohorts, we find their aggregate responses diverging significantly from one another, we should suspect that ordering effects have distorted our results. All things being equal, the greater the divergence, the more significant the impact of order effects. Our experimenters were careful in recruiting similar respondents, after all, so the profile of responses from any subset should more or less match the profile of responses from any other subset. If that’s not happening, something other than question content is at play.
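If you want to put a number on that suspicion, one common (though by no means the only) approach is a chi-square test of independence between variant cohort and response. Here's a rough sketch in Python, assuming SciPy is available; the counts are the hypothetical ones from the example above, not real data:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts from the example above: 1,000 respondents, 500 per variant.
#                    chose "bed"  chose "meal"
bed_first_cohort = [410, 90]     # 82% of 500 chose "bed"
meal_first_cohort = [80, 420]    # 16% of 500 chose "bed"

chi2, p_value, dof, expected = chi2_contingency([bed_first_cohort, meal_first_cohort])
print(f"chi-square = {chi2:.1f}, p = {p_value:.3g}")
# A vanishingly small p-value means the cohorts' answers diverge far more than
# chance alone would allow -- evidence of an order effect, since the cohorts
# were recruited identically.
```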

Precisely quantifying the impact of order effects is the business of professional statisticians, a noble breed from which the present writer stands apart. But as data managers, we owe it to good science to understand the concepts at play and to stand vigilant against their influence. In the end, the truth may not be balanced. But our instruments for finding it should be.

Click the image below to experiment with a counterbalanced form

Page of web form showing a question and four possible responses

Spotlight on: combinatorics!

One plan for The Brady Bunch Hour (1976, on ABC) was to open each show with a different order of Bradys along this iconic staircase. Unfortunately, the show was canceled after 9 episodes, leaving viewers to imagine the remaining 362,871 lineups.

How many ways are there to order n distinct items? Let’s ask the Brady Bunch!

In the photo above, Cindy stands at the top of the staircase. But it might just as well have been Greg, or Marcia, or even Alice. (She is practically family.) In fact, the director might have chosen any one of the 9 Bradys (or honorary Bradys) to take the top spot. So there are at least 9 ways to arrange this loveable clan. But once the top spot is claimed, we have 8 choices remaining for the next spot. Multiply 9 possibilities for the top spot by 8 possibilities for the second, and we discover that there are at least 72 ways to arrange this brood. But, much like reunion specials and spin-offs, the madness doesn’t end there. We now have to fill the third spot from the 7 remaining Bradys. Multiply the 72 arrangements for spots 1 and 2 by the 7 possibilities for spot 3, and we’ve already hit 504 line-ups. Keep going, and you’ll discover that there are 362,880 ways to order one of America’s favorite families alongside one of America’s ugliest staircases.

Of course, you recognize the math here. It’s just 9 factorial. And while n-factorial grows pretty darn fast as n grows, these values pose little to no challenge for computing devices. OpenClinica happens to run on computing devices, so we have no problems with these values either. Combine that performance with our features for generating random numbers (or changing form attributes according to participant order or ID, or both), and you have all the tools you need to implement counterbalancing on any scale.
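If you'd like to check the arithmetic yourself, Python's standard library will happily confirm it (the list of names is just for fun):

```python
import itertools
import math

print(math.factorial(9))  # 362880 lineups for 9 Bradys

# The same count by brute force, just to prove the point.
bradys = ["Greg", "Marcia", "Peter", "Jan", "Bobby", "Cindy", "Mike", "Carol", "Alice"]
print(sum(1 for _ in itertools.permutations(bradys)))  # also 362880
```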

And that’s much more than a hunch.

Souvenirs from Baltimore (SCDM 2019)

Thank you to everyone who helped make SCDM 2019 another fantastic learning opportunity. We were delighted to catch up with old friends and make dozens of new ones. If you weren’t able to visit our booth, attend our product showcase, or catch our panel discussion on key performance indicators, don’t worry — we captured the insights for you. You can download articles, best practices, and more right from this page.

Register now for OC19: All Hands on Data


Register now!

 

This year, sail the seas of OC4 in Santander, Spain.

This year, it’s all about discovery and doing. We’ll spend our time together working directly in OC4: creating studies, building forms, and becoming familiar with the dozens of new features and enhancements that continue to make our current solution the solution data managers can rely on for performance, flexibility, and security.

Two days packed with 30- to 90-minute workshops on:

  • Multiple queries, multiple constraints, and item annotations
  • Hard edit checks
  • Moving from datamart to Insight
  • Insight for key performance indicators (KPIs)
  • The power of external lists
  • Collecting and safeguarding Protected Health Information (PHI)
  • OC4 APIs
  • Data imports
  • Single sign on
  • Conditional event scheduling
  • An early look at Form Designer
  • FAQ on OIDs
  • XPath functions every user should know
  • CDASH forms
  • Getting to SDTM

Want to take part in OC19 but can’t travel to Spain? Register and join us via webcast! (Super User Trainees must attend in person.)

All registrants will receive access to an OC4 sandbox study in advance of the conference.

Register now!

 

 

How’s your health? Introducing a mobile-friendly SF-12

We’ve blogged about ePRO form design and the power of real-time calculation previously. Today we invite you to experience both on one of the most thoroughly validated self-reported health instruments of the last 30 years: the SF-12.

Take it for a spin on your smartphone, tablet, or laptop. We’ve paginated this form to minimize the chance of missing any item and all but eliminate scrolling. Scoring algorithms built into the form deliver immediate results. Yet the simplicity and familiarity of the paper form remain.

Instruments like these will only grow in importance, as regulatory bodies and payers continue to call for more real-world evidence. These same stakeholders are also embracing digital: unlike paper, electronic forms capture the date and time of entry (helping to avert “parking lot syndrome”), and can even prompt a participant to revisit skipped items. The result is a dramatic increase in data quality and response rates, along with a concomitant reduction in delays and transcription costs.

Why wait for data, only to discover how compromised it is? Start building your smart, responsive ePRO forms now!

 

Complete this form on your smartphone! Just go to bit.ly/ocsf-12 on your device’s default browser.

It’s just a form. What’s to know?

If you’re new to clinical data management, that question is understandable. You’ve never had any trouble building surveys online, after all. You asked for numbers, and got numbers. Solicited preferences, got preferences. What difference should it make now that the data you need is medical?

Experienced clinical data managers know the answer all too well. Data that concerns the safety and efficacy of a treatment, or that’s meant to describe the course of a disease, is the informational equivalent of dynamite. Handled properly, it can open new avenues. Handled improperly, it can lead to disaster. In any case, how we collect this data is heavily regulated.

Don’t let your efforts to capture better data, faster, end in an explosion. We’ve produced The Ultimate eCRF Design Guide to help you build forms that will:

  • deliver the highest quality data
  • speed time to capture
  • enable the widest possible integration
  • facilitate robust and rapid analysis
  • make regulatory submissions smoother

There are tools for the newcomer and veteran within these pages, so register for free now, and be sure to subscribe to updates.

Stop, in the Name of Accuracy! An Introduction to Data Validation

Mistakes happen in the course of data entry. A research coordinator, intending to input a weight of 80 kilograms, leaves the field before striking the “0” key. Her colleague, completing a field for temperature, enters a value of 98, forgetting that the system expects the measurement in Celsius. But no adult enrolled in a clinical study weighs 8 kilograms. And the patient with a body temp of 98 degrees Celsius? “Fever” is something of an understatement.

Left standing, errors like the ones above distort analysis. That’s why data managers spend so much time reviewing submitted data for reasonableness and consistency. What if it were possible to guard against introducing error in the first place? With electronic forms, it is possible.

“Edit checks,” sometimes called “constraints” or “validation,” automatically compare inputted values with criteria set by the form builder.  The criteria may be a set of numerical limits, logical conditions, or a combination of the two. If the inputted value violates any part of the criteria, a warning appears, stating why the input has failed and guiding the user toward a resolution (without leading her toward any particular replacement).

Edit checks may be simple or complex; evaluate a single item or a group of related items; prevent the user from moving on or simply raise a flag. You can learn all about these differences below. The goals of edit checks are universal: higher data quality right from the start!
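To make the idea concrete, here's a minimal sketch of two univariate checks in Python. The field names, limits, and messages are illustrative only; in practice you'd express the same criteria in your form definition rather than in application code:

```python
def check_weight_kg(value):
    """Univariate range check on adult weight, in kilograms."""
    if not 30 <= value <= 250:
        return "Weight must be between 30 and 250 kg. Please confirm the value and units."
    return None  # None means the value passes the check

def check_temperature_c(value):
    """Univariate range check on body temperature, in degrees Celsius."""
    if not 34 <= value <= 42:
        return "Temperature must be between 34 and 42 °C. Was the value recorded in Fahrenheit?"
    return None

for entered in (8, 80):
    print(entered, "->", check_weight_kg(entered))
```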

Check yourself

Setting edit checks appropriately is all about balance. Place too many checks, or impose ranges that are especially narrow, and you’ll end up raising alarms for a lot of data that’s perfectly valid. That will slow down research coordinators who simply want to get you the data you need. Place too few checks, or allow any old values, and you’ll open the gates to a flood of nonsensical data. You or a data manager colleague will then need to address this data with the clinical site after it’s submitted. Wait too long, and you could discover that the site can’t determine what led to the error in the first place.

While there’s no exact formula for striking the right balance, there are guidelines. Any value that could signal a safety issue ought to receive a lot of scrutiny. For example, in a study investigating a compound known to impact kidney function, you’ll want to place careful constraints around an item asking for a glomerular filtration rate. The same goes for measures tied to eligibility or constitutive of primary endpoints. On the other hand, it doesn’t make sense to enforce a value for height that’s within 10% of a population mean. Moderately short and tall people enroll in studies, too!

Variety is the spice of edit checks

All edit checks share the common objective of cleaner data at the point of entry. They also share a rigorous and logical method. Input is either valid or not, and the determination is always objective. Beyond this family resemblance, though, edit checks differ in their scope and effects.

Hard vs. soft

Hard edit checks prevent the user entering data from proceeding to the next item or item group. Note that a validated system will never expunge a value once submitted, even if it violates a hard check. Rather, it will automatically append a query to the invalid data. Until the query is resolved, the form user won’t be able to advance any further on the form.

Soft edit checks, by contrast, allow the user to continue through the form. However, the user won’t be able to mark the form complete until the query attached to the check is resolved.

Hard and soft edit checks each have their place. If an out of range value would preclude further study activities, a hard edit check may be justified, as it sends a conspicuous “stop and reassess” message to the clinical research coordinator. Where an invalid piece of data is likely to represent a typo or misunderstanding (e.g. a height of 6 meters as opposed to 6 feet entered on a physical exam form), a soft edit check is preferable.

Univariate vs. multivariate

Univariate edit checks evaluate input against range or logical constraints for a single item–for example, the value for Height, in inches, must be between 48 and 84.

Multivariate edit checks, by contrast, place constraints on the data inputted for two or more fields. “If, then” expressions often power these checks: if field A is selected, or holds a value within this range, then field B must meet some related set of criteria. If a form user indicates a history of cancer for a study participant, a related field asking for a diagnosis will fire its edit check if a cancer diagnosis isn’t provided.

When input fails to meet a multivariate edit check, it’s important for the warning message to state which item values are responsible for the conflict. Suppose a research coordinator enters “ovarian cyst” on a medical history form for a participant previously identified as male. A well-composed error message on the medical history item will refer the user to the field for sex.
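Here's a rough sketch of that kind of multivariate check in Python, reusing the two examples above. The record layout and field names are hypothetical:

```python
def check_medical_history(record):
    """Return a warning naming every field involved in the conflict, or None."""
    if record.get("cancer_history") == "yes" and not record.get("cancer_diagnosis"):
        return ("A cancer history is recorded but no diagnosis is given. "
                "Please complete the diagnosis item or revise the history item.")
    if record.get("diagnosis") == "ovarian cyst" and record.get("sex") == "male":
        return ("The diagnosis 'ovarian cyst' conflicts with the recorded sex ('male'). "
                "Please review both the diagnosis item and the field for sex.")
    return None

print(check_medical_history({"sex": "male", "diagnosis": "ovarian cyst"}))
```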

Standard vs. protocol-specific

Standard edit checks, such as those placed on items for routine vital signs, do not vary from study to study. Their value lies in their re-usability. Consider a check placed on the item for body temperature within a Visit 1 Vital Signs form; one, say, that sets a range between 96 and 100 degrees Fahrenheit. That check can follow that item from form to form, just as the form may be able to follow the Visit 1 event from study to study. There is no experimental reason to set a range wider or narrower than this commonly expected one.

A protocol-specific edit, by contrast, enforces on an item a limit or threshold dictated by the protocol. Imagine a study to determine the reliability of a diagnostic tool for prostate cancer in men at least 50 years old. The eligibility form for such a study will likely include protocol-specific edit checks on the items for participant sex and date of birth. Or consider an infectious disease study whose patient population requires careful monitoring of their ALT value. In this context, a value that’s just slightly above normal may indicate an adverse event, so the acceptable range would be set a lot narrower than it would be for, say, an ophthalmological study.
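A protocol-specific check for that hypothetical prostate-cancer study might look something like this sketch (the field names and the rough age calculation are illustrative, not taken from any real eligibility form):

```python
from datetime import date

def eligibility_warnings(sex, date_of_birth, screening_date):
    """Protocol-specific checks: male participants, at least 50 years old."""
    warnings = []
    if sex != "male":
        warnings.append("Protocol requires male participants.")
    age = (screening_date - date_of_birth).days // 365  # approximate age in years
    if age < 50:
        warnings.append("Protocol requires participants aged 50 or older.")
    return warnings

print(eligibility_warnings("male", date(1985, 3, 14), date(2019, 10, 1)))
# ['Protocol requires participants aged 50 or older.']
```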

Query, query (when data’s contrary)

A research coordinator who enters invalid data may not know how to correct their input, even with the guidance of the warning message. Or their input may be perfectly accurate and intended, while still falling outside the range encoded by the edit check. In these cases, your EDC should generate a query on the item. Queries are virtual “red flags” that attend any piece of data that either:

  • fails to meet the item’s edit check criteria
  • raises questions for the data manager or data reviewer

The first kind of query, called an “auto-query,” arises from programming. The system itself flags the invalid data and adds it to the log of queries that must be resolved before the database can be considered locked. The second kind of query, called a “manual query,” starts when a human, possessing contextual knowledge the system lacks, indicates her skepticism concerning a value. Like auto-queries, manual queries must be resolved before the database can be locked.

To resolve or “close” an auto-query, the user who first entered the invalid data (or another study team member at the clinical site) must either:

  • submit an updated value that meets the edit check criteria
  • communicate to the data manager that the flagged data is indeed accurate, and should stand

The data manager may close a query on data that violates an edit check. In these cases, she is overriding the demands of the validation logic, but only after careful consideration and consultation with the site.

To resolve a manual query, the site user and data manager engage in a virtual back and forth–sometimes short, sometimes long–to corroborate the original value or arrive at a correction. A validated EDC will log each question posed and answered during this exchange, so that it’s possible to reconstruct when and why the value under consideration changed as a result.

Resolving a query isn’t just a matter of removing the red flag. If the data manager accepts the out of range value, she must indicate why. If the research coordinator inputs a corrected value, she too must supply a reason for the change as part of the query back and forth. The goal is to arrive at the truth, not “whatever fits.”


In praise of skip logic


“Please answer the following three questions if you answered ‘yes’ above.”

Instructions like the one above are common on paper forms. A biological male, for example, won’t have a history of pregnancy. Asking him questions on this topic wastes his time, contributes to survey fatigue, and makes him more likely to abandon the form. When a user is forced to chart their own way through a form, the chances of missing a critical question increase. Meanwhile, some portions of your forms will be destined for irrelevance, which is a waste of paper and an encumbrance on the data compiler responsible for sifting through completed and skipped boxes.

Enter electronic case report forms with skip logic. As with scores and calculations, skip logic takes advantage of the digital computer’s biggest strength: its ability to compute. Here, instead of summing or dividing values, the form evaluates the truth of a logical expression. Did the user either select ‘Yes’ for item one or check the box for ‘B’ on item two? Did she include a cancer diagnosis on last month’s medical history form? (Variables in skip conditions can range over multiple forms.) Form logic can deduce the answer instantly, and behave differently depending on what that answer is: showing or hiding an additional item, further instructing the coordinator or participant, or alerting the safety officer. The results? A better user experience, cleaner data capture, and no wasted screen space!

Combating survey fatigue

Even the most diligent research coordinator can find herself procrastinating when it comes to data entry, only to battle frustration when she does finally attempt to tackle the study’s eCRF. The same is true of patients, who often face the additional challenge of poor health. This resistance to starting and completing forms is called survey fatigue, and it’s very real. Survey fatigue slows the pace of data acquisition and undermines quality, as respondents consciously or subconsciously overlook items or supply their “best guesses” simply to finish the work. As data managers and form builders, we need to consider the respondent’s experience at all times. This includes asking only for information required for analysis and possessed by the researcher or study participant. Never ask these respondents to perform a task more ably performed through form logic and calculation. That includes applying skip logic to ensure that all of the questions we ask are relevant!

Let’s get logical!

Skip logic (also known as branching logic, conditional logic, or skip patterns) is the technical name for a common type of command:

If this is true, then do that; otherwise, do this other thing.

What this refers to may be simple; for example, the user selecting ‘yes’ on a yes-or-no question. Alternatively, this might be quite complex; for example, the user selecting both B and C, but not D, on item 10, or else responding ‘no’ to at least five of items 11 through 20. This is always either true or false, depending on whether the input conforms to an expression the data manager has written in Boolean algebra.

Because this is either true or false (and not both), either that or the other thing must occur (and not both). As with the conditions for this, it’s up to the form builder to state what that and the other thing are. Usually, she will try to model the protocol with this conditional command. For example: “Take this measurement. If it falls within this range, take this additional measurement. Otherwise, proceed to the next assessment.”
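Here's a small Python sketch of both the protocol-style command and the more complex condition described above. The item names, the range, and the follow-up actions are all made up for illustration:

```python
def next_step(systolic_bp):
    """Protocol-style command: repeat the measurement if it falls in a range of concern."""
    if 140 <= systolic_bp <= 180:
        return "show repeat-measurement item"
    return "proceed to next assessment"

def show_followup(item10_selections, items_11_to_20):
    """The complex condition: (B and C but not D on item 10) OR
    at least five 'no' answers across items 11 through 20."""
    picked = set(item10_selections)
    return (({"B", "C"} <= picked and "D" not in picked)
            or sum(answer == "no" for answer in items_11_to_20) >= 5)

print(next_step(155))                                       # show repeat-measurement item
print(show_followup(["B", "C"], ["no"] * 6 + ["yes"] * 4))  # True
```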

The form below provides examples of skip logic with increasing complexity. See if you can recognize the conditional command behind each one.

 

Boolean algebra: the rules at the heart of skip logic

At the foundation of all digital electronics lie three elementary ideas: AND, OR, and NOT. These are the basic operators of Boolean algebra. But they’re not just for circuit design. Facility with these operators is a must for anyone who wants to design a form that’s truly responsive to input. If you’re new to these concepts, check out this helpful video from PBS LearningMedia. From there, you can learn how to evaluate a Boolean expression for different inputs using truth tables. Finally, you’ll be able to write your own Boolean expressions, even very complex ones, to “program” your form to behave in certain ways based on certain inputs.
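If you'd like to see the three operators in action before diving into those resources, a few lines of Python will print their truth table:

```python
from itertools import product

# Truth table for the three elementary Boolean operators.
print("a      b      a AND b   a OR b    NOT a")
for a, b in product([False, True], repeat=2):
    print(f"{str(a):<6} {str(b):<6} {str(a and b):<9} {str(a or b):<9} {str(not a)}")
```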

Thoughts on eligibility forms in the Journal for Clinical Studies

Eligibility is more than a checklist. It’s an essential instrument for patient safety and study integrity.  Recently, the Journal for Clinical Studies invited OpenClinica to share recommendations for eligibility forms that could best deliver that safety and integrity. We were honored to contribute to their most recent issue. Take a read and let us know what you think!

page 1 of a journal article on eligibility forms