How’s your health? Introducing a mobile-friendly SF-12

We’ve blogged about ePRO form design and the power of real-time calculation previously. Today we invite you to experience both on one of the most thoroughly validated self-reported health instruments of the last 30 years: the SF-12.

Take it for a spin on your smartphone, tablet, or laptop. We’ve paginated this form to minimize the chance of missing any item and all but eliminate scrolling. Scoring algorithms built into the form deliver immediate results. Yet the simplicity and familiarity of the paper form remain.

Instruments like these will only grow in importance, as regulatory bodies and payers continue to call for more real-world evidence. These same stakeholders are also embracing digital: unlike paper, electronic forms capture the date and time of entry (helping to avert “parking lot syndrome”), and can even prompt a participant to revisit skipped items. The result is a dramatic increase in data quality and response rates, along with a concomitant reduction in delays and transcription costs.

Why wait for data, only to discover how compromised it is? Start building your smart, responsive ePRO forms now!


Complete this form on your smartphone! Just go to bit.ly/ocsf-12 on your device’s default browser.

It’s just a form. What’s to know?

If you’re new to clinical data management, that question is understandable. You’ve never had any trouble building surveys online, after all. You asked for numbers, and got numbers. Solicited preferences, got preferences. What difference should it make now that the data you need is medical?

Experienced clinical data managers know the answer all too well. Data that concerns the safety and efficacy of a treatment, or that’s meant to describe the course of a disease, is the informational equivalent of dynamite. Handled properly, it can open new avenues. Handled improperly, it can lead to disaster. In any case, how we collect this data is heavily regulated.

Don’t let your efforts to capture better data, faster, end in an explosion. We’ve produced The Ultimate eCRF Design Guide to help you build forms that will:

  • deliver the highest quality data
  • speed time to capture
  • enable the widest possible integration
  • facilitate robust and rapid analysis
  • make regulatory submissions smoother

There are tools for the newcomer and veteran within these pages, so register for free now, and be sure to subscribe to updates.

Stop, in the Name of Accuracy! An Introduction to Data Validation

Mistakes happen in the course of data entry. A research coordinator, intending to input a weight of 80 kilograms, leaves the field before striking the “0” key. Her colleague, completing a field for temperature, enters a value of 98, forgetting that the system expects the measurement in Celsius. But no adult enrolled in a clinical study weighs 8 kilograms. And the patient with a body temp of 98 degrees Celsius? “Fever” is something of an understatement.

Left standing, errors like the ones above distort analysis. That’s why data managers spend so much time reviewing submitted data for reasonableness and consistency. What if it were possible to guard against introducing error in the first place? With electronic forms, it is possible.

“Edit checks,” sometimes called “constraints” or “validation,” automatically compare inputted values with criteria set by the form builder.  The criteria may be a set of numerical limits, logical conditions, or a combination of the two. If the inputted value violates any part of the criteria, a warning appears, stating why the input has failed and guiding the user toward a resolution (without leading her toward any particular replacement).

Edit checks may be simple or complex; evaluate a single item or a group of related items; prevent the user from moving on or simply raise a flag. You can learn all about these differences below. The goals of edit checks are universal: higher data quality right from the start!
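
To make the idea concrete, here’s a minimal sketch in Python of a range check like the weight example above. It’s purely illustrative, not OpenClinica’s rule syntax, and the limits are hypothetical.

    # Illustrative Python, not OpenClinica's rule syntax: a univariate
    # range check with a guiding (non-leading) warning message.
    def check_weight_kg(value):
        if not 30 <= value <= 250:  # hypothetical limits for an adult study
            return "Weight must be between 30 and 250 kg. Please verify against the source document."
        return None  # no message: the value passes

    print(check_weight_kg(8))   # fires: the missing "0" from our example is caught at entry
    print(check_weight_kg(80))  # None: the intended value passes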

Check yourself

Setting edit checks appropriately is all about balance. Place too many checks, or impose ranges that are especially narrow, and you’ll end up raising alarms for a lot of data that’s perfectly valid. That will slow down research coordinators who simply want to get you the data you need. Place too few checks, or allow any old values, and you’ll open the gates to a flood of nonsensical data. You or a data manager colleague will then need to address this data with the clinical site after it’s submitted. Wait too long, and you could discover that the site can’t determine what led to the error in the first place.

While there’s no exact formula for striking the right balance, there are guidelines. Any value that could signal a safety issue ought to receive a lot of scrutiny. For example, in a study investigating a compound known to impact kidney function, you’ll want to place careful constraints around an item asking for a glomerular filtration rate. The same goes for measures tied to eligibility or constitutive of primary endpoints. On the other hand, it doesn’t make sense to enforce a value for height that’s within 10% of a population mean. Moderately short and tall people enroll in studies, too!

Variety is the spice of edit checks

All edit checks share the common objective of cleaner data at the point of entry. They also share a rigorous and logical method. Input is either valid or not, and the determination is always objective. Beyond this family resemblance, though, edit checks differ in their scope and effects.

Hard vs. soft

Hard edit checks stop the user who entered the data from proceeding to the next item or item group. Note that a validated system will never expunge a value once submitted, even if it violates a hard check. Rather, it will automatically append a query to the invalid data. Until the query is resolved, the form user won’t be able to advance any further on the form.

Soft edit checks, by contrast, allow the user to continue through the form. However, the user won’t be able to mark the form complete until the query attached to the check is resolved.

Hard and soft edit checks each have their place. If an out of range value would preclude further study activities, a hard edit check may be justified, as it sends a conspicuous “stop and reassess” message to the clinical research coordinator. Where an invalid piece of data is likely to represent a typo or misunderstanding (e.g. a height of 6 meters as opposed to 6 feet entered on a physical exam form), a soft edit check is preferable.
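
One way to picture the difference is that the same failed check produces different behavior depending on its severity. The sketch below is illustrative Python, not a real form engine API; the Check class and its fields are hypothetical.

    # Illustrative Python: a failed check either blocks progress (hard)
    # or raises a query while allowing progress (soft).
    from dataclasses import dataclass

    @dataclass
    class Check:
        message: str
        hard: bool  # True: block until the query is resolved; False: flag and allow

    def on_failure(check):
        return "block progress" if check.hard else "raise query, allow progress"

    height = Check("Height must be 1.2-2.2 m. Did you enter feet?", hard=False)
    print(on_failure(height))  # raise query, allow progress (likely a typo)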

Univariate vs. multivariate

Univariate edit checks evaluate input against range or logical constraints for a single item. For example, the value for Height, in inches, must be between 48 and 84.

Multivariate edit checks, by contrast, place constraints on the data inputted for two or more fields. “If, then” expressions often power these checks: if field A is selected, or holds a value within this range, then field B must meet some related set of criteria. If a form user indicates a history of cancer for a study participant, a related field asking for a diagnosis will fire its edit check if a cancer diagnosis isn’t provided.

When input fails to meet a multivariate edit check, it’s important for the warning message to state which item values are responsible for the conflict. Suppose a research coordinator enters “ovarian cyst” on a medical history form for a participant previously identified as male. A well-composed error message on the medical history item will refer the user to the field for sex.
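
In code terms, a multivariate check is simply a rule that reads several fields at once, with a message that names the conflicting items. Here’s an illustrative Python sketch using the two examples above; the field names are hypothetical.

    # Illustrative Python with hypothetical field names: one check reads
    # several fields, and its warning names the conflicting items.
    def check_history(record):
        warnings = []
        if record.get("cancer_history") == "yes" and not record.get("cancer_diagnosis"):
            warnings.append("Cancer history is 'yes', but no diagnosis is recorded. See the diagnosis field.")
        if record.get("condition") == "ovarian cyst" and record.get("sex") == "male":
            warnings.append("'Ovarian cyst' conflicts with the recorded value for sex. See the sex field.")
        return warnings

    print(check_history({"condition": "ovarian cyst", "sex": "male"}))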

Standard vs. protocol-specific

Standard edit checks, such as those placed on items for routine vital signs, do not vary from study to study. Their value lies in their re-usability. Consider a check placed on the item for body temperature within a Visit 1 Vital Signs form; one, say, that sets a range between 96 and 100 degrees Fahrenheit. That check can follow that item from form to form, just as the form may be able to follow the Visit 1 event from study to study. There are no experimental reasons to set a range wider or narrower than this commonly expected one.

A protocol-specific edit, by contrast, enforces on an item a limit or threshold dictated by the protocol. Imagine a study to determine the reliability of a diagnostic tool for prostate cancer in men at least 50 years old. The eligibility form for such a study will likely include protocol-specific edit checks on the items for participant sex and date of birth. Or consider an infectious disease study whose patient population requires careful monitoring of their ALT value. In this context, a value that’s just slightly above normal may indicate an adverse event, so the acceptable range would be set a lot narrower than it would be for, say, an ophthalmological study.

Query, query (when data’s contrary)

A research coordinator who enters invalid data may not know how to correct their input, even with the guidance of the warning message. Or their input may be perfectly accurate and intended, while still falling outside the range encoded by the edit check. In these cases, your EDC should generate a query on the item. Queries are virtual “red flags” that attend any piece of data that either:

  • fails to meet the item’s edit check criteria
  • raises questions for the data manager or data reviewer

The first kind of query, called an “auto-query,” arises from programming. The system itself flags the invalid data and adds it to the log of queries that must be resolved before the database can be considered locked. The second kind of query, called a “manual query,” starts when a human, possessing contextual knowledge the system lacks, indicates her skepticism concerning a value. Like auto-queries, manual queries must be resolved before the database can be locked.

To resolve or “close” an auto-query, the user who first entered the invalid data (or another study team member at the clinical site) must either:

  • submit an updated value that meets the edit check criteria
  • communicate to the data manager that the flagged data is indeed accurate, and should stand

The data manager may close a query on data that violates an edit check. In these cases, she is overriding the demands of the validation logic, but only after careful consideration and consultation with the site.

To resolve a manual query, the site user and data manager engage in a virtual back and forth, sometimes short, sometimes long, to corroborate the original value or arrive at a correction. A validated EDC will log each question posed and answered during this exchange, so that it’s possible to reconstruct when and why the value under consideration changed as a result.

Resolving a query isn’t just a matter of removing the red flag. If the data manager accepts the out of range value, she must indicate why. If the research coordinator inputs a corrected value, she too must supply a reason for the change as part of the query back and forth. The goal is to arrive at the truth, not “whatever fits.”
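
Conceptually, a query is a small record with an origin, an audit log, and a resolution that always carries a reason. The sketch below is illustrative Python, not any EDC’s actual data model; a validated system would also timestamp every entry.

    # Illustrative Python: a query keeps its origin and a full audit
    # log, and can only close with a stated reason.
    from dataclasses import dataclass, field

    @dataclass
    class Query:
        item: str
        origin: str  # "auto" (edit check) or "manual" (human reviewer)
        log: list = field(default_factory=list)
        closed: bool = False

        def add(self, author, text):
            self.log.append((author, text))

        def close(self, reason):
            self.add("data_manager", f"Closed: {reason}")
            self.closed = True

    q = Query(item="temperature", origin="auto")
    q.add("site", "Confirmed against source; device reports Fahrenheit.")
    q.close("Original value accurate; unit clarified with site.")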

See some example edit checks in action

In praise of skip logic


“Please answer the following three questions if you answered ‘yes’ above.”

Instructions like the above are common on paper forms. A biological male, for example, won’t have a history of pregnancy. Asking him questions on this topic wastes his time, contributes to survey fatigue, and makes him more likely to abandon the form. When a user is forced to chart their own way through a form, the chances of missing a critical question increase. Meanwhile, some portions of your forms will be destined for irrelevance, which is a waste of paper and an encumbrance on the data compiler responsible for sifting through completed and skipped boxes.

Enter electronic case report forms with skip logic. As with scores and calculations, skip logic takes advantage of the digital computer’s biggest strength: its ability to compute. Here, instead of summing or dividing values, the form evaluates the truth of a logical expression. Did the user either select ‘Yes’ for item one or check the box for ‘B’ on item two? Did she include a cancer diagnosis on last month’s medical history form? (Variables in skip conditions can range over multiple forms.) Form logic can deduce the answer instantly, and behave differently depending on what that answer is: showing or hiding an additional item, further instructing the coordinator or participant, or alerting the safety officer. The results? A better user experience, cleaner data capture, and no wasted screen space!
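
At its simplest, skip logic is a relevance rule: show an item group only when a condition on earlier answers is true. A minimal Python sketch, with hypothetical field names:

    # Illustrative Python: an item group is shown only when its
    # relevance condition evaluates to True.
    def pregnancy_items_relevant(answers):
        return answers.get("sex") == "female"

    answers = {"sex": "male"}
    print("show" if pregnancy_items_relevant(answers) else "skip")  # skip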

Combating survey fatigue

Even the most diligent research coordinator can find herself procrastinating when it comes to data entry, only to battle frustration when she does finally attempt to tackle the study’s eCRF. The same is true of patients, who often face the additional challenge of poor health. This resistance to starting and completing forms is called survey fatigue, and it’s very real. Survey fatigue slows the pace of data acquisition and undermines quality, as respondents consciously or subconsciously overlook items or supply their “best guesses” simply to finish the work. As data managers and form builders, we need to consider the respondent’s experience at all times. This includes asking only for information required for analysis and possessed by the researcher or study participant. Never ask these respondents to perform a task more ably performed through form logic and calculation. That includes applying skip logic to ensure that all of the questions we ask are relevant!

Let’s get logical!

Skip logic (also known as branching logic, conditional logic, or skip patterns) is the technical name for a common type of command:

If this is true, then do that; otherwise, do this other thing.

What “this” refers to may be simple; for example, the user selecting ‘yes’ on a yes-or-no question. Alternatively, “this” might be quite complex; for example, the user selecting both B and C, but not D, on item 10, or else responding ‘no’ to at least five items between items 11 and 20. “This” is always either true or false, depending on whether the input conforms to an expression the data manager has written with Boolean algebra.

Because “this” is either true or false (and not both), either “that” or “the other thing” must occur (and not both). As with the conditions for “this,” it’s up to the form builder to state what “that” and “the other thing” are. Usually, she will try to model the protocol with this conditional command. For example: “Take this measurement. If it falls within this range, take this additional measurement. Otherwise, proceed to the next assessment.”
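
To see how even the complex example above reduces to a single true-or-false expression, here’s an illustrative Python sketch. The item names and response encoding are hypothetical.

    # Illustrative Python: the complex condition above as one Boolean expression.
    def condition(answers):
        item10 = answers["item10"]  # the set of options selected on item 10
        clause1 = {"B", "C"} <= item10 and "D" not in item10
        clause2 = sum(1 for i in range(11, 21) if answers.get(f"item{i}") == "no") >= 5
        return clause1 or clause2   # "this": always either true or false

    answers = {"item10": {"B", "C"}, **{f"item{i}": "yes" for i in range(11, 21)}}
    print("do that" if condition(answers) else "do the other thing")  # do that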

The form below provides examples of skip logic with increasing complexity. See if you can recognize the conditional command behind each one.


Boolean algebra: the rules at the heart of skip logic

At the foundation of all digital electronics lie three elementary ideas: AND, OR, and NOT. These are the basic operators of Boolean algebra. But they’re not just for circuit design. Facility with these operators is a must for anyone who wants to design a form that’s truly responsive to input. If you’re new to these concepts, check out this helpful video from PBS LearningMedia. From there, you can learn how to evaluate a Boolean expression for different inputs using truth tables. Finally, you’ll be able to write your own Boolean expressions, even very complex ones, to “program” your form to behave in certain ways based on certain inputs.
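
Once you know the operators, a truth table is just an exhaustive evaluation of an expression over every combination of inputs. A quick Python sketch, here for the expression (A AND B) OR (NOT C):

    # Build a truth table by evaluating the expression for all inputs.
    from itertools import product

    print("A     B     C     -> result")
    for A, B, C in product([False, True], repeat=3):
        print(f"{A!s:5} {B!s:5} {C!s:5} -> {(A and B) or (not C)}")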

Thoughts on eligibility forms in the Journal for Clinical Studies

Eligibility is more than a checklist. It’s an essential instrument for patient safety and study integrity.  Recently, the Journal for Clinical Studies invited OpenClinica to share recommendations for eligibility forms that could best deliver that safety and integrity. We were honored to contribute to their most recent issue. Take a read and let us know what you think!


What’s the score? Real-time scoring of the PHQ-9


Heart disease. Lung cancer. Type II diabetes.

You. Me. The barista at the coffee shop.

For all the differences between the diseases above, each presents, if it does present, with a certain severity. For all the varied experiences of the people above, each bears some risk of developing the diseases. How do we evaluate that severity? How do we measure the risk? The answer is with a score.

What is a score?

A score is a value on an ordinal scale, used to classify the severity of a condition or to predict its future course. (Only a rigorous validation study can establish if and how well the score predicts.)  Instruments for generating scores take more basic measures like weight, blood pressure, or the presence or absence of some biomarker as their inputs, then combine these inputs in mathematically explicit ways. Crucially, a given score is calculated the same way from setting to setting and study to study, thus endowing the score with universal meaning.

Why are scores useful?

Scores characterize, classify, and predict. In cases of trauma or disease, a score (or a stage, or a grade) is what makes prognoses and treatment decisions possible. With scores so essential to clinical practice, it’s hardly surprising we encounter them so frequently in research. Eligibility criteria may set bounds to acceptable scores, to ensure safety or to tailor the investigation to a particular patient profile. A change in score over time may represent a primary outcome, suggesting a therapy’s superiority or inferiority to some comparator in reducing a disease burden or improving quality of life.

Scores are not comprehensive descriptions of a patient’s disease, much less of the patient herself. They are never perfectly predictive.  They are quantitative heuristics whose success in classifying the stage or severity of a disease, or in predicting the risk of its development, has been established through statistical studies.

How do researchers calculate scores?

Most (not all) scores are matters of simple arithmetic. Measure A, B, and C, then add them together. If D is present, add one. If D is absent, subtract one. A researcher mentally calculating a score for one patient in a calm, quiet setting stands a good chance of doing so correctly. But as inputs grow larger (34 x 72, say, instead of 4 + 9), so does the chance of a miscalculation. Asked to perform mental calculations again and again, for dozens of study participants, the researcher is all but guaranteed to make at least one mistake.
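
The toy rule above (“add A, B, and C; add one if D is present, subtract one if absent”) looks like this when made explicit, here as a small Python sketch:

    # The toy scoring rule from the paragraph above, made explicit.
    def toy_score(a, b, c, d_present):
        total = a + b + c
        return total + 1 if d_present else total - 1

    print(toy_score(2, 3, 1, d_present=True))   # 7
    print(toy_score(2, 3, 1, d_present=False))  # 5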

How does EDC make working with scores easier?

When it comes to computation, it’s hard to beat, well, a computer. Collecting data electronically facilitates rapid, accurate operations on that data. When data capture is web-based, the calculations may be shuttled between the clinical site and data management office almost instantly. That rapid exchange optimizes every stage of trial conduct. Real-time scoring at screening visits can stratify participants into cohorts. Scores that signal an adverse event can immediately trigger workflows for stabilizing the participant and submitting safety reports. When the time comes to analyze results, a portion of the statistical labor is already done.

Can you give me an example?

I was hoping you’d ask. Below you’ll find two presentations of the Patient Health Questionnaire-9 (PHQ-9), an instrument for screening, diagnosing, monitoring and measuring the severity of depression. The first presentation assumes out-of-clinic, ePRO use, where study managers expect most participants to respond on their own smartphone. The form will render on any web-enabled device, but the pagination is set to display one item at a time, for ease-of-use on smaller screens. The second presentation assumes in-clinic use on a tablet. Depending on the protocol, the researcher may administer the PHQ-9 through an interview with the participant, or the participant may complete the questionnaire on her own.  In both cases, best practices in ePRO form design dictated layout and behavior.

OpenClinica forms support real-time scoring through syntax any data manager can learn quickly. Data managers have complete control over when and if the score is displayed on the form, rather than simply saved to the database. Scores may be built into forms for patient-reported outcomes (ePRO) as easily as they are for clinic visit forms. And conditional logic can trigger additional questions or other workflows based on specific scores.
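
For the curious, here’s what PHQ-9 scoring amounts to under the hood: nine items, each answered 0 to 3, summed to a 0-27 total and mapped to the instrument’s published severity bands. The sketch below is illustrative Python, not the form’s actual expression syntax.

    # Illustrative Python: PHQ-9 total score with published severity bands.
    def phq9(items):
        assert len(items) == 9 and all(v in (0, 1, 2, 3) for v in items)
        total = sum(items)
        for limit, label in [(4, "minimal"), (9, "mild"), (14, "moderate"),
                             (19, "moderately severe"), (27, "severe")]:
            if total <= limit:
                return total, label

    print(phq9([1, 2, 1, 0, 2, 1, 1, 0, 2]))  # (10, 'moderate')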

So give your study’s major eClinical system a chance to do what it does best, and score one for speed and accuracy.


Click here to view a tablet-based presentation of the PHQ-9.

Need a Glomerular Filtration Rate? Let the form do the math!

When it comes to math, the modern eCRF is no slouch. You’ve seen OpenClinica forms add, subtract, multiply, and divide values at lightning speed. But these operations barely scratch the surface.

OC4’s form engine supports a wide array of mathematical functions, from arcsines to exponents, all defined by clear, human-readable expressions. What does this mean for your clinical trial? As the data manager, you can help your site users derive complex measures, like the one below, with a consistent, error-free method. Meanwhile, site users never need to launch their calculator app or recall the right formula to apply.

The example below relies on a combination of “if then” logic and calculation to (1) identify the formula relevant to the participant’s race, sex, and serum creatinine, and (2) apply the formula to the input supplied by the user. The form completes both of these tasks in milliseconds. Give it a try below!

Did you know? Glomerular filtration rate (GFR) is an estimate of how much blood passes through the glomeruli of the kidneys each minute. It is an indicator of kidney function. For adults between the ages of 20 and 59, a GFR below 90 mL/min/1.73 m² may suggest kidney disease, depending on whether other signs of kidney damage (e.g. protein in urine) are present. (Source: “Glomerular Filtration Rate: A Key to Understanding How Well Your Kidneys Are Working” from the National Kidney Foundation)
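
For illustration, here’s one widely used eGFR formula, the IDMS-traceable MDRD study equation, sketched in Python. It factors in age as well as sex, race, and serum creatinine; we’re not asserting it is the exact formula set built into the form above, only showing the kind of branching calculation involved.

    # Illustrative Python: the IDMS-traceable MDRD study equation.
    def egfr_mdrd(scr_mg_dl, age_years, female, black):
        gfr = 175 * (scr_mg_dl ** -1.154) * (age_years ** -0.203)
        if female:
            gfr *= 0.742
        if black:
            gfr *= 1.212
        return round(gfr, 1)  # mL/min/1.73 m²

    print(egfr_mdrd(1.2, 54, female=True, black=False))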

You can find an authoritative compendium of measures like this at MDCalc, all of which can be implemented in OpenClinica forms.


Customers, download the form here. Requires sign in.


Would you like to see a different measure in an OpenClinica form? Leave a comment below!

Save the Date: November 7 and 8 in Santander, Spain

Mark your calendars! This year’s annual gathering will take place on Thursday, November 7 and Friday, November 8 in Santander, Spain. Super User training will be offered from Monday, November 4 through Wednesday, November 6. (For those mostly or entirely unfamiliar with OC4, Super User Training is an effective way to master the fundamentals of our solution before diving into the advanced use cases we’ll cover on Thursday and Friday.)

This year, it’s all about discovery and doing. We’ll spend our time together working directly in OC4: creating studies, building forms, and becoming familiar with the dozens of new features and enhancements that continue to make our current solution the solution data managers can rely on for performance, flexibility, and security.

Details are still coming together. Here are the basics:

  • Anyone wishing to take part in OC19 will be able to do so in person or online.
  • Registrants will receive access to an OC4 sandbox study in advance of the conference.

Interested in a special use case or how-to? Email bfarrow@openclinica.com.

The Four Criteria of a Perfect Eligibility Form: A Success Guide

Looking for another success guide? See our guides on cross-form intelligence, date formatting, ePRO form design, and site performance reporting.

In the months ahead, Journal for Clinical Studies will publish a detailed guide to designing eligibility forms–a guide authored by OpenClinica! The complete contents are embargoed until they appear (for free) on the journal’s website. As soon as it’s published, we’ll provide a link to it here. In the meantime, here’s a brief excerpt and an interactive form illustrating one of the guide’s four core principles, “Make your forms carry out the logic.”

from The Four Criteria of a Perfect Eligibility Form: A Success Guide, forthcoming in Journal for Clinical Studies

Think a moment about the human brain. Specifically, think about its capacity to carry out any logical deduction without flaw, time and again, against a background of distractions, and even urgent medical issues.

It doesn’t have the best track record.

Even the most logical research coordinator could benefit from an aid that parses all of the and’s, or’s, and not’s scattered throughout your study’s eligibility criteria. A good form serves as that aid. Consider the following inclusion criteria, taken from a protocol published on clinicaltrials.gov.

Inclusion criterion #1 is straightforward enough. (Although even there, two criteria are compounded into one.) By contrast, there are countless ways of meeting, or missing, criterion #2. It’s easy to imagine a busy CRC mistaking some combination of metformin dose and A1C level as qualifying, when in fact it isn’t.

But computing devices don’t make these sorts of errors. All the software needs from a data manager is the right logical expression (e.g., criterion #2 is met if and only if A and B are both true, OR C and D are both true, etc.). Once that’s in place, the CRC can depend on your form to deliver perfect judgment every time. Best of all, that statement can live under the surface of your form. All the CRC needs to do is provide the input that corresponds to A, B, C, and D. The form then chops the logic instantly, invisibly, and flawlessly.
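
Here’s that pattern as an illustrative Python sketch; the clauses are placeholders, not the protocol’s actual criteria.

    # Illustrative Python: the CRC supplies the inputs; the form
    # evaluates the whole expression at once.
    def criterion_2(a, b, c, d):
        return (a and b) or (c and d)

    print(criterion_2(a=True, b=False, c=True, d=True))  # True: qualifies via C and D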

Test drive the form below to see a smart eligibility form in action. OpenClinica customers, be sure to visit the Eligibility section of the CRF library to download the form definition.

For more on designing forms that capture better data, faster, view our on-demand webinars from December 2018.