The Four Criteria of a Perfect Eligibility Form: A Success Guide

Looking for another success guide? See our guides on cross form intelligence, date formatting, ePRO form design, and site performance reporting. OpenClinica customers, be sure to visit the Eligibility section of the CRF library.

What do world-class universities and too many of today’s trials have in common?

They’re really hard to get into.

That’s as worrisome a problem for clinical researchers as it is for high school seniors. Stringent criteria make recruitment more difficult, a burden for sites, sponsors, and even statisticians. Even if enrollment does proceed apace, limitations crop up at the analysis stage: the stricter the qualifications, the less generalizable the results. The mantra is common by now: we need to open the gates wider if we want to gather robust, real-world evidence of how new treatments impact disease.

Yet inclusion and exclusion criteria aren’t going anywhere, especially in early phase trials. Investigators who fail to evaluate patients against eligibility criteria compound the risk that’s a part of all interventional trials. Good experimental science requires criteria, too. Imagine you’re launching a one-year study to test whether once-a-day ColdFreeze reduces occurrences of upper respiratory tract infection. You can’t give all your spots to those who suffered a lot of colds in the prior year. When the positive results pour in, it won’t be ColdFreeze that deserves the headlines, but regression to the mean.

Of course, most of us are quickly out of our depth when it comes to drafting criteria for a particular study. This is where biostatisticians shine. Eventually, though, those criteria need to inform study operations. How should we approach eligibility at the level of data management and workflow optimization? Here, I’ll offer some best practices and share an example eligibility form that puts them into action.

 

#1 Make eligibility a form.

We’ll start with the obvious. A form that comprises your inclusion and exclusion criteria does more than drive protocol compliance. It makes confirming eligibility easier, which saves monitoring time and cost. A well-designed eligibility eCRF also encourages your CRCs (clinical research coordinators) to review the criteria in a predetermined order: the order you’ve established to maximize efficiency.

 

#2 Fail early. (And cheaply.)

No one likes to invest a lot of time or money in a project that will eventually hit an impasse. But when it comes to trials, there’s more at stake than frustration or finances. Quickly disqualifying a potential participant frees up more time for the site to:

  • match that patient with suitable care or alternative research, and
  • screen more patients who may be a fit for your trial.

That’s a win for all parties. But how do you ensure that disqualification, if it is going to occur, occurs early in the evaluation? Here we’re faced with an optimization problem that includes three key variables:

  1. which criteria are most likely to disqualify a patient,
  2. which are the quickest to evaluate, and
  3. which are the most cost-effective to evaluate.

Let’s start by considering the happy case where those three properties coincide. Suppose that participation in your neuroscientific study is restricted to the chosen few of us who are left-handed. A CRC working on your study could disqualify 90% of randomly selected persons in the time it takes to witness a signature. The cost? A piece of paper. So there’s excellent reason to make the handedness criterion the first one assessed, and thus the first item on your form. (You’re not obliged to follow the protocol’s order of criteria. You just need to ensure that your form applies them all.)

Protocols often bury their most productive disqualifier deep within the exclusion list. That’s a rational strategy if evaluating that “buried” criterion is unduly time-consuming. Suppose that five 12-minute tests, conducted serially, each pose a 10% chance of disqualifying a study candidate. In that case, it makes sense to conduct those tests before an hour-long one, even if that longer test carries a 40% chance of disqualification. In the first case, the chance of disqualification is actually 41% after an hour (1 – .9⁵). That’s not a big improvement by itself. But note also that the average time to reach a qualification decision using the five-test method is a little over 49 minutes. Think what could be accomplished in the 11 minutes saved per patient amassed over hundreds of patients.

But what if those five tests each cost $20 to conduct, while the single test with the 40% failure rate costs $60? Suddenly, that 1% boost becomes a lot less appealing. Why? Assume again that sites would conduct the five tests sequentially, stopping after the first failure. Sites would then need to test the average candidate a little more than four times. (10% of patients will take and fail exactly one test, 9% of patients will pass the first test but fail the second, etc. The average for all comers is 4.1 tests.) Ultimately, the site or sponsor would spend roughly 37% more in hard dollars ($82 versus $60 for the average candidate) using the five-test method to disqualify roughly the same number of candidates.
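The arithmetic above can be sketched in a few lines of Python. The probabilities, times, and costs are the ones from the running example, not real study figures:

```python
def expected_tests(p_fail: float, n: int) -> float:
    """Expected number of sequential tests when screening stops at the first failure."""
    total = 0.0
    for k in range(1, n + 1):
        total += k * (1 - p_fail) ** (k - 1) * p_fail  # disqualified at exactly test k
    total += n * (1 - p_fail) ** n  # passed all n tests
    return total

p_fail, n = 0.10, 5
p_disqualify = 1 - (1 - p_fail) ** n   # ≈ 0.41
avg_tests = expected_tests(p_fail, n)  # ≈ 4.10 tests per candidate
avg_minutes = 12 * avg_tests           # ≈ 49.1 minutes per candidate
avg_cost = 20 * avg_tests              # ≈ $81.90, versus $60 for the single test
```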

Generally, then, criteria that are likely to disqualify, as well as inexpensive and quick to evaluate, ought to come first. Costly, time-consuming, and less decisive criteria should fall further down the list. Inevitably, trade-offs will occur. Which should take priority: criteria that are easy to check and somewhat likely to disqualify, or difficult to check but very likely to disqualify? In these cases, perfect mathematical rigor may be impossible. Often, it’s not even necessary–most criteria can be assessed simply by consulting the patient’s chart. But thinking like an economist, even an amateur one, about how to fail early and cheaply could pay big dividends for everyone involved.

 

#3 Make your forms carry out the logic.

Speaking of failure, think a moment about the human brain. Specifically, think about its capacity to carry out any logical deduction without flaw, time and again, against a background of distractions and urgent medical issues.

It doesn’t have the best track record.

Research coordinators typically boast sharper than average minds, especially if they’re left-handed. But even they could benefit from a reliable aid in parsing all of the and’s, or’s, and not’s scattered throughout your study’s eligibility criteria. A good form serves as that aid. Consider the following inclusion criteria, taken from a protocol published on clinicaltrials.gov.

 

 

Inclusion criterion #1 is straightforward enough. (Although even there, two criteria are compounded into one.) By contrast, there are countless ways of meeting, or missing, criterion #2. It’s easy to imagine a busy CRC mistaking some combination of metformin dose and A1C level as qualifying, when in fact it isn’t.

But computing devices don’t make those errors. All the software needs from you is the right logical expression (e.g., criterion #2 is met if and only if A and B are both true, OR C and D are both true, etc.). Once that’s in place, the CRC can depend on your form to deliver perfect judgment every time. Best of all, that statement can live under the surface of your form. All the CRC needs to do is provide the input that corresponds to A, B, C, and D. The form then evaluates the logic instantly, invisibly, and flawlessly.
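In code, that logical expression amounts to a single boolean function. A minimal sketch (A through D are the placeholder inputs from the sentence above, not the protocol’s actual criteria):

```python
def criterion_2_met(a: bool, b: bool, c: bool, d: bool) -> bool:
    """Met if and only if (A and B) are both true, OR (C and D) are both true."""
    return (a and b) or (c and d)

print(criterion_2_met(True, True, False, False))   # True
print(criterion_2_met(True, False, True, False))   # False
```

The CRC never sees this function; the form collects the four inputs and renders only the verdict.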

The form snippets below show criterion #2 applied to four sets of inputs. Nowhere is the user asked to determine the truth of the compound statement built up out of those or’s and and’s. Rather, the form consults a truth table “behind the scenes” to return the result.

 

 

Form engines will vary in the syntax needed to build up these logical formulas. But the concepts themselves are either already familiar to you or easily grasped. So build reasoning into your forms and spare your sites all that deductive work!

 

#4 Move beyond ‘yes’ or ‘no.’

The criteria offered so far have placed a premium on in-clinic efficiency: getting to the right answer quickly for a particular study participant. But eligibility eCRFs can serve another goal. As long as your form is collecting the “logical inputs” as described above, you’ll eventually gather a mass of fine-grained data about study candidates who did not meet eligibility. And that data is worth your consideration if the protocol ever needs to be amended. Was the range set by the hemoglobin criterion broad enough? Is a medical history that precludes hypertension really necessary for evaluating safety and efficacy? If not, maybe version 2 can modify those criteria.

Of course, as with any statistical consideration, it’s easy to get in trouble. You (or, more likely, your biostatistician) will need to carefully consider whether any changes in eligibility will undermine your study’s ability to test its explicitly stated hypothesis. Recall, too, that if you’re working with data from only some of your disqualified candidates (i.e. those for whom the form was completed), that data may not be representative of all disqualified candidates, much less of the target population at large. In other words, more data isn’t always better, and there are regulations in place–valuable ones–to stop us from collecting data that isn’t germane to a study. It’s critical that you and your team hold these caveats in mind. But even aside from statistical inferences, collecting eligibility data “from the ground up” can reduce a good deal of doubt about whether a candidate really did meet or miss a criterion. That’s a tremendous boon to your monitors.

“Of course we need to uphold eligibility criteria. But is the form worth this much thought?” That’s an understandable response. But as data managers and study operations professionals, it’s up to us to squeeze every drop of efficiency and quality out of the work we do. We owe it to our sites and patients. Consent and eligibility constitute the foundation of their study journey. There’s no better place to start making good on our responsibility.

 

Bring the practices above to life with the example form below. For more on designing forms that capture better data, faster, view our on-demand webinars from December 2018.

Why you should make better forms your top data management resolution for 2019

Chances are you’ve already set personal goals for the new year. But have you set professional ones? If not, let me suggest the most meaningful data management resolution you can make for 2019.

“I will build better forms.”

Of all the aspects of eClinical, why rally around forms? For us, the answer is simple. Of all the tools in your toolbelt, optimized forms offer you the greatest leverage in capturing clean data promptly.

Just think about it. You don’t have control over the buzz of clinical and research activity at your sites. You don’t have control over source documents. And you can’t personally visit all your sites, train all your CRCs, or SDV all the items in your study.

So how do you bring order to the (mostly) controlled chaos of a clinical study? You encourage prompt entry of accurate data with forms that are smart, standardized, and, yes, even appealing. Think about what capable forms deliver at the point of entry and downstream:

  • Timely data entry from CRCs who are thrilled to use your beautiful eCRFs
  • More accurate data, thanks to specific, real-time edit check messages
  • Less missing data, thanks to sensible skip logic and clear instructions
  • Reduced SDV burden, as more and more of your clean, flexible forms become the source
  • Reduced time to database lock
  • Easier analysis, thanks to sophisticated “in form” scoring and calculations
  • Smoother submission, with CDISC-standardized exports

Don’t get us wrong. Tools that expedite study design and user management, fast and reliable system performance, rock-solid security – these are crucial too. But forms are where you, your CRCs, and your data live, day in and day out. So in terms of overall study success, the “ROI” on perfecting your forms is hard to beat.  

That’s why we’ll never take our eyes off this so-called fundamental. In fact, we devoted the last few months of 2018 to assembling the best thinking on forms. Not just our thinking, but yours, and that of experts. You can see what we’ve been up to by reading our blog series on cross-form logic or streaming our two December webinars. And we hope you’ll let us know what (in addition to better forms, of course) will change the clinical research landscape this year. Take the poll below!

 

View the webinars

Click here to sign into the webinar library.

Read the posts

Take the poll

Which of the following will make the biggest impact on clinical data in 2019?

Deep learning/AI
Risk-based monitoring
Enhanced security (e.g. blockchain… or quantum!)
eSource
Wearables

Master form design with these two on-demand webinars

Equipped with the right system, data managers today have more tools than ever before to capture high-quality data right at its source. But what can the “right system” do? And how should data managers deploy those capabilities to prompt accurate, efficient entry from site staff and participants?

We hosted two webinars this month to answer those questions. Now you can watch them on demand. In Kitchen Sink, you’ll spend an economical 30 minutes understanding how OpenClinica’s form capabilities – from cross-form intelligence to modern, multi-media question types – all work together to serve as the user’s partner in capturing better data, faster. In Good Form, we step back to understand the proper role of these capabilities (not all scales are Likerts!) and climb inside the heads of CRCs and participants to better craft our forms for these study VIPs.

Click either image below to sign into our on-demand webinar library. Then watch, share, and respond with comments to add your expertise to the conversation.

The Kitchen Sink: See everything OpenClinica forms can do for you

In just thirty minutes, explore all of OpenClinica’s form capabilities, each doing its part to ensure better data, faster! See how skip logic, autocomplete, clickable image maps, real-time edit checks, autosave and a LOT more all work together on beautiful UX to drive cleaner data from the start.

Recording now available. Enter our webinar library.

Download the kitchen sink form (customers only)

 

What can cross-form do for you? (Part 3 of 3)

In the examples given here and here, our cross-form logic depended on data with a known location. In the first case, we knew exactly which event, form, and item to turn to in order to retrieve participant sex and date of birth. In the second case, each of our event dates marked the start of a unique, one-time event, so “finding their address” within the database was a straightforward process.

Now where did I leave that item value?

But what happens when we need to reference data with an indeterminate location, supposing that it even exists? In these cases, we may need to walk around a remote neighborhood, comparing building shapes and sizes, before we find what we’re looking for.

Consider a study that requires drug cessation if a certain adverse event recurs within 90 days. For an Alzheimer’s study, that adverse event may be detection of ARIA (amyloid-related imaging abnormalities) on an MRI scan. Suppose that a second presentation of ARIA within 90 days of the first means that the participant must discontinue study drug. What better occasion could there be for “checking the records” than while reporting a new ARIA? Checking the records here means:

  • retrieving the start dates of any previous AE whose report indicated ARIA
  • calculating the days’ difference between the most recent of those dates and the new ARIA presentation date
  • showing an alert if that difference is less than 91 days
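The three steps above reduce to a short date comparison. Here is a sketch in Python rather than form-engine syntax (the function and field names are illustrative):

```python
from datetime import date

def aria_recurrence_alert(prior_aria_dates: list, new_aria: date) -> bool:
    """True if the new ARIA falls within 90 days of the most recent prior ARIA."""
    if not prior_aria_dates:
        return False  # no prior ARIA on record: nothing to compare against
    most_recent = max(prior_aria_dates)
    return (new_aria - most_recent).days <= 90

# The example from the screenshots: prior ARIA on 1-Nov-2018, new on 24-Jan-2019.
print(aria_recurrence_alert([date(2018, 11, 1)], date(2019, 1, 24)))  # True (84 days)
```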

It’s hardly complex, but a busy CRC working with dozens of participants in multiple trials may forget to follow the process. Cross-form logic, on the other hand, never forgets. The screenshots below depict the result of a third AE report for a single participant. Of the two previous reports, the first indicated detection of ARIA on 1-Nov-2018.  Because the newest ARIA, on 24-Jan-2019, falls within 90 days of the prior one, the form displays instructions to discontinue study drug. 

 

The AE log for this participant shows two reports. The first, documenting an AE on 1-Nov-2018, indicates ARIA.
Here the CRC is reporting a new presentation of ARIA, on 24-Jan-2019. That date is fewer than 91 days after the previous ARIA of 1-Nov-2018. As a result, the form displays the relevant instructions from the protocol.

Few questions are too complex for cross-form logic to answer and act upon. If you can state a rule in logical or mathematical terms, you can most likely implement it using a straightforward expression, no matter how many other forms you need to reference. The OpenDataKit library of XPath functions offers a wealth of tools you can combine to create smart, versatile forms that collaborate with researchers.  So don’t let your innovation stop with drug development or study design: carry it through to your forms!

What can cross-form do for you? (Part 2 of 3)

In the previous post, we presented a cross-form example of clinical data collected in one event factoring into the normal lab range for a subsequent event. But clinical data aren’t the only factors that drive decisions. When an event occurred may determine when it should happen next. Dosing visits provide a common example. Depending on the protocol, dosing might occur at precise intervals (e.g. exactly 21 days between doses) or within windows (e.g. at least 7 days and no more than 10 days from the previous dose). Your EDC system should be able to enforce either type of scheduling, by reading not only the dates entered into forms, but dates found in form and event metadata.

In the example illustrated below, the form makes calculations between the start of a current event (“Dosing Visit 2”) and the start of the previous visit (“Dosing Visit 1”). According to this imaginary protocol, no fewer than 7 and no more than 10 days may elapse between these two visits.

  • If dosing visit 2 occurs within this range, the form guides the site-based user on how to prepare the dose.
  • If dosing visit 2 has a start date fewer than 7 days after dosing visit 1, the form displays instructions not to proceed, and provides the earliest and latest start dates for the visit.
  • Finally, if dosing visit 2 has a start date greater than 10 days after dosing visit 1, the form displays instructions to submit a protocol deviation note.

All of these calculations and feedback take place instantaneously.
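The three branches above translate directly into code. A sketch, with Python standing in for your form engine’s expression syntax:

```python
from datetime import date, timedelta

def dosing_window_message(visit1_start: date, visit2_start: date) -> str:
    """Mirror of the three branches above for a 7-to-10-day window."""
    elapsed = (visit2_start - visit1_start).days
    if elapsed < 7:
        earliest = visit1_start + timedelta(days=7)
        latest = visit1_start + timedelta(days=10)
        return f"Do not proceed. Earliest start: {earliest}; latest start: {latest}."
    if elapsed > 10:
        return "Out of window: submit a protocol deviation note."
    return "In window: prepare the dose."

print(dosing_window_message(date(2019, 1, 1), date(2019, 1, 9)))  # in window
```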

For this participant, Dosing Visit 2 is within window.
For this participant, Dosing Visit 2 has been scheduled for too early a date.
For this participant, Dosing Visit 2, were it to occur, would happen beyond the 10 day maximum from Dosing Visit 1.

Up next: check for recurring Adverse Events

 

What can cross-form do for you? (Part 1 of 3)

Your study database has just locked. To celebrate, you decide to treat your two in-house monitors to dinner in the city. You’d like to offer your colleagues a choice of three restaurants. Take a moment and imagine which three restaurants you’d choose. Got it? Now suppose you recall that one of your monitors follows a gluten-free diet. Does that change your selection? If all of your initial picks specialized in wheat pasta, it ought to.

What’s good for dinner plans is essential for study conduct, when data quality and safety are on the line. A height value that’s unremarkable for one participant ought to trigger a query for another: consider a teen and a six-year-old enrolled in the same pediatric trial. To evaluate the input in one field based on data in another, data managers rely on cross-field edit checks. If the fields are part of distinct forms, that evaluation is known as a cross-form edit check.

At OpenClinica, we extend this capability. We give data managers a tool to make their forms responsive to any element they choose from their database, from the participant’s most recently recorded blood pressure to the start date of the last dosing visit. Using easy-to-understand syntax in the form definition, data managers may reference any item in the database to trigger dynamic edit checks, make calculations, show/hide relevant information, and even change the content and logic of the active form. Your forms can now “know it all” to present a fully contextualized data capture experience.

We developed this feature to help our users:

  • capture more consistent, higher quality data,
  • drive protocol compliance,
  • highlight potential safety issues, and
  • mitigate the risk of unnecessary study procedures.

The capability goes far beyond cross-form edit checks. This is total study intelligence. Below is the first of three case studies we’ll present this month that illustrate the difference. If your study has requirements that resemble these, don’t hesitate to contact us for a more in-depth guided tour.

Case #1: Age- and Sex-Dependent Normal Lab Range

One person’s high blood pressure is another person’s normal. Why? The primary reasons are differences in age and sex. When it comes to lab results, “normal” may also be a function of the specific lab conducting the analysis. But not all studies rely on lab-specific ranges. For clarity’s sake, let’s imagine a study that will evaluate lab values against standards that are lab-independent. Known as “textbook ranges”, these ranges define the upper and lower bound of normal based solely on patient-specific factors. Below is a table indicating the lower and upper limits of normal for DHEA-S in adult males and females:

Chances are that a form associated with a screening event has already captured a participant’s sex and date of birth. A subsequent event may include a lab form. It’s inefficient to ask for the participant’s sex and age on this lab form. Sex has already been documented, after all, and age requires a calculation that compares the specimen collection date with the participant’s date of birth. Asking a CRC to make that calculation opens the door to error, especially if collection occurred close to the participant’s birthday. But age and sex information is required to determine whether an entered value is above or below the limits of normal.

Enter cross-form logic, which handily “pulls in” the required data from an external form. In the current case, an expression within the form compares the specimen collection date with the externally-supplied date of birth, to calculate participant age. That age, together with the externally-supplied sex and lower and upper limits indicated within the lab form, are all it takes to instantly evaluate the lab value against the appropriate range for the participant. Results of that evaluation may trigger or hide additional fields, get piped into question or response text for a separate item, or simply provide instructions, as shown in this video.
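The same calculation in plain Python, for illustration. The age logic is the real point; the DHEA-S limits below are placeholders, not clinical reference values:

```python
from datetime import date

def age_at(collection_date: date, date_of_birth: date) -> int:
    """Whole years elapsed at the specimen collection date."""
    years = collection_date.year - date_of_birth.year
    if (collection_date.month, collection_date.day) < (date_of_birth.month, date_of_birth.day):
        years -= 1  # birthday hasn't occurred yet this year
    return years

def within_normal_range(value: float, sex: str, limits: dict) -> bool:
    low, high = limits[sex]
    return low <= value <= high

limits = {"male": (80.0, 560.0), "female": (35.0, 430.0)}  # placeholder limits
age = age_at(date(2019, 1, 24), date(1980, 1, 25))  # 38: birthday is one day away
```

Note the birthday comparison: this is exactly the edge case (collection close to the participant’s birthday) that trips up manual calculation.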


 

Up next: comparing event dates to ensure that dosing occurs within window.

Get better dates (in your eCRFs)

Dating is hard, especially if you’re not aware of the relevant conventions and etiquette. The same could be said for collecting date information in your EDC. As technologists enamored with data above all, those of us here at OpenClinica probably aren’t your best source for romantic advice. But if you’re searching for the most efficient way to capture unambiguous and properly formatted date information in your eCRF, prepare to swoon. Here are four tips for working with dates.

Tip #1: Allow for full and partial dates.

Requiring a full date when only the month or year is known to a CRC or participant is a major hazard for analysis. If that full date field is required, it’s quite possible that the user will select a placeholder for day of the month–the 1st or 15th, say–when that piece of information is unavailable to her. The value in the database, then, implies a level of specificity that wasn’t intended.

To avoid this pitfall, ensure that the individual entering the date indicates which portions are known and which are unknown (“UNK”). Then, offer the corresponding field for input.

 

 

During analysis, your statisticians will need to convert entered partial dates into imputed ones, using clear and consistent rules. So it’s important that your EDC support aggregating full and partial dates into a single item.
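One simple way to aggregate full and partial dates into a single item is an ISO-like string with “UNK” placeholders. This is a sketch of the idea; your EDC’s native partial-date handling may differ:

```python
from typing import Optional

def format_partial_date(year: int, month: Optional[int] = None,
                        day: Optional[int] = None) -> str:
    """Render the known portions; a day without a month is treated as unknown."""
    mm = f"{month:02d}" if month else "UNK"
    dd = f"{day:02d}" if (day and month) else "UNK"
    return f"{year}-{mm}-{dd}"

print(format_partial_date(2018, 11, 1))  # 2018-11-01
print(format_partial_date(2018, 11))     # 2018-11-UNK
print(format_partial_date(2018))         # 2018-UNK-UNK
```

A single, consistently formatted item like this makes the statisticians’ imputation rules much easier to apply downstream.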

Tip #2: Offer a user-friendly UI that boosts accuracy.

Suppose that a participant knows that she took a dose of medication on the first Monday of the month. You don’t want her to leave the ePRO form in search of a calendar to retrieve that date. Similarly, for a CRC who prefers point-and-click wherever possible, you want a UI that works with her style, to encourage prompt entry. The same is true for a CRC who prefers to type.

Human factors like these impact both data quality and speed. That’s why they are a major design consideration for OpenClinica, and ought to be for you, as well. The datepicker shown here is the clear and flexible standard we rely on, supporting point-and-click, typed entry, and convenient scrolling through months, years, and even decades. Make sure your mechanism offers the same virtues.

Tip #3: Let your forms do the math.

Exclusion criteria for some trials require no history of a certain condition for a specified amount of time; for example, no history of melanoma for the last five years. In this case, performing the “date math” mentally may not pose a big challenge. But the situation becomes more complex when, rather than being disqualified, a potential participant is assigned to a particular cohort based on the date of some clinical event (or even multiple events). Error-free calculation then becomes as difficult as it is paramount. A capable form engine is your saving grace in these situations. Your EDC ought to support real-time calculations, and respond in a protocol-compliant way based on the results. Use your system’s calculation and logic functions to hide or require certain fields, enforce eligibility rules, make assignments, and otherwise ensure proper workflows.


For optimal viewing, expand the video to fill size using the arrows icon in the lower right-hand corner.

The example shown here employs skip logic and date math to assign a cohort based on the date that a CT scan confirmed the absence of melanoma. Three years or more of complete response triggers assignment to cohort A, less than three years to cohort B.
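The cohort rule above reduces to a single date comparison. A sketch (the 365.25-day year is an assumption; your statisticians may prefer exact calendar arithmetic):

```python
from datetime import date

def assign_cohort(ct_confirmation: date, screening: date) -> str:
    """Three or more years of complete response since the CT scan -> cohort A, else B."""
    years = (screening - ct_confirmation).days / 365.25
    return "A" if years >= 3 else "B"

print(assign_cohort(date(2015, 6, 1), date(2019, 1, 24)))  # A
print(assign_cohort(date(2017, 6, 1), date(2019, 1, 24)))  # B
```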

Tip #4: Follow the standards.

The final step when working with dates is to format them in accordance with a recognized standard. CDISC is arguably the most recognizable and robust. Following ISO 8601 standards, CDISC takes YYYY-MM-DD as its date format.

Portion of a CDISC ODM XML 1.3 extract from OpenClinica.
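In code, producing the ISO 8601 date is usually a one-liner; Python’s datetime type, for example, emits YYYY-MM-DD natively:

```python
from datetime import date

d = date(2018, 12, 3)
print(d.isoformat())           # 2018-12-03
print(d.strftime("%Y-%m-%d"))  # same result via explicit formatting
```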

Register for “Good Form: Designing Your eCRFs for Better Data, Faster”

Want to maximize data quality and speed? Focus on your forms. From layout to item order, skip logic to edit checks, there are no “minor factors” when it comes to getting clean data from the start.

Understand these factors by attending our webinar, “Good Form: Designing Your eCRFs for Better Data, Faster,” taking place Monday, December 3 at 11am EST (4pm UTC). You’ll learn to look at your eCRFs through the eyes of your CRCs, monitors, and participants. You’ll see proven best practices in eCRF design that minimize queries and time to entry, while maximizing the integration potential of your data through standardization.

If designing study forms is among your responsibilities, you can’t afford to miss this webinar. Topics include:

  • The optimal form layout for different users, settings, and data types
  • Edit check strategies (e.g. how strict is too strict)
  • When to use pick lists, multi-select, radio buttons, likert scales, and more
  • Ensuring usability across devices
  • CDISC and CDASH
  • Building for import, automation, and eSource
  • … and more

Configurable roles, participant limits, and more now part of OC4

Starting today, data managers building their studies in OC4 will find a whole new level of control at their fingertips.  Our newest release allows data managers to:

  • Enforce a Participant ID naming convention of their choosing (e.g. [site number]-[participant ordinal])
  • Restrict users from adding participants once an expected number of participants is reached
  • Configure roles and permissions
    • “Clone” and apply a unique name to a pre-defined user role
    • Grant users of that role, and only such users, access to particular forms
  • Leverage REST API services to:
    • add a participant
    • add participants in bulk
    • export a list of participant IDs
    • import data into forms

While role configuration is arguably the keynote of this release, all of these features enable managers to exert fine-grained control when study size, design, or duration require it.

Participant ID Naming Conventions

Consider a large, multi-center study expected to enroll hundreds or even thousands of participants. In these cases especially, data managers, trial managers, and monitors need a compact, informative lexicon to quickly gain insight into study progress, put issues into context, and refer questions to the best source of information. A standardized, meaningful participant ID is a key component of that lexicon.

Enforcing a standardized ID in an OC4 study is simple. Just head to your Settings tab, edit the Participant ID Settings, and select System-generated as your method of ID creation. Then apply or adapt the template provided.

Using simple syntax, data managers can define a convention that makes an ID truly informative. When site users add a participant, the convention is automatically applied.  Following the example above, GER002-005 easily translates as the 5th participant enrolled at Site 002 in Germany. If participants GER002-005 and GER007-020 both show a randomization date of today, we can quickly and easily glean some useful information: at least 25 participants have been randomized in at least 2 activated sites in Germany, with Site 007 having randomized 4 times as many patients as Site 002. While users of OpenClinica Insight already have up-to-date metrics like this at their disposal, other users will now have an easier time extracting them from spreadsheets: it’s easy to parse out relevant information from a string in Excel when that information is always in the same place.
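Outside of Excel, the same parsing takes only a few lines. A sketch assuming the [country][site]-[ordinal] convention from the example (e.g. GER002-005):

```python
def parse_participant_id(pid: str) -> dict:
    """Split a GER002-005-style ID into its informative parts."""
    site_code, ordinal = pid.split("-")
    return {
        "country": site_code[:3],
        "site": site_code[3:],
        "participant": int(ordinal),
    }

print(parse_participant_id("GER002-005"))
# {'country': 'GER', 'site': '002', 'participant': 5}
```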

Participant Limits

Regulatory bodies frequently mandate enrollment caps in high-risk studies. Exceeding that cap then becomes a major compliance hazard. The latest release of OC4 offers a fail-safe to protect against that risk, which can be significant in fast-recruiting, multi-center trials.

When the checkbox below ‘Expected Number of Participants’ is selected, the expected number becomes a hard limit. No user can add a participant, whether at the site or study level, once the number of (non-removed) participants reaches that limit.

Role and permission configuration

Some forms are meant only for properly trained users. Cognitive assessment ratings provide one such example. Administering the MMSE or ADAS-Cog requires more than just knowing what those acronyms stand for. (Curious?) Those scales are only meaningful in the hands of trained cognitive raters. As such, a form including the MMSE or MoCA that’s available to any user with data entry responsibilities jeopardizes data quality and risks protocol deviations. The latest release of OC4 enables data managers to restrict access to these forms.

The first step is to tag those forms you need to restrict. You can customize the color and name.

Once created, you can apply the permission tag to all relevant forms.

Finally, you’ll create a custom role with permissions to access the forms you’ve tagged.

Custom roles are based on standard roles at either the study level (Data Manager, Data Specialist, Data Entry Person, and Monitor) or site level (Investigator, Clinical Research Coordinator, or site-specific Monitor). Custom roles come with the same core permissions as their standard counterpart. However, by allowing access to tagged forms, data managers enable only users with the custom role to access equivalently tagged forms. (These users may also access untagged forms.)

In the example above, only users with the RATER role have access to the MMSE and ADAS-COG forms. Other users will not be permitted to: open the form in edit mode, review-only mode, or read-only mode; view queries on the form; SDV the form; extract the form clinical data in a dataset or participant casebook; or view common event tables for the form on the Participant Details page. However, the clinical data can be piped (e.g. through a cross-form note or calculation) into a form that is readable by users without the RATER tag. In this way, read and write permissions are separable.

REST API services

The capabilities of our REST API continue to expand in this release as well. It’s now easier than ever to…

  • add a participant
  • add participants in bulk
  • extract a list of participants
  • import data

… through web services. OC4 uses the open source framework Swagger to help our users build and test their APIs. You can find the link on your study’s Administration page.
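As a rough illustration of what a scripted call might look like, here is a request builder in Python. The endpoint path, payload field name, and auth scheme are placeholders, not OpenClinica’s actual contract; consult the Swagger page linked from your study’s Administration page for the real one:

```python
import json
import urllib.request

STUDY_URL = "https://example.openclinica.io"  # placeholder host

def build_add_participant_request(study_oid: str, participant_id: str, token: str):
    """Assemble (but do not send) a hypothetical add-participant POST."""
    payload = json.dumps({"participantId": participant_id}).encode()  # hypothetical field name
    return urllib.request.Request(
        f"{STUDY_URL}/api/studies/{study_oid}/participants",  # hypothetical path
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",  # auth scheme is an assumption
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_add_participant_request("S_DEMO", "GER002-005", "YOUR_TOKEN")
# urllib.request.urlopen(req) would submit it; not executed in this sketch.
```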

Last night’s release was the 5th overall since the creation of OC4 last year (not counting interim enhancements). It’s our most ambitious release to date, and in keeping with our mission to let data managers take total control with ease. As always, we welcome your input through the comments section below, and we look forward to continuing our close and successful collaboration with OC4 users. Not using OC4? Let us show you the power that awaits!