The OC17 Program is here, and it’s bursting at the seams

Click here to learn more and register!

The OpenClinica community has proven once again its passion for innovation and knowledge sharing. We received a record number of abstracts exemplifying the OC17 theme, Making the Complex Simple. Best of all, the diversity of expertise among our community yielded a set of sessions that covers a wide swath of data management challenges and solutions. To claim that OC17 “has it all” would be a cliché (and, given the breadth of our field, an obvious exaggeration). But with sessions on pseudonymization, patient registries, biobanking, medical imaging integration, and more, this year’s conference will deliver case studies and how-to’s that almost certainly bear on your research. So let me offer another, more defensible, cliché: OC17 has something, and more likely two or three things, for everyone.

Below you’ll find the program as of today, September 26th. (Order is subject to change.)

Monday, December 4, Track 1

Plenary session, “The Story of OC4” | Cal Collins, CEO – OpenClinica

OC4’s ultra-capable forms | Ben Baumann, COO – OpenClinica

Multi-system subject tracking, screening automation, and data exportation | Patrick Murphy – RTI

50,000 subjects, 15 countries, and 1 (multilingual) OpenClinica study | Gerben Rienk Visser – TrialDataSolutions

OC4 architecture | Krikor Krumlian, CTO – OpenClinica

Risk-adapted strategies using OC in an observational trial with a marketed product | Edgar Smeets, PhD, CCRA – Smeets Independent Consultant – SIC

A Plan for Getting Started with Risk-Based Monitoring | Artem Adrianov, PhD, CEO – Cyntegrity

Monday, December 4, Track 2

Plenary session, “The Story of OC4” | Cal Collins, CEO – OpenClinica

Essential reports for the CRC, data manager, monitor and sponsor | Bryan Farrow, eClinical Catalyst – OpenClinica

EUPID services for pseudonymization and privacy-preserving record linkage in OpenClinica | Markus Falgenhauer – Austrian Institute of Technology (AIT)

MIMS-OC – A medical image management system for OpenClinica | Martin Kropf – Charité Berlin

Working with OC Insight – OpenClinica’s Configurable Data Visualization Dashboard | Lindsay Stevens, CTDM Specialist – OpenClinica

The RadPlanBio approach to building biomedical research platforms around OpenClinica | Tomas Skripcak – German Cancer Consortium

How to Implement SDTM in Your Study | Mark Wheeldon, CEO – Formedix

Tuesday, December 5, Track 1

Keynote address | Dr. Andrew Bastawrous – Peek Vision

Late-breaking sponsor session | Announcement forthcoming

eConsent as a validated application of OpenClinica’s Participate forms | Brittney Stark, Project Manager – OpenClinica

WORKSHOP: Designing, Publishing, and Sharing Your Study in OC4 | Laura Keita, Director of Compliance and Training – OpenClinica

WORKSHOP: Inside OpenClinica’s APIs | Krikor Krumlian, CTO – OpenClinica

Tuesday, December 5, Track 2

Keynote address | Dr. Andrew Bastawrous – Peek Vision

Combining a nationwide prospective patient registry and multiple RCTs with OpenClinica | Nora Hallensleben and Remie Bolte – Dutch Pancreatitis Study Group

Open Conversation: A dialogue between OpenClinica and OpenSpecimen | Srikanth Adiga, CEO – OpenSpecimen; Aurélien Macé – Data Manager, FIND Diagnostics

WORKSHOP: A Tour of OpenClinica Modules and Integrations | Mike Sullivan, Senior Account Executive – OpenClinica

WORKSHOP: Making the Jump from OC3 to OC4 | Iryna Galchuk, Solutions Engineer – OpenClinica

Details are still in the works, but we will host a “cocktails, conversations, and demos” reception on Monday evening, and cruise Amsterdam’s canals on Tuesday evening.

We hope you can join us for a pair of memorable days, professionally and culturally. Learn more and register here!

 

The Change Business

You’re in the change business.

Is that what you thought of as you arrived at work today? It’s true. Working in clinical research, you bring positive change to the world, through the discovery, testing, and dissemination of therapies that improve people’s health.

No doubt your role (and mine) is a lot more specific and focused than that. It has to be, because clinical research is all about the details. To achieve the big changes, we need to implement, control, and communicate many other, smaller changes on a constant basis.

Sometimes, change is initiated from within. Sometimes, it’s imposed from outside. And at times, the whole context shifts. This last kind of change has dominated research for the past few years. Mobile technology, patient-centricity, healthcare upheavals, economic pressures, real-time monitoring, and genomic medicine are changing the context of how we approach research, and the nature of what we’re trying to accomplish.

Responding to this type of change requires clarity, purpose, and vision. A decade ago, a few of us started the OpenClinica project to inject a sorely needed dose of flexibility and accessibility into the clinical trials technology landscape. Now, we’re working to make it easy to adapt to the complexities of the new research ecosystem, while continuing to prioritize our original principles of flexibility and accessibility.

The biggest changes to OpenClinica in years are coming next month. As I’ll illustrate in my next post, closer to release, the new OpenClinica is designed to be both easy and powerful. It’s fast to adopt, simple to learn, and a joy to use, whether you’re a data manager, CRA, investigator, or study subject.

One thing that will never change is our commitment to the success of our customers and our community.

Get Your Queries Under Control (Part 1: Query Aging)

Getting to “no further questions” can seem like an endless task

We may all face death and taxes, as Ben Franklin quipped, but data managers are most certainly guaranteed yet a third inconvenience: queries. As long as human beings play a role in data capture, some fraction of this data will defy either explicit validation rules, common sense, or both. These entries, when left unresolved, not only create more work for monitors and coordinators, but they delay interim and final locks. And time added to a trial is cost added to a trial.

As dispiriting as this sounds, data managers employing EDC enjoy tremendous advantages in “query control” over their paper-using peers. Paper doesn’t support real-time edit checks, our most effective vaccine in the primary prevention of queries. Neither does paper allow for real-time reporting on the status of our queries. That reporting is crucial in any effort to close existing queries, reduce the number of queries triggered in the future, and shorten the amount of time queries remain open.

By itself, a long list of open queries won’t signal alarming query trends. For that, metrics are required. In a series of posts starting with this one, I’d like to offer guidance on those metrics.

Metric #1: Query Aging

You are a data manager in a study of adults with diabetes. On Monday, a coordinator at one of your sites submits a form indicating a patient weighed in at 15 lbs. Imagine that a query was opened that same day, whether automatically by the EDC system or manually through your own vigilance. The clock is now ticking: how long will it take for the coordinator to correct the data point?

In actual studies, there are dozens or hundreds or even thousands of such clocks ticking away. Some queries may be just hours old, while others could be months old (help!). Eventually, all of them will need to be closed in order for database lock to occur. The key to locking on time is to effectively manage, from the very start, both the overall number of open queries and the proportion of open queries that are old.

Why does age matter?

The further an open query recedes into the past, the more difficult it becomes to resolve, as new patients and new tasks push the issue further down on the coordinator’s to-do list. Clinicians involved in the original assessment may depart or find their memories of the relevant event fading. We let queries linger at our own risk.

Long, complex studies enrolling hundreds of patients can easily generate thousands of queries. While each open query will have its own origin date, defining age brackets can help you characterize the distribution of their ages. Age brackets are ranges set by a lower limit and upper limit, most commonly expressed in days; 0 to 7 days is one example of a bracket, 8 to 14 days is another. Age brackets must cover, without any overlaps, all possible ages of an open query. Therefore, the first bracket always begins with 0 days open, to accommodate the newest queries, while the last bracket will specify no upper limit at all.

Many EDC systems default to a specific set of brackets (usually 4 or 5) in grouping a study’s open queries. For example, an EDC might group all open queries generated less than 8 days ago into the first bracket, and all open queries generated between 8 and 14 days ago into the second bracket. But these dividing lines do not represent universal constants. You are free to set your own lower and upper limits on your brackets, and you should do so based on the event schedule of your study.

Who are you calling “old”?!

What makes study-specific bracket delimitation a best practice? When it comes to open queries, young and old aren’t just relative to one another, but to the amount of time between consecutive visits. If a query from a patient’s first visit isn’t resolved by her second visit, queries generated by that second visit will push the earlier one (psychologically, administratively) into more precarious territory. By way of analogy, you are much more likely to give prompt, accurate answers to questions about your commute from this morning than to questions about your commute yesterday.

You will never completely prevent query “backlogs” from arising in your studies. But you can put in place one preventative measure based on the idea above. First, identify the shortest span of time between any two consecutive visits in your study. (You can dispense with any outliers; e.g. visits that are just two days apart.) Call that shortest span n days. Now, set your query brackets as follows:

  • First (“youngest”) bracket: 0 to n-1 days old
  • Second bracket: n to 2n-2 days old
  • Third bracket: 2n-1 to 3n-3 days old
  • Fourth bracket: 3n-2 to 4n-4 days old
  • Fifth (“oldest”) bracket: 4n-3 days old or older

Suppose, for example, that the shortest interval between visits in your study is 14 days. You would then structure your brackets as follows:

  • First (“youngest”) bracket: 0 to 13 days old
  • Second bracket: 14 to 26 days old
  • Third bracket: 27 to 39 days old
  • Fourth bracket: 40 to 52 days old
  • Fifth (“oldest”) bracket: 53 days old or older
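If you prefer to compute these cut-offs rather than work them out by hand, the arithmetic above is easy to encode. Here is a minimal Python sketch (the function name and output format are illustrative, not part of any particular EDC):

```python
def query_age_brackets(n):
    """Build five age brackets (in days) from n, the shortest interval
    between consecutive visits. The last bracket has no upper limit."""
    return [
        (0, n - 1),             # youngest bracket
        (n, 2 * n - 2),
        (2 * n - 1, 3 * n - 3),
        (3 * n - 2, 4 * n - 4),
        (4 * n - 3, None),      # oldest bracket: no upper limit
    ]

# With a shortest visit interval of 14 days:
for low, high in query_age_brackets(14):
    label = f"{low} days old or older" if high is None else f"{low} to {high} days old"
    print(label)
# Prints 0-13, 14-26, 27-39, 40-52, and 53+, matching the example above.
```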

Now that you’ve set your brackets, announce them to your colleagues and your sites. Let them know that effective data management for your study means keeping the proportion of open queries in each of these brackets within the following targets:

  • First bracket: at least 35% of your open queries
  • Second bracket: no more than 30% of your open queries
  • Third bracket: no more than 20% of open queries
  • Fourth bracket: no more than 10% of open queries
  • Fifth bracket: no more than 5% of open queries

By holding yourself and sites accountable to keeping roughly two-thirds of your open queries in the first two brackets, you limit the number of queries falling into older brackets, where resolution becomes more difficult due to the passage of time.
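Checking your study’s actual distribution against these targets is a simple calculation, provided you can export the age, in days, of each open query from your EDC. The Python sketch below (the function name and example ages are illustrative) buckets query ages using the 14-day brackets from the example above and flags any bracket that misses its target:

```python
from bisect import bisect_right

def bracket_shares(ages, lower_bounds=(0, 14, 27, 40, 53)):
    """Group open-query ages (in days) by bracket lower bound and
    return the share of open queries falling in each bracket."""
    counts = [0] * len(lower_bounds)
    for age in ages:
        counts[bisect_right(lower_bounds, age) - 1] += 1
    total = len(ages) or 1  # avoid dividing by zero when there are no open queries
    return [count / total for count in counts]

# Targets from the policy above: at least 35% in bracket 1, caps on the rest.
targets = [("min", 0.35), ("max", 0.30), ("max", 0.20), ("max", 0.10), ("max", 0.05)]
open_query_ages = [2, 5, 9, 16, 21, 33, 41, 60]  # illustrative ages, in days

for i, (share, (kind, limit)) in enumerate(zip(bracket_shares(open_query_ages), targets), 1):
    met = share >= limit if kind == "min" else share <= limit
    print(f"Bracket {i}: {share:.0%} (target: {kind} {limit:.0%}) {'OK' if met else 'ATTENTION'}")
```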

Enforcing the policy

There’s a word to describe a goal that doesn’t have a reward tied to its achievement, or a sanction tied to its failure: a hope. Below are some methods for igniting a sense of urgency at sites with open queries.

  • Make a pact. A coordinator will not feel obliged to answer within 3 days a query that it took your team 3 weeks to raise. Your data management team should commit to opening queries within some set period of time following initial data entry. Your discipline will inspire the site’s discipline.
  • Do some good. While you should always check with compliance officers first, consider making a donation to a patient advocacy group based on a study-wide reduction in open queries. For example, you may announce at the start of the month that each percentage point reduction in open queries within brackets 3, 4, and 5 by the end of the month will translate to 10 additional philanthropic dollars, made by the sponsor organization in the name of all participating sites.
  • Recognize the “always prompt” and “much improved” sites. On the last day of each month, email to all your coordinators a query status report. Indicate how many queries are open within each bracket. Mention by name and congratulate:
    • sites with the fewest queries per data field completed (limited to sites that have supplied data for some minimum number of fields),
    • sites with the shortest average time to close for all queries opened in the prior month (limited to sites that have had some minimum number of queries opened), and
    • sites that have closed the highest percentage of their open queries since the beginning of the month.

In this way, you are recognizing the continuously conscientious sites along with those striving to improve.

Staying on Track

Effective data management requires discipline from you, your monitors, and your sites, but the effort is well worth it. Interim and final locks that occur on time save all trial stakeholders from a lot of anxiety and frustration. A single open query may not seem like an impediment to clean study data overall, and indeed some queries are inevitable and understandable. But every day a query remains open contributes to a delay that’s felt most acutely by the patients you and your sites are working so hard to help.

Register for OC17 today and secure early bird pricing!

OC17 register today

We are thrilled to announce the venue and keynote speaker for OC17, our 9th Annual Global Conference, and to open registration for the event.

OC17 Theme and Dates
“Making the Complex Simple”
December 4 – 5 | Sessions and Workshops
December 6 – 8 | Super User Training

Venue
Mövenpick Hotel Amsterdam
Piet Heinkade 11
1019 BR  Amsterdam
Netherlands

Keynote Speaker
Dr. Andrew Bastawrous
Co-Founder and CEO
Peek Vision

Further details, pricing, and a registration form are just a click away. Reserve your spot now to lock in early bird pricing. This year’s theme is “Making the Complex Simple,” and we are proud to offer a venue and speaker exemplifying that theme. The Mövenpick is a short taxi ride from Amsterdam’s international airport, and we are confident your stay there will delight with its simple elegance. We are especially honored to welcome Dr. Bastawrous as our keynote speaker. His story of the impact that innovation can have must be seen to be believed.

 

 

We hope you can join us in December as we turn the spotlight on our incredible user community once again. Do not hesitate to email bfarrow@openclinica.com with any questions regarding OC17.

Sincerely,
Ben Baumann, COO

14 Best Practices for ePRO Form Design

Is there any term in data collection more dispiriting than “abandonment rate”? That’s the percentage of respondents who start inputting data into a form but don’t complete the task. Sometimes, it’s hard not to take that kind of rejection personally.

Sadly, it’s just one problem among dozens that hinder the collection of timely, clean, and complete data directly from study subjects. Data managers face challenges from the start. A participant’s compliance with study procedures (and data entry) is entirely voluntary, and apart from the occasional stipend, these participants rarely receive compensation in dollars. That’s not to say that the care provided in the course of a study isn’t of value to patients. But that “quid pro quo” mindset isn’t easy to maintain outside the clinical site. Paper diaries and even electronic forms ask a participant for their time and attention, commodities that are in short supply these days. As an industry, we can’t stop looking for ways to minimize the burden on participants. Not if we want to maximize their contribution.

In previous posts, we explored BYOD as a preferred approach to collecting ePRO data. But that’s only half the story. What good are friendly, motivational messages to a participant’s mobile device or computer if the form to which they’re directed is needlessly long or convoluted? That’s a recipe for heartbreak (form abandonment), or at least a host of trust issues (incomplete or poor quality data).

So, what are the keys to getting your participants to enter all their PRO data, accurately and on time? Not surprisingly, they’re not too different from those we rely on in any budding relationship. Below, I bundle them into four categories.

Make a good first impression

Imagine yourself as a job interviewee. The hiring manager asks you to rattle off the top three responsibilities in every role you’ve ever filled. How coherent will your answer be? A practiced interviewer, interested in getting to know you, would start differently. “Tell me, what’s your top priority in your current role?” She’ll need to gather a lot more information before the end of the interview, but she’s set a comfortable pace for getting it.

The lesson for ePRO form design is clear.

1. Start with a single question (or a very small number of questions) that can be answered simply. Present these one to three questions, and no more, on the first page.

This best practice is one example of a broader rule: always proceed from the simple to the complex. Don’t ask the participant to choose one of sixteen options. Rather, ask two yes-or-no questions that will reduce the options first to eight and then to four, and have the user pick from that list. Yes, the latter scenario involves more clicks, but it involves less cognitive labor and less scrolling.

Dress to impress

The “e” in “ePRO” isn’t usually capitalized, but maybe it ought to be. Leave the paper mindset behind. Spatial and color constraints no longer apply, and you can present information in a way that prevents the participant from looking ahead (and becoming discouraged). In short, you’re free to give respondents a form-reading and form-filling experience they actually enjoy. Here are some pointers on visual cues that work with the eyes and minds of your participants:

2. Black or dark grey text on a white background is always the safest default for instructions and form labels. For section headings and buttons, a vibrant color that contrasts sufficiently with the background can help orient the respondent.

3. If you have a choice of fonts, sans serif is preferable. Why? While studies do not point to a clear winner for readability in all contexts, evidence suggests that lengthy text passages asking the reader to parse several ideas at once are best served by serif fonts, while short prompts and labels are best conveyed with a sans serif font. (And you already know that short and direct is best, right?) The wide variety of serifs in existence makes character recognition more difficult for non-native readers, while sans serif fonts are more likely to remain readable when scaled down.

4. Place the field label above, and left justified with, any field requiring the respondent to type. This “stacked” format suits the portrait orientation with which most smartphone users hold their device. Placing the field label inside the field for the user to type over may save space, but it also causes your label to disappear once the user interacts with the field.

5. Avoid grids. There are contexts where a grid layout is preferable; for example, when recording multiple instances of body position, systolic pressure and diastolic pressure for a patient undergoing blood pressure tests. But these are almost always in-clinic scenarios. For the collection of ePRO data, stay within a single column layout.

6. Paginate. One screen containing six questions, or two screens containing three each? All things being equal, fewer questions across more pages is preferable. Why? Clicking “next” is easier than scrolling. Also, breaking a large set of questions into two smaller sets reduces the intimidation factor.

7. Use images as selection options when text could lead to misinterpretation. Not all form engines support this feature, but the ability to present an image in place of (or with) a checkbox or radio button is more than a “nice to have.” Which of the following is more likely to yield quality, computable data?

 

[Example form images: one version relying on typed, free-text responses, and another using a clickable head illustration with discrete options]

I’ve relied on a few best practices in this example, starting with some very simple questions in order to narrow down the remaining ones and present them in a manner that gives us clean, accurate, and computable data. But the images in particular are what rescue the participant from needless typing and potential confusion. By making the regions within the head illustration clickable, I can capture very discrete data without taxing the participant’s time.

Respect their time

“Press C to confirm or X to cancel.” That’s a familiar formula to anyone who’s received a text message before an appointment. These are easy to appreciate. If I feel I owe any response to the sender, I’m more likely to complete this pseudo-form than any other.

Chances are, however, you may need a little more data than this from your participants. Here’s how you can gather it while respecting your participant’s time.

8. Minimize the number of fields. This advice may seem obvious and simple, but it’s neither. So long as a participant’s date of birth has been documented once, having them input age is never necessary. And if your system is capturing metadata precisely (e.g. a date and time stamp for every change to a field’s content), then you don’t need to ask the participant to record this information. In general, before adding a field, it is helpful to ask:

  • Do I really need this information? If yes, then…
  • Can I reliably deduce or calculate it based on prior data? If no, then…
  • Do I need the participant to supply it? If (and only if) yes, then include the field.

9. Use skip logic. The phrase “if applicable” should never appear in a form, especially forms designed for participant use. If you are asking the question, it had better be applicable. You can ensure that it is by using skip logic. Has the participant eaten that day? Only inquire about nutritional content for that day’s meals if she responds with “yes”.

10. Use branching logic. Branching logic can help a participant isolate the response she wishes to provide by a process of elimination. Suppose you need a participant to input a cancer diagnosis she has received. Given the wide variation in health literacy, we can’t assume she recalls the formal diagnosis. It may be more helpful to solicit this information through a series of easier questions. Did the cancer involve a solid mass? Depending on the participant’s response, the following question might pertain to an organ system (if “yes” is selected) or blood cells (if “no” is selected). Just five yes-or-no questions can narrow a field of 32 options down to one.

Doesn’t the use of branching logic conflict with the strategy of minimizing the number of fields? These are guidelines, not rules, so trade-offs are sometimes necessary. A drop-down menu showing 32 options may represent just one field, but scrolling through that many selections (not all of which will be visible at once) places an enormous hurdle in front of the participant. The mental effort and time spent scrolling on that one field far outweigh any time savings that might be secured by eliminating three fields. Meanwhile, you’ll have frustrated the participant.
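To make the idea concrete, here is a small Python sketch of how a handful of yes/no answers can whittle down a long candidate list. The diagnosis names and yes/no attributes are purely illustrative; in practice the branches would be defined declaratively in your form logic rather than in code:

```python
# Illustrative only: each yes/no attribute roughly halves the remaining candidates.
diagnoses = [
    {"name": "Hodgkin Lymphoma", "solid_mass": True, "lymph_node_involvement": True},
    {"name": "Central Nervous System Lymphoma", "solid_mass": True, "lymph_node_involvement": False},
    {"name": "Acute Lymphoblastic Leukemia (ALL)", "solid_mass": False, "lymph_node_involvement": False},
    # ...in a full list, five yes/no attributes could distinguish 32 entries
]

def narrow(candidates, attribute, answer):
    """Keep only the candidates consistent with a yes/no answer."""
    return [c for c in candidates if c[attribute] == answer]

remaining = narrow(diagnoses, "solid_mass", True)              # participant answers "yes"
remaining = narrow(remaining, "lymph_node_involvement", True)  # participant answers "yes"
print([c["name"] for c in remaining])                          # ['Hodgkin Lymphoma']
```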

11. Use autocomplete. There’s another way of solving the cancer-history problem above. As long as the participant can recall any portion of the diagnosis when spelled out, autocomplete can help them retrieve the full term. The best instances of autocomplete reveal all matches as the participant types, so that upon typing “Lym” a choice is immediately presented among:

Acute Lymphoblastic Leukemia (ALL)
Central Nervous System Lymphoma
Hodgkin Lymphoma
etc.

The ubiquity of predictive search among search engines like Google makes autocomplete a familiar experience for your participants.
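The matching behavior itself is simple to express. Below is a minimal Python sketch; the term list is abbreviated, and case-insensitive substring matching is one reasonable rule among several:

```python
TERMS = [
    "Acute Lymphoblastic Leukemia (ALL)",
    "Central Nervous System Lymphoma",
    "Hodgkin Lymphoma",
    "Non-Hodgkin Lymphoma",
]

def autocomplete(typed, terms=TERMS, limit=10):
    """Return every term containing the typed text, case-insensitively."""
    needle = typed.strip().lower()
    if not needle:
        return []
    return [term for term in terms if needle in term.lower()][:limit]

print(autocomplete("Lym"))
# All four terms above contain "lym", so all four are offered immediately.
```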

In an era where summoning a taxi or making a credit card payment is a 20-second task, participants will not (and should not) tolerate inefficiency on the web. You are competing with their other online experiences, not just traditional paper forms. The good news is that you can delight your participants by showing them that even contributing to medical research can be as easy as navigating their favorite web pages.

Show appreciation

You’ve read 85% of this post! Knowing that doesn’t guarantee you’ll read to the end, but it does make it more likely. Regular, responsive feedback is a powerful spur to action. Here are three ways to interact positively with your participant throughout (and even after) the form-filling process.

12. Convey progress as the participant completes the form. Reflecting back to the participant the portion of the form they have completed and the portion they have remaining serves two functions. The first is informative. You’ve anticipated the participant’s question (“how much longer?”) and answered it proactively. The second is motivational. Completing even simple tasks triggers the release of dopamine in our brains. We get a neurochemical rush every time we drop a letter in the mailbox or hit send on an email.

You can reward your participant throughout the form-filling process by incorporating a dynamic progress bar into your ePRO form. Every page advance is an opportunity to “dose” your participant, visually, with a bit of happiness.

13. Autosave. Batteries die. Smartphones drop to the floor. Thumbs twitch and close screens. None of these scenarios is justification for losing a participant’s work. Your system should capture input on a field-by-field basis; that is, a back-end process should save the participant’s input into a field the moment he or she leaves that field for another. If a participant abandons a form and then returns to it, he or she should be able to resume where they left off. If you can signal back to the participant that their input has been saved with each field transition, all the better, as this leverages the same psychological power as the progress bar.  

14. Show gratitude. Imagine a campaign staffer asking you a series of questions about your views over the phone. You answer the last question and he or she hangs up without so much as a goodbye. Chances are, they’ve lost your vote on account of rudeness alone.

Don’t let this happen to your participants. When they submit a completed form online, they should immediately receive a “thank you” message that is specific to the task they have just completed.

Ensuring the optimal experience for participants supplying ePRO data is more than courtesy: it’s a critical measure for maximizing data quality, quantity and timeliness. So commit to dazzling the people who matter most in your research. Because, as in all relationships, you get back what you give. Click here to learn more about participant friendly ePRO from OpenClinica.

 

Resources

https://www.formassembly.com/blog/web-form-design/
https://www.ventureharbour.com/form-design-best-practices/
https://www.nngroup.com/articles/web-form-design/
https://medium.theuxblog.com/10-best-practices-for-designing-user-friendly-forms-fa0ba7c3e01f

Save the Date! OC17, December 4th and 5th, in Amsterdam

OC17, OpenClinica’s 9th Annual Global Conference, will take place in Amsterdam, on December 4th and 5th this year. This year’s theme? “Making the Complex Simple”

We will offer in-person Super User training from December 6th through the 8th.

Exact venue, times, pricing, and official call for presentations all coming soon. In the meantime, please let us know if you’re interested in attending!

Automate Your Collection of Lab Reference Ranges

Data managers invest a lot of time and attention documenting lab processes, and for good reasons. Regulatory compliance demands it. Also, ensuring the validity and clinical significance of lab results is critical to assessing safety and efficacy. But while necessary, this process is often inefficient and error-prone.

In an ideal clinical study, every lab sample would, within minutes of collection, find its way to a central lab whose equipment was forever up-to-date, whose validations were always fresh, and whose inner workings were transparent to the data manager. But clinical trials aren’t conducted in an ideal world. More often than not, data managers and local lab managers share an ongoing responsibility to document equipment features and report on results collected on a variety of instruments, all calibrated differently. The challenges associated with this process are familiar. Equipment changes. Validations expire. And one lab’s “normal” may be another lab’s “low.”

For many data managers, the task of keeping labs up to date is akin to keeping dozens of centrifuges spinning at the same rate, all at the same time. Collecting lab reference ranges from one lab for one analyte may be straightforward, but when the process is multiplied across dozens of analytes and sometimes hundreds of sites, your study can be exposed to significant delays and human error. Success in this task, like most, hinges on clear expectations and guidance. Here is where good data managers shine. By providing sites with explicit instructions, a deadline, and tools to boost completeness and accuracy, data managers can make the collection of reference ranges a lot less painful and time-consuming.

Anatomy of a Lab Reference Range

Ranges are always defined by either:

  • a standard applied to all labs contributing data to a study (“textbook ranges”), or
  • the individual lab

Often, the difference between the two is minimal, so adopting the textbook range can save time and administrative burden. For measures that are critical to analysis, though, using a textbook range may not be suitable. In that case, each local lab manager (or the site coordinator representing that lab) must communicate to the study’s data manager their “in house” range for all analytes measured in the study. In both cases, a range is not complete unless it specifies

  • the name of the analyte
  • the unit of measure
  • the lower limit of normal, by gender and age
  • the upper limit of normal, by gender and age

Even for one analyte, the normal range for a 25-year-old female may differ from that of a 50-year-old female, or a 25-year-old male. Consequently, specifying a range for an analyte often means specifying a number of “sub-ranges” that, taken together, define a normal range for every possible patient. Hemoglobin, for example, has different normal limits for adult women than for adult men.

In the course of providing comprehensive ranges for dozens of analytes, it’s easy for a lab representative to inadvertently overlook (or duplicate) an age or gender. A well-designed, dynamic form for capturing these requirements can help ensure exactly one range is provided for any given individual.
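The completeness check such a form performs can be expressed in a few lines. Here is a minimal Python sketch that verifies one analyte’s sub-ranges cover every age for each sex exactly once; the field names and example sub-ranges are illustrative, not a prescribed format:

```python
def coverage_problems(subranges, max_age=120):
    """Report gaps or overlaps in one analyte's age sub-ranges, per sex.
    Each sub-range is a dict with 'sex', 'age_min', and 'age_max' (inclusive)."""
    problems = []
    for sex in ("F", "M"):
        spans = sorted((r["age_min"], r["age_max"]) for r in subranges if r["sex"] == sex)
        next_age = 0
        for age_min, age_max in spans:
            if age_min > next_age:
                problems.append(f"{sex}: gap from age {next_age} to {age_min - 1}")
            elif age_min < next_age:
                problems.append(f"{sex}: overlap starting at age {age_min}")
            next_age = max(next_age, age_max + 1)
        if next_age <= max_age:
            problems.append(f"{sex}: no range covers ages {next_age} and up")
    return problems

# Illustrative sub-ranges for one analyte (units and normal limits omitted for brevity):
hemoglobin_ranges = [
    {"sex": "F", "age_min": 0, "age_max": 17},
    {"sex": "F", "age_min": 18, "age_max": 120},
    {"sex": "M", "age_min": 0, "age_max": 17},
    {"sex": "M", "age_min": 21, "age_max": 120},  # ages 18-20 left uncovered
]
print(coverage_problems(hemoglobin_ranges))  # ['M: gap from age 18 to 20']
```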

Anatomy of a Lab Reference Range Collection Form

Just as a value without a unit of measure is meaningless, so too is a local lab range that is not tied to a particular lab. Along with their ranges for each study analyte, labs should also provide a set of identifying information. The data manager, as part of her request to provide the ranges and lab information, should also specify the study for which the ranges are being collected. A complete lab reference range collection form includes all of these components.

Specified by the data manager

  • the name of the sponsor and study (avoid abbreviations or paraphrases)
  • which analytes are included in the study, and therefore require ranges from the lab
  • where the lab representative must send the completed file
  • a deadline for completing the file

Entered by the lab representative

  • the name, address, and applicable ID numbers (e.g. CLF, or core laboratory facility, number) of their lab
  • the name of the Principal Investigator for the site and study it serves
  • the effective date of the ranges to be provided
  • the LLN and ULN for each analyte, by gender and age

Tools You Can Use

For users of OpenClinica, we’ve designed a form template that can be used as a reference range collection form, which includes the components listed above. Try it here! Would you like to use a customized version of this form in your study? Contact your client support lead. For those not using OpenClinica, we’ve built an Excel workbook. Download it for free here.


Staying Current

Regardless of how labs communicate their reference ranges, it’s essential that the communication is ongoing. Changes in equipment or clinical guidelines often occasion changes to upper and lower limits of normal. That’s why an effective date must be documented for all ranges. Good data managers encourage sites to communicate any such changes promptly. Great data managers give them the reminders, and tools, to do so.

We welcome your input on the workbook above, just as we do on our data management metrics calculator. Please let us know what you find most valuable.

A Preview of our DIA Plans

Talks from drug development luminaries. Exhibits that combine “luxury apartment” with “miniature theme park.” And a city that offers some of the world’s best modern architecture.

Those descriptions don’t do justice to what DIA 2017 has in store, but they do fit the experience. As any veteran attendee can attest, there’s an outsized splendor to the conference. But it isn’t splendor for splendor’s sake. Half of it is a celebration of the advances the industry has made in bringing life-changing therapies to market. The other half is a rallying cry to bring even more.

 

OpenClinica will be there, to join the celebration and the rallying cry. Attendees can find us at booth 1748, near the central break lounge. And we plan on using our patch of exhibit space to the fullest.

Our goal is simple: we want to thrill everyone there. We’ll do that by giving visitors to our booth an up-close look at the new OpenClinica, featuring a collaborative study designer, forms with beauty and brains, rich, visual reporting, and more. We’ve come to call this “re-engineering the e-clinical experience,” because it’s our hope that software in this industry can shed its reputation as a “utility” and gain one as the way researchers want to work (and the way participants want to contribute).

We think this new experience is as thrilling as, say, journeying through the human brain, or conducting zoological research in an alternate universe. Almost as thrilling, anyway. So we’re bringing along an Oculus Rift to help make our point. Visitors to our booth will get a chance to wear the goggles, grab the controllers, and immerse themselves in some fantastic worlds. Best of all, we’ll raffle the system off to one lucky winner.

But the most valuable offering comes from the attendees. We get to hear directly from them on what’s working, what’s not, and what needs changing in the world of eclinical. We’re confident we’ve addressed many of those needs with our upcoming release, but as a company devoted to continuous innovation, we’re never finished learning and iterating on our successes.

Will we see you there? If so, be sure to schedule a visit to our booth, and brace yourself for the e-clinical experience you’ve always imagined.