What is GDPR (And Why Should I Care)?

Some laws govern the use of crosswalks; others, how to clean up after our pet in public spaces.

Then there’s the EU General Data Protection Regulation (EU GDPR).

Now, safety considerations, not to mention lawful conduct, urge us to cross the street along designated paths. (Just as common courtesy persuades most of us to scoop up our terrier’s natural fertilizer from the sidewalk.) But we all let distraction or the rush of the workday get the better of us from time to time. The costs of ignoring the law are small in this case. Rules against crossing a street outside of a crosswalk are rarely enforced here in the United States. If law enforcement does catch us in the act, punishment ranges from a warning to a small fine.

A repeat violation of the EU GDPR, on the other hand, could cost the guilty party up to €20,000,000.

So how do you stay in between the lines when it comes to this regulation? Knowledge is the best defense, so without any further ado, here are the basic facts you need to know.

What does the GDPR aim to accomplish?

To quote the homepage of eugdpr.org: “The EU General Data Protection Regulation (GDPR) … was designed to harmonize data privacy laws across Europe, to protect and empower all EU citizens’ data privacy, and to reshape the way organizations across the region approach data privacy.”

This is vague, but the second clause does highlight the regulation’s main purpose: namely, “to protect and empower all EU citizens’ data privacy.” Harmonization of data privacy laws may be a boon to data-gathering entities operating in multiple EU countries. But given the amount of text proclaiming the rights and freedoms of data subjects (or stating the duties of data controllers and processors in upholding those rights and freedoms), the motivation of the GDPR is clear. This regulation is about individuals’ rights over information about themselves: when it may be obtained, how it must be protected, and what may or may not be done with it.

Chapter 1, Article 1 of the official regulation (“Subject-matter and objectives”) makes this clear in more legal-sounding language:

1. This Regulation lays down rules relating to the protection of natural persons with regard to the processing of personal data and rules relating to the free movement of personal data.

2. This Regulation protects fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data.

3. The free movement of personal data within the Union shall be neither restricted nor prohibited for reasons connected with the protection of natural persons with regard to the processing of personal data.

What is “personal data”?

Article 4 sets out 26 definitions, and it’s no coincidence that “personal data” is the first:

For the purposes of this Regulation:

1. ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;

Worth noting is the reference to “an online identifier”. The regulation considers IP addresses and cookie strings personal data.

But a legal-sounding definition doesn’t capture the sanctity with which personal data is regarded in the EU. With the exceptions of sensitive health and financial information, data about a person in the U.S. is subject to the principle of “finders keepers” (de facto if not de jure). Corporations routinely lay claim to personal data through an obscure “terms of use” page on their website, or the failure of a customer to explicitly deny the corporation the right to collect his or her data. In Europe, personal data is an aspect of personal dignity. The GDPR is, among other things, an insistence on this cultural fact in light of an increasingly global and data-driven economy.

Who is obligated to follow it?

The GDPR casts a wide net. All persons or organizations that are reasonably construed as either a “data controller” or “data processor” (regardless of whether data control or processing is that entity’s primary function) are subject to the regulation if any one of three conditions applies.

Who or what constitutes a “data controller”?

The “data controller” is the “natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data.” Typically, this is the entity at the top of the “data solicitation chain”; in the area of clinical research, the sponsor, academic institution, or CRO/ARO.

Who or what constitutes a “data processor”?

The data processor is “a natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller.” Those who play any role in the administration of a database, including software vendors in those cases where the database is digital, are data processors.

What are the conditions under which a data controller or data processor is bound by the GDPR?

If any one of the following conditions obtains for a data controller or data processor, that entity is bound by the GDPR:

  1. The data controller is based in the European Union (regardless of where the data subject is based)
  2. The data processor operating on behalf of the data controller is based in the European Union (regardless of where the data subject is based)
  3. The data subject is based in the European Union

Practically, the safest default assumption is that your research operations are bound by the GDPR. If any participant in your study resides in the EU, or any link in the chain of data custody passes through the EU, or your organization is based in the EU, the GDPR’s applicability is clear.
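The three conditions reduce to an “any one suffices” test. Here is a minimal sketch of that decision logic in Python; it is an illustration of the rule stated above, not legal advice.

```python
# A sketch of the 'any one condition suffices' applicability test.
# Illustration only, not legal advice.
def gdpr_applies(controller_in_eu, processor_in_eu, subject_in_eu):
    """Return True if any one of the three conditions holds."""
    return controller_in_eu or processor_in_eu or subject_in_eu

# A US-based controller and processor with even one EU-resident
# study participant is still in scope:
print(gdpr_applies(False, False, True))  # True
```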

What must those persons or entities do?

GDPR mandates are best thought of as duties that data controllers and processors have in upholding the rights of data subjects. Articles 12 through 23 enumerate the rights of the data subject. No summary is adequate to convey all of the particular rights, and for that reason it is incumbent on all data controllers and processors to read, understand, and abide by practices which uphold these rights. But for the purposes of this primer, we can think of these rights as granting persons the following powers and assurances.

Powers

  • To provide only those elements of personal data they agree to provide, having been fully informed of the purposes and risks of the data collection
  • To provide such data only to those entities he or she wishes
  • To rectify or request the erasure of their personal data
  • To access the personal data collected from them in a readable and standardized format (note that this does not necessarily mean in a native spoken language)
  • To contest significant decisions affecting them (e.g., those of employment or legal action) that are computed solely by an algorithm operating on their personal data
  • To seek legal redress for the failure of a data controller or data processor to respect these powers or to maintain the following assurances

Assurances

  • The data subject shall not be identifiable from the personal data, through use of secure pseudonymization protocols (e.g., assigning an alphanumeric identifier to a data subject and/or an element of their personal data, from which publicly identifying information such as the subject’s name, NHS number, address, or birthday cannot be deduced)
  • The data subject will be immediately informed of any breach of their data privacy
  • The data subject’s personal data shall be consulted and processed only for those purposes disclosed to the data subject as part of obtaining his or her informed consent
  • Data controllers shall request from the data subject only those elements of personal data that are essential to the purposes made explicit during the process of informed consent (i.e., data minimization)
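To make the pseudonymization idea concrete, here is one common approach, sketched in Python: a keyed hash (HMAC) that yields a stable identifier from which the source data cannot be recovered without a secret key held separately from the study data. The key, label format, and input fields are illustrative; the GDPR does not mandate any particular mechanism.

```python
import hashlib
import hmac

# Hypothetical sketch: derive a stable pseudonym from a subject's
# identifying fields using a keyed hash (HMAC-SHA256). The secret key
# must live in a system separate from the study data; without it, the
# original identifiers cannot be recovered from the pseudonym alone.
SECRET_KEY = b"store-me-in-a-separate-secure-system"  # placeholder

def pseudonymize(subject_identifier):
    digest = hmac.new(SECRET_KEY, subject_identifier.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return "SUBJ-" + digest[:12].upper()

# The same input always maps to the same pseudonym, so records can be
# linked across forms without exposing the underlying identity.
print(pseudonymize("Jane Doe|1970-01-01"))
```

Note that under the GDPR pseudonymized data is still personal data; the key holder can re-identify subjects, so the key itself demands the same protection as the identifiers.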

What duties do these powers and assurances incur for data controllers and processors? The concept of “data protection by design and default” is a useful, if general, place to start. Before data collection begins, data controllers and processors must establish and document systems and practices that:

  • make it clear to the data subject which elements of their personal data the controller or processor is requesting, and for what purposes
  • make it clear to the data subject which persons or entities will have access to their data
  • maintain the privacy of personal data, e.g., through pseudonymization, data encryption, physical security, etc.
  • prevent the collection of data that is immaterial to the purpose of data gathering

Which sorts of systems and practices qualify as achieving those aims? The answer the regulation gives is, unfortunately, something of a placeholder. Article 42 offers data controllers and processors the opportunity to “certify” their data protection mechanisms, but the certification bodies and requirements for certification are all unspecified. (Paragraph 3 even states that “the certification shall be voluntary.”)

For better or worse, data controllers and processors seem to bear the burden of selecting – and justifying to unspecified “certification bodies” – the technical criteria by which the GDPR will assess their data protection measures.

This is perhaps both a problem and an opportunity. Better that minimum encryption standards, for instance, go unspecified (for now) than be made to conform to some arbitrary decree. As data controllers and processors, we can take an active role in establishing these and other criteria in a way that serves both data protection and efficient data flow.

When does the regulation go into effect?

The regulation becomes effective on Friday, May 25th, 2018. This is a universal implementation date: national governments do not have authority to legislate a later (or earlier) effective date.

Who is in charge of enforcing it?

The European Parliament, the Council of the European Union, and the European Commission enacted the GDPR, but day-to-day enforcement falls to the independent supervisory authority (data protection authority) of each EU member state, with the European Data Protection Board coordinating their work to ensure consistent application across the Union.

What are the penalties for non-compliance?

If you’re looking for concreteness among all the abstraction of the GDPR, look no further than Article 83, “General conditions for imposing administrative fines.” All nine paragraphs set forth factors in arriving at a sanction for negligence or willful violation. But paragraph 6 will continue to attract the most attention: “(6) Non-compliance […] shall […] be subject to administrative fines up to 20 000 000 EUR, or in the case of an undertaking, up to 4 % of the total worldwide annual turnover of the preceding financial year, whichever is higher.”

Is that all there is to GDPR?  

Unfortunately no. If regulatory compliance were as easy as nodding along to a blog post, we’d never hear of violations. Then again, we’d hear about far more, and more severe, privacy breaches. Remaining compliant with all of the regulations that bear on clinical research may be a logistical burden, but it’s the right thing to do. You wouldn’t knowingly expose a patient (or their identity) to harm, but that’s what non-compliance amounts to: thousands of seemingly minor risks that make at least one catastrophe almost inevitable. So continue to educate yourself. We’ll help in that effort with a series of blog posts that begins with this one. And if the moral imperative of compliance doesn’t motivate you, consider the impact that non-compliance could have on your business or organization. You really don’t want to step in this natural fertilizer.

OC17 is just 32 days away! Here’s the detailed program.

If you’ve been waiting for more detailed session information to register, wait no more. You’ll find all the session and workshop abstracts, with speaker bios, on our OC17 resource page.

See the OC17 session abstracts

OC17 is also your chance to see the new OpenClinica up close. Take a “deep dive” into the all-new study build system and form engine.


See more of the new OpenClinica

 

Do not hesitate to email bfarrow@openclinica.com with any questions regarding OC17.

The new OpenClinica is here! (Part 3 of 3)

The story of OpenClinica is a story of customer-driven innovation. No other eClinical platform has a community as passionate and open as ours.

Last week we unveiled the new OpenClinica, the product of two years of effort based on the needs, insights, and collaboration of our community. Posts here and here highlight OpenClinica’s new features and user experience. I’d encourage you to try it for yourself to get a first-hand perspective. You’ll see a solution that:

  • Provides easy, self-service onboarding for all user types
  • Is built around “Smart paper” eCRFs – richly interactive, mobile-friendly forms that autosave your data yet give you as much control over layout as if you were designing them in Word
  • Gives you a collaborative, drag-and-drop study build environment with a validated design → test → production pathway for your studies and amendments

We couldn’t deliver these enhancements without under-the-hood changes that are just as significant as what you see in the user interface. So what are some of those changes?

What’s under the Hood?

The new OpenClinica is a multi-tenant cloud platform that embraces open source, automates provisioning, and provides validated traceability, massive scalability, and high-grade security.

By breaking from the monolithic application model and turning to a microservices model built for the cloud, we’re able to deploy more user-friendly, productivity-enhancing features quickly and reliably. A traditional monolithic software application encapsulates all its functionality in a single code base, continues to grow in complexity as it adds new features, and requires deep expertise in the code base to fix or improve even minor things. With the microservices model, each service performs a small set of business functions, and is built around a web services API from the start, making it far easier to configure, integrate, and orchestrate functionality within the platform and with third-party systems.
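To make the contrast concrete, here is a minimal sketch, using only Python’s standard library, of a service in the microservices style: one narrowly scoped business function exposed through a web API. The service, route, and payload below are hypothetical, not OpenClinica’s actual internals.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical microservice sketch: a single small business function
# (reporting a form's status) fronted by a web API. In a real platform,
# many such services are composed and orchestrated via their APIs.

def form_status(form_id):
    # The one business function this service owns.
    return {"formId": form_id, "status": "published"}

class FormServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /forms/demographics/status
        parts = self.path.strip("/").split("/")
        if len(parts) == 3 and parts[0] == "forms" and parts[2] == "status":
            body = json.dumps(form_status(parts[1])).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# To run the service:
# HTTPServer(("localhost", 8080), FormServiceHandler).serve_forever()
```

Because the business logic lives in a small, self-contained function behind an HTTP interface, it can be deployed, scaled, and replaced independently of every other service, which is precisely the property the monolithic model lacks.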

For those of you familiar with the OpenClinica 3 architecture, a few of the key changes are:

  • Separation of study build and study runtime functions into discrete systems
  • The ability for the study build system to publish a study definition (or updates thereof) to test and production environments to simplify validation / training / UAT / re-use
  • A new model for building forms, based around the powerful and widely used XLSForm standard.
  • Use of separate database schemas for each study, improving scalability, portability, and performance, and better supporting the full study lifecycle
  • A single sign-on mechanism across the OpenClinica systems and services, with the ability to plug into third-party authentication mechanisms more easily
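To give a taste of the XLSForm model, here is a made-up minimal “survey” sheet written out as CSV via Python. A real XLSForm lives in a spreadsheet, but the one-row-per-question structure, the type/name/label columns, and the XPath-style constraint syntax shown here are the same; the specific fields are invented for illustration.

```python
import csv
import io

# A minimal, invented example of an XLSForm "survey" sheet: one row per
# question, with the standard type/name/label columns plus a constraint.
survey_rows = [
    ["type",              "name",   "label",           "constraint"],
    ["text",              "subj",   "Subject ID",      ""],
    ["integer",           "weight", "Weight (kg)",     ". >= 30 and . <= 300"],
    ["select_one yes_no", "smoker", "Current smoker?", ""],
]

buf = io.StringIO()
csv.writer(buf).writerows(survey_rows)
print(buf.getvalue())
```

The constraint column (`. >= 30 and . <= 300`) is what drives real-time edit checks at the point of data entry: the form engine evaluates the expression against the current field value as the user types.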

Infrastructure of the new OpenClinica: study build and runtime systems.

Together, these changes make OpenClinica easier, more powerful, smarter, and more open.

Our vision is to harness open technology to make research better. The new OpenClinica is fast to adopt, simple to learn, and a joy to use, whether you’re a data manager, CRA, investigator, or study subject. It doesn’t just capture data: it empowers the entire trial team to work fast, with high quality, and to monitor and respond in real time to challenges that arise. It enables the rapid exchange of clinical, laboratory, imaging, and patient-reported data, with the intelligence to take action on it.

Compliance with all pertinent FDA and EMA regulations, including GCP, 21 CFR Part 11, Annex 11, and HIPAA, continues unchanged, as does our SSAE 16 SOC 2/ISO certification.

It doesn’t end there, either. The new architecture provides greater scalability, redundancy, and zero downtime; is modular, built for integration and extension; and is able to evolve faster and more flexibly.

For today’s complex clinical research needs, the leverage we get from modern cloud, DevOps, test/build automation, best-of-breed frameworks designed for microservices is enormous. But there are some trade-offs, including the need to manage 10-50 services at a time instead of 3 (database, application server, application) in the old model. Thus, the new OpenClinica is built to be consumed as a native cloud-based solution. This means that it’s not feasible to provide an easily downloadable ‘Community Edition’. Even with a massive amount of effort, packaging all of those services up into a straightforward download/install process that works reliably in a generic set of environments would be hard. Or, as one of our engineers put it, “a nightmare-inducing morass of things-that-could-go-wrong when it’s not used in the context it was specifically designed for.” By highly automating our cloud environment, we are focusing our engineering resources on getting the most secure, fail-safe, fastest, and highest quality user experience possible shipped and available for your research.

For all the added muscle, the new platform retains the heart of OpenClinica. Most of the database schema is the same, and much of the OC3 source code has been adapted to work within the new architecture. The commercial open source ethos OpenClinica was founded on continues to guide the OpenClinica LLC team and inform our work.

What’s new in OC4?

  • Better forms:
    • Real-time edit checks and skip logic
    • Easier and more powerful logic for validation, skips, calculations, and inserts
    • Real-time, field-by-field autosave
    • Mobile-friendly design and UX suitable for any device
    • Layout options for every use case
  • Study Designer™:
    • Build studies using a drag-and-drop interface
    • 1-click publish to test/production environment
    • Preview forms and edit checks
    • Define study events, append associated forms, and more
    • Collaborate in real-time with other users while building and testing studies
    • Easily re-use forms and events
    • Form and protocol versioning
  • Data extracts that reflect clearer form versioning
  • Self-service training embedded into the user interface
  • Built for performance and scalability
  • Updates delivered safely, seamlessly, and automatically
  • A modernized technology infrastructure enabling future enhancements, including:
    • Better SDV, reporting, coding, configurable permissions, and metrics
    • Self-service startup to select your plan and immediately provision your instance
    • Libraries of reusable items, forms, users, and sites
    • More efficient handling of “reason for change” and cross form-edit checks
    • More capable and consistent API

How will this affect my existing OC3 studies?

For existing users, we recommend starting new studies on the new OpenClinica cloud while you keep your OC3 installation for studies already underway. Studies you are conducting on OpenClinica 3 will continue to receive the best-in-class support you’ve always known, all the way through to their completion. While major new features and improvements will focus on the new platform only, we have made more than 50 improvements to OC3 in 2017 alone, and will continue to produce maintenance releases as long as existing studies remain in production. We’re not yet able to migrate existing study data from OC3 to OC4, given differences in the form definition model. We’re working on it!

Can I use both OC3 and OC4 at the same time?

Yes! We recommend starting new studies on OC4, with one exception: studies requiring double data entry. This feature is not currently supported on OC4.

How will I get trained and request support?

Integrated training, help, documentation, tutorials, and videos are embedded right into the OC4 application. In addition, our stand-alone training modules and dedicated client services team will be available as always to ensure your success.

What do I have to learn?

The new OpenClinica is far more intuitive, and has tutorials and guides embedded in the application. The biggest change between OC3 and OC4 is the form design model. We offer plenty of resources, training, and examples to help you understand the difference. We’ll be releasing a drag-and-drop form designer and enhanced form library capabilities very soon.

What is the cost?

Plans and pricing are the same for OC4 as for OC3, and based on the number of studies you are actively running. To calculate the number of studies, we’ll add together the studies you have on OC4 and OC3.

  • You will keep your current pricing through the end of your contract. At the end of that term, you may renew at then-current pricing, whether you upgrade to OC4 or not.
  • Although we stopped offering plans with ‘unlimited’ studies some time ago, some long-standing customers remain on unlimited plans. At the end of their contract, we will assist these customers in selecting a plan that includes ample studies for current and projected projects at a per-study price that remains cost-effective.
  • We’ve introduced some new plan options whose flexibility and scope we believe will better serve you. Discounts are available for academic institutions and hospitals.

Is OpenClinica still open source?

As with OpenClinica 3, most (but not all) of the new OC is open source and available on GitHub. OpenClinica was conceived as a commercial open source project–one that has thrived in large part due to the enthusiasm of developers, domain experts, and practitioners who know that collaborative innovation benefits everyone.

Key motivators for open sourcing OpenClinica have been, and continue to be, (1) encouraging innovation, (2) maintaining transparency, and (3) enabling researchers. These principles continue to guide us and inform our work. The new model aims to improve the way these goals are met by:

  • prioritizing integration capabilities of the platform
  • keeping key parts open
  • incorporating and contributing significantly to widely used, third-party OSS projects, and
  • providing a cost-effective and quickly deployable solution, thereby avoiding the potential technical hurdles, security pitfalls, and time costs of maintaining a self-provisioned instance.

 

How the new OpenClinica serves each motivator for open sourcing:

  • Encourage innovation: prioritizes integration capabilities of the platform
  • Maintain transparency: keeps key parts open and accessible; contributes significantly to widely used, third-party OSS projects; continues the RSP, which facilitates structured, complete audits for users requiring it
  • Enable researchers: provides an affordable and quickly deployable solution, thereby avoiding the potential technical hurdles, security pitfalls, and time costs of maintaining a self-provisioned instance

In the near future we will offer a low-cost plan, “OC Ready”, that gives you access to the key features of the new OpenClinica, including the new protocol designer, form engine, and more.

How can I contribute to this innovation?

Open source, open APIs, and readily available tutorials empower developers and technical users to push the envelope of what’s possible. We plan on releasing an OpenClinica “toolkit” for building integrations and health research apps that guarantee high-integrity, trustworthy data and rigorous standards of quality. When it’s available, developers of all experience levels will be able to:

  • Obtain an API key
  • Download the toolkit/SDK
  • Consult API docs & tutorials
  • See an example module
  • Build a UI or integration module using the toolkit
  • Share their module
  • Improve an existing module
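The “obtain an API key, then call the API” flow above can be sketched with Python’s standard library. The base URL, endpoint path, and authorization scheme below are placeholders, not OpenClinica’s actual API; consult the API docs and tutorials for the real values once the toolkit is available.

```python
import urllib.request

# Hypothetical sketch of an authenticated API request. The URL, path,
# and "Bearer" header scheme are assumptions for illustration only.
API_KEY = "your-api-key-here"
BASE_URL = "https://example-instance.example.com/api"

def build_request(path):
    """Construct (but do not send) an authenticated GET request."""
    return urllib.request.Request(
        BASE_URL + path,
        headers={
            "Authorization": "Bearer " + API_KEY,
            "Accept": "application/json",
        },
    )

req = build_request("/studies")
print(req.full_url)
# Sending it would be: urllib.request.urlopen(req)
```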


The OC4 Toolkit is still under development, so stay tuned!

The new OpenClinica is here! (Part 2 of 3)

In the previous post in this series we covered how the new OpenClinica makes study build, change control, and collaboration so much easier. This improved user experience doesn’t end at first patient in. Take a journey through the heart of the new OpenClinica: its incredible form engine.


An incredible study build system. eCRFs your sites have been waiting for. What ties it all together? Find out in our next post.

The new OpenClinica is here! (Part 1 of 3)

Data Managers! Does this sound like you?

  • “Protocols change. I need a fast, reliable way to make needed updates without having to worry about breaking things.”
  • “I need a way to update forms more quickly, before and after study start.”
  • “My eCRF build/test/deploy process is too prone to human error.”
  • “I want my study team to see and take action on what needs to get done today. Then they can apply their brain power to the hard stuff.”
  • “I’m done with paper forms. My sites need fast, real-time data entry flows that match how they actually work.”

If so, you’re not alone. Since the advent of eClinical, we’ve settled for making paper processes work “on a computer.” Web-based technologies may have made your work faster. But so far, in moving away from paper, we’ve only traded crawling for walking.

Now it’s time to fly.

Today, we officially release the new OpenClinica, a leap forward in making research both faster and smarter. We’ve spent the last 18 months listening to users, looking at eClinical through the eyes of data managers, coordinators, and participants. We’ve tested beta versions of new technology and refused to settle for any experience that wasn’t seamless, efficient – even beautiful.

Now, you can expect to move from study idea to data extract in a logical, reliable, and speedy way. The new OpenClinica retains and adds to the power of prior generations of OpenClinica, while entirely rethinking how you get work done.

Here’s a tour of our study build system in six screen captures. Click the down arrow at the bottom of the frame to advance. 


 
And now you’re ready to empower your sites and participants with the user experience they’ve been waiting for. We’ll share that in our next post!

In the meantime, here are two ways to get some hands-on experience!

  1. Jump to the head of the line to try out the new OpenClinica. Click here to schedule an implementation kick-off call. 
  2. Looking for a deeper dive into the infrastructure and full capabilities? Join us in Amsterdam for OC17!

The OC17 Program is here, and it’s bursting at the seams

Click here to learn more and register!

The OpenClinica community has proven once again its passion for innovation and knowledge sharing. We received a record number of abstracts exemplifying the OC17 theme, Making the Complex Simple. Best of all, the diversity of expertise among our community yielded a set of sessions that covers a wide swath of data management challenges and solutions. To claim that OC17 “has it all” would be a cliché (and, given the breadth of our field, an obvious exaggeration). But with sessions on pseudonymization, patient registries, biobanking, medical imaging integration, and more, this year’s conference will deliver case studies and how-to’s that almost certainly bear on your research. So let me offer another, more defensible, cliché: OC17 has something–and more likely two or three things–for everyone.

Below you’ll find the program as of today, September 26th. (Order is subject to change.)

Monday, December 4: Track 1

Plenary session, “The Story of OC4” | Cal Collins, CEO – OpenClinica

OC4’s ultra-capable forms | Ben Baumann, COO – OpenClinica

Multi-system subject tracking, screening automation, and data exportation | Patrick Murphy – RTI

50,000 subjects, 15 countries, and 1 (multilingual) OpenClinica study | Gerben Rienk Visser – TrialDataSolutions

OC4 architecture | Krikor Krumlian, CTO – OpenClinica

Risk-adapted strategies using OC in an observational trial with a marketed product | Edgar Smeets, PhD, CCRA – Smeets Independent Consultant – SIC

A Plan for Getting Started with Risk-Based Monitoring | Artem Adrianov, PhD, CEO – Cyntegrity

Monday, December 4, Track 2

Plenary session, “The Story of OC4” | Cal Collins, CEO – OpenClinica

Essential reports for the CRC, data manager, monitor and sponsor | Bryan Farrow, eClinical Catalyst – OpenClinica

EUPID services for pseudonymization and privacy-preserving record linkage in OpenClinica | Markus Falgenhauer – Austrian Institute of Technology (AIT)

MIMS-OC – A medical image management system for OpenClinica | Martin Kropf – Charité Berlin

Working with OC Insight – OpenClinica’s Configurable Data Visualization Dashboard | Lindsay Stevens, CTDM Specialist – OpenClinica

The RadPlanBio approach to building biomedical research platforms around OpenClinica | Tomas Skripcak – German Cancer Consortium

How to Implement SDTM in Your Study | Mark Wheeldon, CEO – Formedix

Tuesday, December 5, Track 1

Keynote address | Dr. Andrew Bastawrous – Peek Vision

Late-breaking sponsor session | Announcement forthcoming

eConsent as a validated application of OpenClinica’s Participate forms | Brittney Stark, Project Manager – OpenClinica

WORKSHOP: Designing, Publishing, and Sharing Your Study in OC4 | Laura Keita, Director of Compliance and Training – OpenClinica

WORKSHOP: Inside OpenClinica’s APIs | Krikor Krumlian, CTO – OpenClinica

Tuesday, December 5, Track 2

Keynote address | Dr. Andrew Bastawrous – Peek Vision

Combining a nationwide prospective patient registry and multiple RCTs with OpenClinica | Nora Hallensleben and Remie Bolte – Dutch Pancreatitis Study Group

Open Conversation: A dialogue between OpenClinica and OpenSpecimen | Srikanth Adiga, CEO – OpenSpecimen; Aurélien Macé – Data Manager, FIND Diagnostics

WORKSHOP: A Tour of OpenClinica Modules and Integrations | Mike Sullivan, Senior Account Executive – OpenClinica

WORKSHOP: Making the Jump from OC3 to OC4 | Iryna Galchuk, Solutions Engineer – OpenClinica

Details are still in the works, but we will host a “cocktails, conversations, and demos” reception on Monday evening, and cruise Amsterdam’s canals on Tuesday evening.

We hope you can join us for a pair of memorable days, professionally and culturally. Learn more and register here!

 

The Change Business

You’re in the change business.

Is that what you thought of as you arrived at work today? It’s true. Working in clinical research, you bring positive change to the world, through the discovery, testing, and dissemination of therapies that improve people’s health.

No doubt your role (and mine) is a lot more specific and focused than that. It has to be, because clinical research is all about the details. To achieve the big changes, we need to implement, control, and communicate many other, smaller changes on a constant basis.

Sometimes, change is initiated from within. Sometimes, it’s imposed from outside. And at times, the whole context shifts. This last kind of change has dominated research for the past few years. Mobile technology, patient-centricity, healthcare upheavals, economic pressures, real-time monitoring, and genomic medicine are changing the context of how we approach research, and the nature of what we’re trying to accomplish.

Responding to this type of change requires clarity, purpose, and vision. A decade ago, a few of us started the OpenClinica project to inject a sorely needed dose of flexibility and accessibility into the clinical trials technology landscape. Now, we’re working to make it easy to adapt to the complexities of the new research ecosystem, while continuing to prioritize our original principles of flexibility and accessibility.

The biggest changes to OpenClinica in years are coming next month. As I’ll illustrate in my next post, closer to release, the new OpenClinica is designed to be both easy and powerful. It’s fast to adopt, simple to learn, and a joy to use, whether you’re a data manager, CRA, investigator, or study subject.

One thing that will never change is our commitment to the success of our customers and our community.

Get Your Queries Under Control (Part 1: Query Aging)

Getting to “no further questions” can seem like an endless task

We may all face death and taxes, as Ben Franklin quipped, but data managers are most certainly guaranteed yet a third inconvenience: queries. As long as human beings play a role in data capture, some fraction of this data will defy explicit validation rules, common sense, or both. These entries, when left unresolved, not only create more work for monitors and coordinators, but also delay interim and final locks. And time added to a trial is cost added to a trial.

As dispiriting as this sounds, data managers employing EDC enjoy tremendous advantages in “query control” over their paper-using peers. Paper doesn’t support real-time edit checks, our most effective vaccine in the primary prevention of queries. Neither does paper allow for real-time reporting on the status of our queries. That reporting is crucial in any effort to close existing queries, reduce the number of queries triggered in the future, and shorten the amount of time queries remain open.

By itself, a long list of open queries won’t signal alarming query trends. For that, metrics are required. In a series of posts starting with this one, I’d like to offer guidance on those metrics.

Metric #1: Query Aging

You are a data manager in a study of adults with diabetes. On Monday, a coordinator at one of your sites submits a form indicating a patient weighed in at 15 lbs. Imagine that a query was opened that same day, whether automatically by the EDC system or manually through your own vigilance. The clock is now ticking: how long will it take for the coordinator to correct the data point?

In actual studies, there are dozens or hundreds or even thousands of such clocks ticking away. Some queries may be just hours old, while others could be months old (help!). Eventually, all of them will need to be closed before the database can be locked. The key to locking on time is to effectively manage, from the very start, both the overall number of open queries and the proportion of open queries that are old.

Why does age matter?

The further an open query recedes into the past, the more difficult it becomes to resolve, as new patients and new tasks push the issue further down on the coordinator’s to-do list. Clinicians involved in the original assessment may depart or find their memories of the relevant event fading. We let queries linger at our own risk.

Long, complex studies enrolling hundreds of patients can easily generate thousands of queries. While each open query will have its own origin date, defining age brackets can help you characterize the distribution of their ages. Age brackets are ranges set by a lower limit and upper limit, most commonly expressed in days; 0 to 7 days is one example of a bracket, 8 to 14 days is another. Age brackets must cover, without any overlaps, all possible ages of an open query. Therefore, the first bracket always begins with 0 days open, to accommodate the newest queries, while the last bracket will specify no upper limit at all.

Many EDC systems default to a specific set of brackets (usually 4 or 5) in grouping a study’s open queries. For example, an EDC might group all open queries generated less than 8 days ago into the first bracket, and all open queries generated between 8 and 14 days ago into the second bracket. But these dividing lines do not represent universal constants. You are free to set your own lower and upper limits on your brackets, and you should do so based on the event schedule of your study.

Who are you calling “old”?!

What makes study-specific bracket delimitation a best practice? When it comes to open queries, young and old aren’t just relative to one another, but to the amount of time between consecutive visits. If a query from a patient’s first visit isn’t resolved by her second visit, queries generated by that second visit will push the earlier one (psychologically, administratively) into more precarious territory. By way of analogy, you are much more likely to give prompt, accurate answers to questions about your commute from this morning than to questions about your commute yesterday.

You will never completely prevent query “backlogs” from arising in your studies. But you can put in place one preventative measure based on the idea above. First, identify the shortest span of time between any two consecutive visits in your study. (You can dispense with any outliers; e.g. visits that are just two days apart.) Call that shortest span n days. Now, set your query brackets as follows:

  • First (“youngest”) bracket: 0 to n-1 days old
  • Second bracket: n to 2n-2 days old
  • Third bracket: 2n-1 to 3n-3 days old
  • Fourth bracket: 3n-2 to 4n-4 days old
  • Fifth (“oldest”) bracket: 4n-3 days old or older
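The bracket scheme above is easy to generate programmatically for any value of n. Here is a minimal sketch (the function name is illustrative, not part of any EDC system):

```python
def query_age_brackets(n):
    """Build the five age brackets described above from the shortest
    inter-visit span n (in days).

    Returns a list of (lower, upper) tuples in days; upper is None for
    the open-ended oldest bracket.
    """
    if n < 2:
        raise ValueError("shortest inter-visit span must be at least 2 days")
    brackets = [(0, n - 1)]            # youngest bracket: 0 to n-1 days
    lower = n
    for _ in range(3):
        upper = lower + n - 2          # middle brackets each span n-1 days
        brackets.append((lower, upper))
        lower = upper + 1
    brackets.append((lower, None))     # oldest bracket: 4n-3 days or older
    return brackets
```

Calling `query_age_brackets(14)` reproduces the 14-day example that follows.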

Suppose, for example, that the shortest interval between visits in your study is 14 days. You would then structure your brackets as follows:

  • First (“youngest”) bracket: 0 to 13 days old
  • Second bracket: 14 to 26 days old
  • Third bracket: 27 to 39 days old
  • Fourth bracket: 40 to 52 days old
  • Fifth (“oldest”) bracket: 53 days old or older

Now that you’ve set your brackets, announce them to your colleagues and your sites. Let them know that effective data management for your study means keeping the proportion of open queries in each of these brackets within the following limits:

  • First bracket: at least 35% of your open queries
  • Second bracket: no more than 30% of your open queries
  • Third bracket: no more than 20% of open queries
  • Fourth bracket: no more than 10% of open queries
  • Fifth bracket: no more than 5% of open queries

By holding yourself and sites accountable to keeping roughly two-thirds of your open queries in the first two brackets, you limit the number of queries falling into older brackets, where resolution becomes more difficult due to the passage of time.
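A quick way to monitor compliance with these targets is to bucket the ages of your open queries and compare each bracket’s share against its limit. A minimal sketch, hard-coding the 14-day example brackets (all names and the report format are illustrative):

```python
def bracket_report(query_ages, brackets, targets):
    """Count open queries per age bracket and flag any missed targets.

    query_ages: ages in days of all currently open queries
    brackets:   list of (lower, upper) tuples; upper None = open-ended
    targets:    list of (kind, pct), kind being 'min' or 'max'
    Returns one (bracket_no, count, share_pct, on_target) tuple per bracket.
    """
    total = len(query_ages)
    counts = [0] * len(brackets)
    for age in query_ages:
        for i, (lo, hi) in enumerate(brackets):
            if age >= lo and (hi is None or age <= hi):
                counts[i] += 1
                break
    report = []
    for i, ((kind, pct), count) in enumerate(zip(targets, counts)):
        share = 100 * count / total if total else 0.0
        on_target = share >= pct if kind == "min" else share <= pct
        report.append((i + 1, count, round(share, 1), on_target))
    return report

# The policy above, for a study whose shortest inter-visit span is 14 days
brackets = [(0, 13), (14, 26), (27, 39), (40, 52), (53, None)]
targets = [("min", 35), ("max", 30), ("max", 20), ("max", 10), ("max", 5)]
```

Run weekly against your EDC’s open-query export, a report like this makes drifting brackets visible long before they threaten a lock date.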

Enforcing the policy

There’s a word for a goal with no reward tied to its achievement and no sanction tied to its failure: a hope. Below are some methods for instilling a sense of urgency in sites with open queries.

  • Make a pact. A coordinator will not feel obliged to answer within 3 days a query that it took your team 3 weeks to raise. Your data management team should commit to opening queries within some set period of time following initial data entry. Your discipline will inspire the site’s discipline.
  • Do some good. While you should always check with compliance officers first, consider tying a donation to a patient advocacy group to a study-wide reduction in open queries. For example, you may announce at the start of the month that each percentage point reduction in open queries within brackets 3, 4, and 5 by the end of the month will translate to 10 additional philanthropic dollars, donated by the sponsor organization in the name of all participating sites.
  • Recognize the “always prompt” and “much improved” sites. On the last day of each month, email to all your coordinators a query status report. Indicate how many queries are open within each bracket. Mention by name and congratulate:
    • sites with the fewest queries per data field completed (limited to sites that have supplied data for some minimum number of fields),
    • sites with the shortest average time to close for all queries opened in the prior month (limited to sites that have had some minimum number of queries opened), and
    • sites that have closed the highest percentage of their open queries since the beginning of the month.

In this way, you are recognizing the continuously conscientious sites along with those striving to improve.

Staying on Track

Effective data management requires discipline from you, your monitors, and your sites, but the effort is well worth it. Interim and final locks that occur on time save all trial stakeholders from a lot of anxiety and frustration. A single open query may not seem like an impediment to clean study data, and indeed queries are inevitable and understandable. But every day a query remains open contributes to a delay that’s felt most acutely by the patients you and your sites are working so hard to help.

Register for OC17 today and secure early bird pricing!


We are thrilled to announce the venue and keynote speaker for OC17, our 9th Annual Global Conference, and to open registration for the event.

OC17 Theme and Dates
“Making the Complex Simple”
December 4 – 5 | Sessions and Workshops
December 6 – 8 | Super User Training

Venue
Mövenpick Hotel Amsterdam
Piet Heinkade 11
1019 BR  Amsterdam
Netherlands

Keynote Speaker
Dr. Andrew Bastawrous
Co-Founder and CEO
Peek Vision

Further details, pricing, and a registration form are just a click away. Reserve your spot now to lock in early bird pricing. This year’s theme is “Making the Complex Simple,” and we are proud to offer a venue and speaker exemplifying it. The Mövenpick is a short taxi ride from Amsterdam’s international airport, and we are confident its simple elegance will make your stay a delight. We are especially honored to welcome Dr. Bastawrous as our keynote speaker; his story of the impact innovation can have must be seen to be believed.

We hope you can join us in December as we turn the spotlight on our incredible user community once again. Do not hesitate to email bfarrow@openclinica.com with any questions regarding OC17.

Sincerely,
Ben Baumann, COO