“You’ve Got Findings!”: To the 90’s and Back with GCP E6(R2)

Imagine a time when phones boasted no smarts. When technophiles worshipped noisy gadgets called modems. When The Human Genome Project sounded (to most ears) like the name of a progressive rock band.

Not to your ears, though. Suppose further that you’re an accomplished research professional of the era, medically trained, with degrees and vision to spare. Someday, you’re certain, paper records will give way to more efficient storage and retrieval. Zip disks, maybe? And this digital highway that’s all the rage, the “Web”, well, who knows? When adequate security measures are in place, maybe sites could email a digital CRF to the data manager.

 

A relic from pre-history (i.e. before Facebook timelines). Scholars are still uncertain of the fate of this “virtual settlement.”

It’s that foresight which, in the mid-1990s, has earned you membership on an international committee of your peers, one tasked with devising guidelines to “facilitate the mutual acceptance of clinical data” by regulatory authorities in Europe, Japan, and the United States. At least, that’s the operational goal, and a worthy one, too. You started in this business to push the best research out of silos and ivory towers and into the real world. But that mission is fraught with potential dangers. Not forty years prior, inadequate testing of an immunomodulatory drug led to the births of more than 10,000 children with limb malformations in Germany. Just three years ago, in a trial conducted by the NIH, five participants died of liver toxicity following experimental treatment for hepatitis B. Whatever standard you propose for maximizing the benefits of clinical research, it had better provide “public assurance that the rights, safety and well-being of trial subjects are protected.” After all, your committee is bound by the ethical good in its pursuit of the clinical good. You suggest calling the standard Good Clinical Practice.

Welcome to the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use. And welcome to 1996.

But don’t get comfortable. This is the story of GCP–more precisely, the Guideline for Good Clinical Practice–and it is headed right back to the present, with its first amendment already in force in Europe. Why did the ICH amend its original guideline? Why twenty years later? There’s no single answer, but all the best answers revolve around one theme. Just as our phones have gone on to earn a postdoc in the last 22 years, the possibilities for research have matured, too. What’s more, their modern histories are linked, with technological breakthroughs inspiring the search for clinical ones, and clinical triumphs spurring ever more capable technology. Behind that push and pull, the need to ensure safety and quality remains constant. The story of the GCP’s amendment is a story of time-honored values upheld in new ways.

Preserving the Past

In a literal sense, E6(R1) hasn’t been replaced at all. The authors of E6(R2) chose an integrated addendum format when drafting the updated guidance. The original language, with all that it mandates, remains:

The addendum text clarifies the scope and meaning of preceding terms. Twenty years ago, the word “storage” brought to mind file cabinets. Early in this millennium, we might have pictured hard drive volumes. Today, our heads are in the cloud. But in all of these contexts, storage that is secure and accessible is a must. It’s how we achieve it that differs.

Embracing the New

But just how do we achieve it–meaning everything from security to safety to data quality–in 2018, when so many tasks once considered part of an “honest day’s work” are now specialties of automation, algorithms, and analytics?

Not even the full sixty pages of the document provide a specific, once-and-forever answer. (As a guideline, it shouldn’t.) What E6(R2) does propose is a 21st-century mantra for maximizing safety and quality. The mantra sounds like this:

“Oversee it all. Take action on what matters.”

There are some loaded terms in this phrase. “Oversee” might translate to “maintain real-time digital access,” particularly in the case of processes occurring daily all over the world, such as eConsent. And “take action” needs to cover both proactive and corrective measures. But with those common sense glosses in mind, we could do worse than take the directive above as the crux of E6(R2). What evidence do we have for this reading? Before we look at specific clauses in R2, we can gain a strong sense of how the amendment differs from the original by looking at the frequency of key terms.

Frequency of key terms and contexts in ICH GCP E6(R2) vs. ICH GCP E6(R1).
Clues that the GCP has grasped an important truth. The sheer volume of data now available to us raises probabilistic, “big data”-inspired thinking to a virtue.

Terms like “risk-based” simply weren’t part of the vernacular in 1996, so it’s no surprise that they should make their first appearance only now. But the concept of risk is as old as modern, statistically informed research itself. So why does the word itself occur twice as frequently in R2 as in R1?
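
How might you check a claim like that yourself? Here is a minimal sketch, not the analysis behind the chart above, which assumes (hypothetically) that you have saved plain-text copies of the two guidelines as e6_r1.txt and e6_r2.txt and simply counts whole-word occurrences of a few key terms in each:

    # Count key GCP terms in local plain-text copies of E6(R1) and E6(R2).
    # File names are placeholders; note that "risk" also matches inside "risk-based".
    import re
    from collections import Counter

    TERMS = ["risk", "risk-based", "quality", "monitoring", "electronic"]

    def term_counts(path: str) -> Counter:
        text = open(path, encoding="utf-8").read().lower()
        return Counter({t: len(re.findall(r"\b" + re.escape(t) + r"\b", text)) for t in TERMS})

    r1, r2 = term_counts("e6_r1.txt"), term_counts("e6_r2.txt")
    for term in TERMS:
        print(f"{term:12}  R1: {r1[term]:3}   R2: {r2[term]:3}")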

The answer is that risk is omnipresent now, not primarily in the sense of an unintended consequence, but as a factor in decision-making. How did this come to be the case? Along almost any dimension we consider–target enrollment, sources of data, self-reports–research is doing, and producing, more than it ever has. In some cases, like the breadth of genomic factors analyzed, research is doing more than we thought possible back in 1996. On the other hand, the number of hours in a day has remained disconcertingly flat (anyone working on this?). Human cognitive capacities and attentional reserves likewise remain more or less the same. Our technological capacities have grown by orders of magnitude, and that’s all good. But until self-organizing, self-monitoring trials (powered by AI) are the norm, we humans will continue to serve as the chief executives of research.

While the amendment stops shy of saying it explicitly, R2 recognizes that distributing our time and attention equally among all processes works against safety and quality. That’s because some studies are now so complex, or collect so much data, that line-by-line “box checking” not only becomes impractical, it distracts us from those risks that only become apparent on a “big data”-like view of key metrics. In other words, it’s crucial that we see the risk forest for the data element trees. That’s the message behind much of the amendment text:

From the Introduction:

From section 2, “The Principles of GCP”:

From section 5, “Sponsor”:

Does this mean less rigor in oversight? Just the opposite. The GCP amendment will require more vigilance from all parties, from sponsors and sites to CROs and vendors. It means bringing alertness and analysis to bear in order to find the boxes, not just check them. This isn’t the ICH throwing up its hands now that the “scale, complexity, and cost of clinical trials have increased.” It’s the ICH demanding that we learn, and practice, new survival skills in a new world.

So drop the Nintendo controller. Time to pick up some neural implants.

Intelligent, visual reporting is here!

See the video now, or join the webinar on March 26 or 28.
You know it better than anyone: clear, on-demand reporting is more than a “nice to have” in our field. It’s essential to conducting research that’s efficient and safe. That’s why we’re launching OpenClinica Insight. Insight makes it easy to ask questions of ALL of your clinical and operational data and visualize answers via interactive reports and dashboards.

Learn more at either of these upcoming webinars:
Monday, March 26, 8pm GMT, or
Wednesday, March 28, 8am GMT

 

The idea is simple, but the results are powerful: ask your questions, choose your visualizations, then return often for updated, interactive results that link you to all of the underlying data.

Learn more about Insight now, or schedule a time to speak with our team. We hope you can join us in March.

Join our OC4 Webinar on Monday, January 22

Happy New Year!

We’re excited to start 2018 with our most significant new release in 10 years. We want to give you all the details in one info-packed hour.

On Monday, January 22nd, from 11am to 12pm EST (UTC-5), learn about the smarter way to build and publish studies, create mobile-friendly forms, and manage your data. We’ll also show you the rock-solid security, compliance, performance, and reliability that come with our cloud hosting.

Click here to register!


 

 

What is GDPR (And Why Should I Care)?

Some laws govern the use of crosswalks; others, how to clean up after our pet in public spaces.

Then there’s the EU General Data Protection Regulation (EU GDPR).

Now, safety considerations, not to mention lawful conduct, urge us to cross the street along designated paths. (Just as common courtesy persuades most of us to scoop up our terrier’s natural fertilizer from the sidewalk.) But we all let distraction or the rush of the workday get the better of us from time to time. The costs of ignoring the law are small in this case. Rules against crossing a street outside of a crosswalk are rarely enforced here in the United States. If law enforcement does catch us in the act, punishment ranges from a warning to a small fine.

A repeat violation of the EU GDPR, on the other hand, could cost the guilty party 20,000,000 Euro.

So how do you stay between the lines when it comes to this regulation? Knowledge is the best defense, so without further ado, here are the basic facts you need to know.

What does the GDPR aim to accomplish?

To quote the homepage of eugdpr.org: “The EU General Data Protection Regulation (GDPR) … was designed to harmonize data privacy laws across Europe, to protect and empower all EU citizens’ data privacy, and to reshape the way organizations across the region approach data privacy.”

This is vague, but the second clause does highlight the regulation’s main purpose; namely, “to protect and empower all EU citizens’ data privacy.” Harmonization of data privacy laws may be a boon to data-gathering entities operating in multiple EU countries. But given the amount of text proclaiming the rights and freedoms of data subjects (or stating the duties of data controllers and processors in upholding those rights and freedoms), the motivation of the GDPR is clear. This regulation is about individuals’ rights over information about themselves: when it may be obtained, how it must be protected, and what may or may not be done with it.

Chapter 1, Article 1 of the official regulation (“Subject-matter and objectives”) makes this clear in more legal-sounding language:

1. This Regulation lays down rules relating to the protection of natural persons with regard to the processing of personal data and rules relating to the free movement of personal data.

2. This Regulation protects fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data.

3. The free movement of personal data within the Union shall be neither restricted nor prohibited for reasons connected with the protection of natural persons with regard to the processing of personal data.

What is “personal data”?

Article 4 sets out 26 definitions, and it’s no coincidence that “personal data” is the first:

For the purposes of this Regulation:

1. ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;

Worth noting is the reference to “an online identifier”. The regulation considers IP addresses and cookie strings personal data.

But a legal-sounding definition doesn’t capture the sanctity with which personal data is regarded in the EU. With the exceptions of sensitive health and financial information, data about a person in the U.S. is subject to the principle of “finders keepers” (de facto if not de jure). Corporations routinely lay claim to personal data through an obscure “terms of use” page on their website, or the failure of a customer to explicitly deny the corporation the right to collect his or her data. In Europe, personal data is an aspect of personal dignity. The GDPR is, among other things, an insistence on this cultural fact in light of an increasingly global and data-driven economy.

Who is obligated to follow it?

The GDPR casts a wide net. All persons or organizations that are reasonably construed as either a “data controller” or “data processor” (regardless of whether data control or processing is that entity’s primary function) are subject to the regulation, if any one of three conditions applies.

Who or what constitutes a “data controller”?

The “data controller” is the “natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data.” Typically, this is the entity at the top of the “data solicitation chain”; in the area of clinical research, the sponsor, academic institution, or CRO/ARO.

Who or what constitutes a “data processor”?

The data processor is “a natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller.” Those who play any role in the administration of a database, including software vendors in those cases where the database is digital, are data processors.

What are the conditions under which a data controller or data processor is bound by the GDPR?

If any one of the following conditions obtains for a data controller or data processor, that entity is bound by the GDPR:

  1. The data controller is based in the European Union (regardless of where the data subject is based)
  2. The data processor operating on behalf of the data controller is based in the European Union (regardless of where the data subject is based)
  3. The data subject is based in the European Union

Practically, the safest default assumption is that your research operations are bound by the GDPR. If any participant in your study resides in the EU, or any link in the chain of data custody passes through the EU, or your organization is based in the EU, the GDPR’s applicability is clear.

What must those persons or entities do?

GDPR mandates are best thought of as duties that data controllers and processors have in upholding the rights of data subjects. Articles 12 through 23 enumerate the rights of the data subject. No summary is adequate to convey all of the particular rights, and for that reason it is incumbent on all data controllers and processors to read, understand, and abide by practices which uphold these rights. But for the purposes of this primer, we can think of these rights as granting persons the following powers and assurances.

Powers

  • To provide only those elements of personal data they agree to provide, having been fully informed of the purposes and risks of the data collection
  • To provide such data only to those entities he or she wishes
  • To rectify or request the erasure of their personal data
  • To access the personal data collected from them in a readable and standardized format (note that this does not necessarily mean in a native spoken language)
  • To contest significant decisions affecting them (e.g., those of employment or legal action) that are computed solely by an algorithm operating on their personal data
  • To seek legal redress for the failure of a data controller or data processor to respect these powers or to maintain the following assurances

Assurances

  • The data subject shall not be identifiable from the personal data, through use of secure pseudonymization protocols (e.g. assigning an alphanumeric identifier to a data subject and/or an element of their personal data, from which publicly identifying information such as the subject’s name, NHS number, address, or birthday cannot be deduced; a minimal sketch follows this list)
  • The data subject will be immediately informed of any breach of their data privacy
  • The data subject’s personal data shall be consulted and processed only for those purposes disclosed to the data subject as part of obtaining his or her informed consent
  • Data controllers shall request from the data subject only those elements of personal data that are essential to the purposes made explicit during the process of informed consent (i.e. data minimization)
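
To make the pseudonymization assurance above concrete, here is a minimal sketch of one possible approach, keyed hashing with Python’s standard library. It is illustrative only, not a GDPR-mandated or vendor-specific method, and how the secret key is stored and managed is an assumption left to the data controller:

    # Derive a stable, alphanumeric pseudonym from identifying details using a secret key.
    # Without the key, the subject's identity cannot be recovered from the pseudonym alone.
    import hmac
    import hashlib

    SECRET_KEY = b"replace-with-a-securely-stored-secret"  # assumption: held only by the controller

    def pseudonym(nhs_number: str, date_of_birth: str) -> str:
        message = f"{nhs_number}|{date_of_birth}".encode("utf-8")
        digest = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
        return "SUBJ-" + digest[:10].upper()

    print(pseudonym("943 476 5919", "1980-05-17"))  # e.g. SUBJ-7C1E2A90BF (value depends on the key)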

What duties do these powers and assurances incur for data controllers and processors? The concept of “data protection by design and default” is a useful, if general, place to start. Before data collection begins, data controllers and processors must establish and document systems and practices that:

  • make it clear to the data subject which elements of their personal data the controller or processor is requesting, and for what purposes
  • make it clear to the data subject which persons or entities will have access to their data
  • maintain the privacy of personal data, e.g., through pseudonymization, data encryption, physical security, etc.
  • prevent the collection of data that is immaterial to the purpose of data gathering

Which sorts of systems and practices qualify as achieving those aims? The answer the regulation gives is, unfortunately, something of a placeholder. Article 42 offers data controllers and processors the opportunity to “certify” their data protection mechanisms, but the certification bodies and requirements for certification are all unspecified. (Paragraph 3 even states that “the certification shall be voluntary.”)

For better or worse, data controllers and processors seem to bear the burden of selecting – and justifying to unspecified “certification bodies” – the technical criteria by which the GDPR will assess their data protection measures.

This is perhaps both a problem and an opportunity. Better that minimum encryption standards, for instance, go unspecified (for now) than be made to conform to some arbitrary decree. As data controllers and processors, we can take an active role in establishing these and other criteria in a way that serves data protection and efficient data flow.

When does the regulation go into effect?

The regulation becomes effective on Friday, May 25th, 2018. This is a universal implementation date: national governments do not have authority to legislate a later (or earlier) effective date.

Who is in charge of enforcing it?

The European Parliament, the Council of the European Union and the European Commission are the governing bodies with respect to the EU GDPR.

What are the penalties for non-compliance?

If you’re looking for concreteness among all the abstraction of the GDPR, look no further than Article 83, “General conditions for imposing administrative fines.” All nine paragraphs set forth factors in arriving at a sanction for negligence or willful violation. But paragraph 6 will continue to attract the most attention: “(6) Non-compliance […] shall […] be subject to administrative fines up to 20 000 000 EUR, or in the case of an undertaking, up to 4 % of the total worldwide annual turnover of the preceding financial year, whichever is higher.”

Is that all there is to GDPR?  

Unfortunately no. If regulatory compliance were as easy as nodding along to a blog post, we’d never hear of violations. Then again, we’d hear about far more, and more severe, privacy breaches. Remaining compliant with all of the regulations that bear on clinical research may be a logistical burden, but it’s the right thing to do. You wouldn’t knowingly expose a patient (or their identity) to harm, but that’s what non-compliance amounts to: thousands of seemingly minor risks that make at least one catastrophe almost inevitable. So continue to educate yourself. We’ll help in that effort with a series of blog posts that begins with this one. And if the moral imperative of compliance doesn’t motivate you, consider the impact that non-compliance could have on your business or organization. You really don’t want to step in this natural fertilizer.

OC17 is just 32 days away! Here’s the detailed program.

If you’ve been waiting for more detailed session information to register, wait no more. You’ll find all the session and workshop abstracts, with speaker bios, on our OC17 resource page.

See the OC17 session abstracts

OC17 is also your chance to see the new OpenClinica up close. Take a “deep dive” into the all-new study build system and form engine.


See more of the new OpenClinica

 

Do not hesitate to email bfarrow@openclinica.com with any questions regarding OC17.

The new OpenClinica is here! (Part 3 of 3)

The story of OpenClinica is a story of customer-driven innovation. No other eClinical platform has a community as passionate and open as ours.

Last week we unveiled the new OpenClinica, the product of two years of effort based on the needs, insights, and collaboration of our community. Posts here and here highlight OpenClinica’s new features and user experience. I’d encourage you to try it for yourself to get a first-hand perspective. You’ll see a solution that:

  • Provides easy, self-service onboarding for all user types
  • Is built around “Smart paper” eCRFs – richly interactive, mobile-friendly forms that autosave your data yet give you as much control over layout as if you were designing them in Word
  • Gives you a collaborative, drag-and-drop study build environment with validated design->test->production pathway for your studies and amendments

We couldn’t deliver these enhancements without under-the-hood changes that are just as significant as what you see in the user interface. So what are some of those changes?

What’s under the Hood?

The new OpenClinica is a multi-tenant cloud platform that embraces open source, automates provisioning, and provides validated traceability, massive scalability, and high-grade security.

By breaking from the monolithic application model and turning to a microservices model built for the cloud, we’re able to deploy more user-friendly, productivity-enhancing features quickly and reliably. A traditional monolithic software application encapsulates all its functionality in a single code base, continues to grow in complexity as it adds new features, and requires deep expertise in the code base to fix or improve even minor things. With the microservices model, each service performs a small set of business functions, and is built around a web services API from the start, making it far easier to configure, integrate, and orchestrate functionality within the platform and with third-party systems.
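
As a purely illustrative sketch (the service names, URLs, and endpoints below are hypothetical placeholders, not OpenClinica’s actual APIs), orchestrating microservices looks less like calling functions inside one code base and more like composing small web APIs:

    # Hypothetical example: compose two independent services over their web APIs.
    import requests

    STUDY_SERVICE = "https://study.example.com/api"   # hypothetical study-definition service
    FORM_SERVICE = "https://forms.example.com/api"    # hypothetical form-engine service

    def publish_to_environment(study_oid: str, environment: str) -> dict:
        """Fetch a study definition from one service, then ask another to render its forms."""
        study = requests.get(f"{STUDY_SERVICE}/studies/{study_oid}", timeout=30).json()
        rendered = [
            requests.post(f"{FORM_SERVICE}/render", json={"form": form, "env": environment}, timeout=30).json()
            for form in study.get("forms", [])
        ]
        return {"study": study_oid, "environment": environment, "forms": rendered}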

For those of you familiar with the OpenClinica 3 architecture, a few of the key changes are:

  • Separation of study build and study runtime functions into discrete systems
  • The ability for the study build system to publish a study definition (or updates thereof) to test and production environments to simplify validation / training / UAT / re-use
  • A new model for building forms, based around the powerful and widely used XLSForm standard (see the sketch after this list).
  • Use of separate database schemas for each study, increasing scalability, portability, performance, and better support of the full study lifecycle
  • A single-sign-on mechanism across the OpenClinica systems and services, with ability to plug in to third party authentication mechanisms more easily
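
For readers new to XLSForm, here is a minimal sketch of what a form definition looks like. The fields and file name are invented for illustration (not taken from an OpenClinica template), and the openpyxl package (pip install openpyxl) is assumed for writing the spreadsheet:

    # Build a tiny XLSForm: the "survey" sheet defines fields, the "choices" sheet defines options.
    from openpyxl import Workbook

    wb = Workbook()

    survey = wb.active
    survey.title = "survey"
    survey.append(["type", "name", "label", "relevant", "constraint"])
    survey.append(["integer", "age", "Age (years)", "", ". >= 18 and . <= 120"])
    survey.append(["select_one sex", "sex", "Sex at birth", "", ""])
    survey.append(["select_one yes_no", "pregnant", "Currently pregnant?", "${sex} = 'female'", ""])

    choices = wb.create_sheet("choices")
    choices.append(["list_name", "name", "label"])
    choices.append(["sex", "female", "Female"])
    choices.append(["sex", "male", "Male"])
    choices.append(["yes_no", "yes", "Yes"])
    choices.append(["yes_no", "no", "No"])

    wb.save("demo_crf.xlsx")  # ready to upload to an XLSForm-aware form builder

The same spreadsheet conventions drive skip logic (the “relevant” column) and edit checks (the “constraint” column), which is what makes form definitions easy to review, version, and re-use.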

Infrastructure of the new OpenClinica: study build and runtime systems.

Together, these changes make OpenClinica easier, more powerful, smarter, and more open.

Our vision is to harness open technology to make research better. The new OpenClinica is fast to adopt, simple to learn, and a joy to use, whether you’re a data manager, CRA, investigator, or study subject. It doesn’t just capture data: it empowers the entire trial team to work fast, with high quality, and to monitor and respond in real time to challenges that arise. It enables the rapid exchange of clinical, laboratory, imaging, and patient-reported data, with the intelligence to take action on it.

Compliance with all pertinent FDA and EMA regulations, including GCP, 21 CFR Part 11, Annex 11, and HIPAA, continues unchanged, as does our SSAE-16 SOCII/ISO certification.

It doesn’t end there, either. The new architecture provides greater scalability, redundancy, and zero downtime; is modular, built for integration and extension; and is able to evolve faster and more flexibly.

For today’s complex clinical research needs, the leverage we get from modern cloud, DevOps, test/build automation, and best-of-breed frameworks designed for microservices is enormous. But there are some trade-offs, including the need to manage 10-50 services at a time instead of 3 (database, application server, application) in the old model. Thus, the new OpenClinica is built to be consumed as a native cloud-based solution. This means that it’s not feasible to provide an easily downloadable ‘Community Edition’. Even with a massive amount of effort, packaging all of those services up into a straightforward download/install process that works reliably in a generic set of environments would be hard. Or, as one of our engineers put it, “a nightmare-inducing morass of things-that-could-go-wrong when it’s not used in the context it was specifically designed for.” By highly automating our cloud environment, we are focusing our engineering resources on getting the most secure, fail-safe, fastest, and highest quality user experience possible shipped and available for your research.

For all the added muscle, the new platform retains the heart of OpenClinica. Most of the database schema is the same. Much of the OC3 source code has been adapted to work within the new architecture. As with OpenClinica 3, most (but not all) of it is open source and available on GitHub. OpenClinica was conceived as a commercial open source project–one that has thrived in large part due to the enthusiasm of developers, domain experts, and practitioners who know that collaborative innovation benefits everyone. This ethos continues to guide the OpenClinica LLC team and inform our work.

What’s new in OC4?

  • Better forms:
    • Real-time edit checks and skip logic
    • Easier and more powerful logic for validation, skips, calculations, and inserts
    • Real-time, field-by-field autosave
    • Mobile-friendly design and UX suitable for any device
    • Layout options for every use case
  • Study Designer™:
    • Build studies using a drag-and-drop interface
    • 1-click publish to test/production environment
    • Preview forms and edit checks
    • Define study events, append associated forms, and more
    • Collaborate in real-time with other users while building and testing studies
    • Easily re-use forms and events
    • Form and protocol versioning
  • Data extracts that reflect clearer form versioning
  • Self-service training embedded into the user interface
  • Built for performance and scalability
  • Updates delivered safely, seamlessly, and automatically
  • A modernized technology infrastructure enabling future enhancements, including:
    • Better SDV, reporting, coding, configurable permissions, and metrics
    • Self-service startup to select your plan and immediately provision your instance
    • Libraries of reusable items, forms, users, and sites
    • More efficient handling of “reason for change” and cross-form edit checks
    • More capable and consistent API

How will this affect my existing OC3 studies?

For existing users, we recommend starting new studies on the new OpenClinica cloud while you keep your OC3 installation for studies already underway. Studies you are conducting on OpenClinica 3 will continue to receive the best-in-class support you’ve always known, all the way through to their completion. While major new features and improvements will focus on the new platform only, we have made more than 50 improvements to OC3 in 2017 alone, and will continue to produce maintenance releases as long as existing studies remain in production. We’re not yet able to migrate existing study data from OC3 to OC4, given differences in the form definition model. We’re working on it!

Can I use both OC3 and OC4 at the same time?

Yes! We recommend starting new studies on OC4, with one exception: studies requiring double data entry. This feature is not currently supported on OC4.

How will I get trained and request support?

Integrated training, help, documentation, tutorials, and videos are embedded right into the OC4 application. In addition, our stand-alone training modules and dedicated client services team will be available as always to ensure your success.

What do I have to learn?

The new OpenClinica is far more intuitive, and has tutorials and guides embedded in the application. The biggest change between OC3 and OC4 is the form design model. We offer plenty of resources, training, and examples to help you understand the difference. We’ll be releasing a drag-and-drop form designer and enhanced form library capabilities very soon.

What is the cost?

Plans and pricing are the same for OC4 as for OC3, and based on the number of studies you are actively running. To calculate the number of studies, we’ll add together the studies you have on OC4 and OC3.

  • You will keep your current pricing through the end of your contract. At the end of that term, you may renew at then-current pricing, whether you upgrade to OC4 or not.
  • Although we stopped offering plans with ‘unlimited’ studies some time ago, some long-standing customers remain on unlimited plans. At the end of their contract, we will assist these customers in selecting a plan that includes ample studies for current and projected projects at a per-study price that remains cost-effective.
  • We’ve introduced some new plan options whose flexibility and scope we believe will better serve you. Discounts are available for academic institutions and hospitals.

Is OpenClinica still open source?

As with OpenClinica 3, most (but not all) of the new OC is open source and available on GitHub. OpenClinica was conceived as a commercial open source project–one that has thrived in large part due to the enthusiasm of developers, domain experts, and practitioners who know that collaborative innovation benefits everyone.

Key motivators for open sourcing OpenClinica have been, and continue to be, (1) encouraging innovation, (2) maintaining transparency, and (3) enabling researchers. These principles continue to guide us and inform our work. The new model aims to improve the way these goals are met by:

  • prioritizing integration capabilities of the platform
  • keeping key parts open
  • incorporating and contributing significantly to widely used, third party OSS projects, and
  • providing a cost-effective and quickly deployable solution, thereby avoiding the potential technical hurdles, security pitfalls, and time costs of maintaining a self-provisioned instance.

 

Key motivators for open sourcing OpenClinica, and how the new OpenClinica improves the way these goals are met:

  • Encourage innovation: prioritizes integration capabilities of the platform
  • Maintain transparency: keeps key parts open and accessible; contributes significantly to widely used, third-party OSS projects; continues the RSP, which facilitates structured, complete audits for users requiring it
  • Enable researchers: provides an affordable and quickly deployable solution, thereby avoiding the potential technical hurdles, security pitfalls, and time costs of maintaining a self-provisioned instance

In the near future we will offer a low-cost plan, “OC Ready”, that gives you access to the key features of the new OpenClinica, including the new protocol designer, form engine, and more.

How can I contribute to this innovation?

Open source, open APIs, and readily available tutorials empower developers and technical users to push the envelope of what’s possible. We plan on releasing an OpenClinica “toolkit” for building integrations and health research apps that guarantee high-integrity, trustworthy data and rigorous standards of quality. When it’s available, developers of all experience levels will be able to:

  • Obtain an API key
  • Download the toolkit/SDK
  • Consult API docs & tutorials
  • See an example module
  • Build a UI or integration module using the toolkit
  • Share their module
  • Improve an existing module


The OC4 Toolkit is still under development, so stay tuned!
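
In the meantime, here is a sketch of what a toolkit-based integration might feel like. Since the toolkit is unreleased, the URL, endpoint path, and authentication header below are hypothetical placeholders, not a published OpenClinica API:

    # Hypothetical example of calling a REST endpoint with an API key.
    import requests

    BASE_URL = "https://example.openclinica.io/api"  # placeholder instance URL
    API_KEY = "your-api-key"                         # placeholder key, obtained per the steps above

    def list_participants(study_oid: str) -> list:
        """Fetch the participants of a study from a hypothetical endpoint."""
        response = requests.get(
            f"{BASE_URL}/studies/{study_oid}/participants",
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()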

The new OpenClinica is here! (Part 2 of 3)

In the previous post in this series we covered how the new OpenClinica makes study build, change control, and collaboration so much easier. This improved user experience doesn’t end at first patient in. Take a journey through the heart of the new OpenClinica: its incredible form engine.

 


 

An incredible study build system. eCRFs your sites have been waiting for. What ties it all together? Find out in our next post.

The new OpenClinica is here! (Part 1 of 3)

Data Managers! Does this sound like you?

  • “Protocols change. I need a fast, reliable way to make needed updates without having to worry about breaking things.”
  • “I need a way to update forms more quickly, before and after study start.”
  • “My eCRF build/test/deploy process is too prone to human error.”
  • “I want my study team to see and take action on what needs to get done today. Then they can apply their brain power to the hard stuff.”
  • “I’m done with paper forms. My sites need fast, real-time data entry flows that match how they actually work.”

If so, you’re not alone. Since the advent of eClinical, we’ve settled for making paper processes work “on a computer.” Web-based technologies may have made your work faster. But so far, in moving away from paper, we’ve only traded crawling for walking.

Now it’s time to fly.

Today, we officially release the new OpenClinica, a leap forward in making research both faster and smarter. We’ve spent the last 18 months listening to users, looking at eClinical through the eyes of data managers, coordinators, and participants. We’ve tested beta versions of new technology and refused to settle for any experience that wasn’t seamless, efficient – even beautiful.

Now, you can expect to move from study idea to data extract in a logical, reliable, and speedy way. The new OpenClinica retains and adds to the power of prior generations of OpenClinica, while entirely rethinking how you get work done.

Here’s a tour of our study build system in six screen captures. Click the down arrow at the bottom of the frame to advance. 


 
And now you’re ready to empower your sites and participants with the user experience they’ve been waiting for. We’ll share that in our next post!

In the meantime, here are two ways to get some hands-on experience!

  1. Jump to the head of the line to try out the new OpenClinica. Click here to schedule an implementation kick-off call. 
  2. Looking for a deeper dive into the infrastructure and full capabilities? Join us in Amsterdam for OC17!