The new OpenClinica is here! (Part 3 of 3)

The story of OpenClinica is a story of customer-driven innovation. No other eClinical platform has a community as passionate and open as ours.

Last week we unveiled the new OpenClinica, the product of two years of effort based on the needs, insights, and collaboration of our community. Posts here and here highlight OpenClinica’s new features and user experience. I’d encourage you to try it for yourself to get a first-hand perspective. You’ll see a solution that:

  • Provides easy, self-service onboarding for all user types
  • Is built around “Smart paper” eCRFs – richly interactive, mobile-friendly forms that autosave your data yet give you as much control over layout as if you were designing them in Word
  • Gives you a collaborative, drag-and-drop study build environment with a validated design → test → production pathway for your studies and amendments

We couldn’t deliver these enhancements without under-the-hood changes that are just as significant as what you see in the user interface. So what are some of those changes?

What’s under the Hood?

The new OpenClinica is a multi-tenant cloud platform that embraces open source, automates provisioning, and provides validated traceability, massive scalability, and high-grade security.

By breaking from the monolithic application model and turning to a microservices model built for the cloud, we’re able to deploy more user-friendly, productivity-enhancing features quickly and reliably. A traditional monolithic software application encapsulates all its functionality in a single code base, continues to grow in complexity as it adds new features, and requires deep expertise in the code base to fix or improve even minor things. With the microservices model, each service performs a small set of business functions, and is built around a web services API from the start, making it far easier to configure, integrate, and orchestrate functionality within the platform and with third-party systems.

For those of you familiar with the OpenClinica 3 architecture, a few of the key changes are:

  • Separation of study build and study runtime functions into discrete systems
  • The ability for the study build system to publish a study definition (or updates thereof) to test and production environments to simplify validation / training / UAT / re-use
  • A new model for building forms, based around the powerful and widely used XLSForm standard.
  • Use of separate database schemas for each study, increasing scalability, portability, and performance, and better supporting the full study lifecycle
  • A single-sign-on mechanism across the OpenClinica systems and services, with ability to plug in to third party authentication mechanisms more easily
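The XLSForm-based form model deserves a closer look. An XLSForm is simply a spreadsheet whose “survey” sheet lists one row per field. A minimal sketch (the field names and constraint below are illustrative, not taken from an actual OpenClinica study):

```
type                name        label                     constraint
text                subj_init   Subject initials
integer             age         Age (years)               . >= 18 and . <= 120
select_one yes_no   consented   Informed consent given?
```

Answer choices for select_one fields live on a companion “choices” sheet, and skip logic and calculations are expressed in similar spreadsheet columns, which is what makes the format approachable for data managers who already live in Excel.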

Infrastructure of the new OpenClinica: study build and runtime.

Together, these changes make OpenClinica easier, more powerful, smarter, and more open.

Our vision is to harness open technology to make research better. The new OpenClinica is fast to adopt, simple to learn, and a joy to use, whether you’re a data manager, CRA, investigator, or study subject. It doesn’t just capture data: it empowers the entire trial team to work fast, with high quality, and to monitor and respond in real time to challenges that arise. It enables the rapid exchange of clinical, laboratory, imaging, and patient-reported data, with the intelligence to take action on it.

Compliance with all pertinent FDA and EMA regulations, including GCP, 21 CFR Part 11, Annex 11, and HIPAA, continues unchanged, as does our SSAE 16 SOC 2 / ISO certification.

It doesn’t end there, either. The new architecture provides greater scalability and redundancy with zero downtime; is modular, built for integration and extension; and is able to evolve faster and more flexibly.

For today’s complex clinical research needs, the leverage we get from modern cloud, DevOps, test/build automation, and best-of-breed frameworks designed for microservices is enormous. But there are some trade-offs, including the need to manage 10-50 services at a time instead of 3 (database, application server, application) in the old model. Thus, the new OpenClinica is built to be consumed as a native cloud-based solution. This means that it’s not feasible to provide an easily downloadable ‘Community Edition’. Even with a massive amount of effort, packaging all of those services up into a straightforward download/install process that works reliably in a generic set of environments would be hard. Or, as one of our engineers put it, “a nightmare-inducing morass of things-that-could-go-wrong when it’s not used in the context it was specifically designed for.” By highly automating our cloud environment, we can focus our engineering resources on shipping the most secure, fail-safe, fastest, and highest-quality user experience possible for your research.

For all the added muscle, the new platform retains the heart of OpenClinica. Most of the database schema is the same. Much of the OC3 source code has been adapted to work within the new architecture. As with OpenClinica 3, most (but not all) of it is open source and available on GitHub. OpenClinica was conceived as a commercial open source project–one that has thrived in large part due to the enthusiasm of developers, domain experts, and practitioners who know that collaborative innovation benefits everyone. This ethos continues to guide the OpenClinica LLC team and inform our work.

What’s new in OC4?

  • Better forms:
    • Real-time edit checks and skip logic
    • Easier and more powerful logic for validation, skips, calculations, and inserts
    • Real-time, field-by-field autosave
    • Mobile-friendly design and UX suitable for any device
    • Layout options for every use case
  • Study Designer™:
    • Build studies using a drag-and-drop interface
    • 1-click publish to test/production environment
    • Preview forms and edit checks
    • Define study events, append associated forms, and more
    • Collaborate in real time with other users while building and testing studies
    • Easily re-use forms and events
    • Form and protocol versioning
  • Data extracts that reflect clearer form versioning
  • Self-service training embedded into the user interface
  • Built for performance and scalability
  • Updates delivered safely, seamlessly, and automatically
  • A modernized technology infrastructure enabling future enhancements, including:
    • Better SDV, reporting, coding, configurable permissions, and metrics
    • Self-service startup to select your plan and immediately provision your instance
    • Libraries of reusable items, forms, users, and sites
    • More efficient handling of “reason for change” and cross-form edit checks
    • More capable and consistent API

How will this affect my existing OC3 studies?

For existing users, we recommend starting new studies on the new OpenClinica cloud while you keep your OC3 installation for studies already underway. Studies you are conducting on OpenClinica 3 will continue to receive the best-in-class support you’ve always known, all the way through to their completion. While major new features and improvements will focus on the new platform only, we have made more than 50 improvements to OC3 in 2017 alone, and will continue to produce maintenance releases as long as existing studies remain in production. We’re not yet able to migrate existing study data from OC3 to OC4, given differences in the form definition model. We’re working on it!

Can I use both OC3 and OC4 at the same time?

Yes! We recommend starting new studies on OC4, with one exception: studies requiring double data entry. This feature is not currently supported on OC4.

How will I get trained and request support?

Integrated training, help, documentation, tutorials, and videos are embedded right into the OC4 application. In addition, our stand-alone training modules and dedicated client services team will be available as always to ensure your success.

What do I have to learn?

The new OpenClinica is far more intuitive, and has tutorials and guides embedded in the application. The biggest change between OC3 and OC4 is the form design model. We offer plenty of resources, training, and examples to help you understand the difference. We’ll be releasing a drag-and-drop form designer and enhanced form library capabilities very soon.

What is the cost?

Plans and pricing are the same for OC4 as for OC3, and based on the number of studies you are actively running. To calculate the number of studies, we’ll add together the studies you have on OC4 and OC3.

  • You will keep your current pricing through the end of your contract. At the end of that term, you may renew at then-current pricing, whether you upgrade to OC4 or not.
  • Although we stopped offering plans with ‘unlimited’ studies some time ago, some long-standing customers remain on unlimited plans. At the end of their contract, we will assist these customers in selecting a plan that includes ample studies for current and projected projects at a per-study price that remains cost-effective.
  • We’ve introduced some new plan options whose flexibility and scope we believe will better serve you. Discounts are available for academic institutions and hospitals.

Is OpenClinica still open source?

As with OpenClinica 3, most (but not all) of the new OC is open source and available on GitHub. OpenClinica was conceived as a commercial open source project–one that has thrived in large part due to the enthusiasm of developers, domain experts, and practitioners who know that collaborative innovation benefits everyone.

Key motivators for open sourcing OpenClinica have been, and continue to be, (1) encouraging innovation, (2) maintaining transparency, and (3) enabling researchers. These principles continue to guide us and inform our work. The new model aims to improve the way these goals are met by:

  • prioritizing integration capabilities of the platform
  • keeping key parts open
  • incorporating and contributing significantly to widely used, third party OSS projects, and
  • providing a cost-effective and quickly deployable solution, thereby avoiding the potential technical hurdles, security pitfalls, and time costs of maintaining a self-provisioned instance.

 

How the new OpenClinica improves the way these goals are met:

  • Encourage innovation: prioritizes the integration capabilities of the platform
  • Maintain transparency: keeps key parts open and accessible, contributes significantly to widely used, third-party OSS projects, and continues the RSP, which facilitates structured, complete audits for users who require it
  • Enable researchers: provides an affordable and quickly deployable solution, avoiding the potential technical hurdles, security pitfalls, and time costs of maintaining a self-provisioned instance

In the near future we will offer a low-cost plan, “OC Ready”, that gives you access to the key features of the new OpenClinica, including the new protocol designer, form engine, and more.

How can I contribute to this innovation?

Open source, open APIs, and readily available tutorials empower developers and technical users to push the envelope of what’s possible. We plan on releasing an OpenClinica “toolkit” for building integrations and health research apps that guarantee high-integrity, trustworthy data and rigorous standards of quality. When it’s available, developers of all experience levels will be able to:

  • Obtain an API key
  • Download the toolkit/SDK
  • Consult API docs & tutorials
  • See an example module
  • Build a UI or integration module using the toolkit
  • Share their module
  • Improve an existing module


The OC4 Toolkit is still under development, so stay tuned!
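As a sketch of what that developer flow might feel like, here is a hypothetical API call built with Python’s standard library. The base URL, study identifier, endpoint path, and bearer-token scheme are all assumptions for illustration; the actual OC4 API docs will define the real ones.

```python
import urllib.request

# Hypothetical values -- the real base URL, paths, and auth scheme
# come from your instance and the OC4 API documentation.
API_BASE = "https://example.openclinica.io/api"
API_KEY = "my-api-key"  # obtained via the "Obtain an API key" step above

# Build an authenticated request for a study's metadata (not sent here).
req = urllib.request.Request(
    f"{API_BASE}/studies/S_DEMO/metadata",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Accept": "application/json",
    },
)

print(req.full_url)
# Sending it would be: urllib.request.urlopen(req)
```

From there, a module would parse the JSON response and render a UI or push the data to another system.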

The new OpenClinica is here! (Part 2 of 3)

In the previous post in this series we covered how the new OpenClinica makes study build, change control, and collaboration so much easier. This improved user experience doesn’t end at first patient in. Take a journey through the heart of the new OpenClinica: its incredible form engine.

 


 

An incredible study build system. eCRFs your sites have been waiting for. What ties it all together? Find out in our next post.

The new OpenClinica is here! (Part 1 of 3)

Data Managers! Does this sound like you?

  • “Protocols change. I need a fast, reliable way to make needed updates without having to worry about breaking things.”
  • “I need a way to update forms more quickly, before and after study start.”
  • “My eCRF build/test/deploy process is too prone to human error.”
  • “I want my study team to see and take action on what needs to get done today. Then they can apply their brain power to the hard stuff.”
  • “I’m done with paper forms. My sites need fast, real-time data entry flows that match how they actually work.”

If so, you’re not alone. Since the advent of eClinical, we’ve settled for making paper processes work “on a computer.” Web-based technologies may have made your work faster. But so far, in moving away from paper, we’ve only traded crawling for walking.

Now it’s time to fly.

Today, we officially release the new OpenClinica, a leap forward in making research both faster and smarter. We’ve spent the last 18 months listening to users, looking at eClinical through the eyes of data managers, coordinators, and participants. We’ve tested beta versions of new technology and refused to settle for any experience that wasn’t seamless, efficient – even beautiful.

Now, you can expect to move from study idea to data extract in a logical, reliable, and speedy way. The new OpenClinica retains and adds to the power of prior generations of OpenClinica, while entirely rethinking how you get work done.

Here’s a tour of our study build system in six screen captures.


 
And now you’re ready to empower your sites and participants with the user experience they’ve been waiting for. We’ll share that in our next post!

In the meantime, here are two ways to get some hands-on experience!

  1. Jump to the head of the line to try out the new OpenClinica. Click here to schedule an implementation kick-off call. 
  2. Looking for a deeper dive into the infrastructure and full capabilities? Join us in Amsterdam for OC17!

What DIA Stands For, To OpenClinica

What is it about the annual Drug Information Association meeting that energizes those of us working to improve the eclinical experience? Sure, it’s a terrific opportunity to showcase our products and services to research teams that could benefit from them (read: business development). But it’s more than that. “Just make the sale” is no credo for this industry. We serve those who serve patients, tirelessly working to enhance their lives. It’s impossible not to feel privileged by that responsibility, and the DIA conference is a chance for OpenClinica to demonstrate once again our resolve in meeting it. Every summer, we’re reminded to step back and prove to peers that our business aligns with the all-important goal of making trials as effective as they can be, so that safe and effective medicines get to the right patient at the right time. That means distilling the complex processes behind data capture into a story from which every DIA attendee, from data manager to CRA, can draw inspiration.

Back in February, I suggested an outline for that story: “making the complex easy”. I’m pleased to report that that narrative is gathering momentum. Our upcoming release combines power and ease-of-use in a manner that we believe is unprecedented. It will enable data managers, researchers and study participants to do more in less time, while rediscovering that sense of joy a well-designed web experience offers. It’s our way of keeping trials from turning into ordeals.

So yes, as a business, we want to grow by meeting more research teams and sharing the OpenClinica story with them. The annual DIA conference helps us achieve that. But by bringing together the most accomplished teams in drug development, the conference does more. It’s a place to improve our understanding of the challenges research teams face, and stay accountable to the ideals that led to our founding and all of our growth since then. If you’re attending DIA in Chicago, I hope you’ll find time to visit us at booth #1748, so we can show you just how energized those ideals are keeping us.

My Resolution for 2017

The traditional month for making resolutions is over, but since the first day of the year, I’ve been hard at work on mine. I shared it with my team members early in January, and now I want to make it public.

My resolution for 2017 is to make life easier.

Not necessarily for me–though I’d take it–but for clients and collaborators of OpenClinica. The pace of clinical research will only accelerate, so eclinical software (and its users) will need to keep up or get left behind. While we can always make improvements in processing power and data storage, the big gains will come from empowering our users to do more in less time and to rapidly make smart decisions. In the world of electronic data capture, that means giving data managers powerful tools to get studies started, to seamlessly integrate randomization, EDC, and ePRO, and to gain insight into their queried and missing forms.

As a team, we’ve adopted “making the complex easy” as our theme for 2017. As you’ll see in the posts below, we’re poised to do just that. We have new clients that will challenge us to break new ground. We have new team members bringing that rare combination of domain expertise and raw passion. And we have a growing appreciation of the role ePRO will play in identifying therapies that aren’t only efficacious, but effective. So if my life doesn’t get any easier this year, it’s still bound to be exciting.

Introducing OpenClinica v4

The OC16 Conference (Oct 3-4 in Boston) is two weeks away and I’m thrilled to be there to introduce the all-new OpenClinica.

The OpenClinica project started 11 years ago with two goals in mind: use the web to help clinical researchers get better, faster data, and champion an open technology model that gives users the power to extend and customize it to their needs.

These goals remain our main focus and are as important as ever. Achieving them improves quality, gets new discoveries to patients faster, improves patient satisfaction, and reduces costs. But clinical trials keep getting more complex, expensive, and harder. Better technology is a must, and it’s time for OpenClinica to evolve.

Here’s a quick preview of what we’ll be showing at OC16. It has two main components:

1. OpenClinica 4 is a self-service e-source, patient engagement, and EDC platform with low cost of entry, rapid startup, optimized workflows, and massive scalability. It is designed to save time, work great in mobile environments, and encourage better collaboration. Picture:


  • Working with your colleagues in real time on protocol design, in a visual, drag-and-drop way. When ready, it’s one click to deploy to test, training, and production environments,
  • A fast, mobile-friendly, flexible forms engine with real-time edit checks, powerful expression logic, and advanced multimedia capabilities
  • Quickly, seamlessly, and securely inviting other users. Have them trained and working productively with the system in a self-service, fully automated way, and
  • Smart & customizable reports, dashboards, and queues that help you work better.

2. The OpenClinica 4 Toolkit is an open, modular environment for building health and research apps that guarantees high-integrity, trustworthy data and adherence to rigorous standards of quality. Developers in healthcare need to be able to easily build mobile & web-based health apps with powerful data collection, workflow, messaging, and visualization capabilities. A junior developer or a healthcare professional with some web programming background will be able to set up a development environment in 5-10 minutes, get a free developer key, and build a ‘hello world’ module in under 30 minutes. The toolkit, with a smart mix of modern web & mobile tools, the right amount of domain-specific intelligence, and guaranteed data integrity & security, can unlock immense productivity and innovation. The fundamental features OpenClinica is already built around– trustworthy data provenance, flexible data models, secure REST APIs, and high quality–are a great basis for such a toolkit.

Some key parts of the OpenClinica 4 technology have already been in use for over a year now in OpenClinica Participate. OpenClinica LLC’s engineers have been hard at work on the first four components of OC v4:  the protocol designer, the new form engine, a multi-tenant cloud architecture with single sign on, and self-service training. We will be putting these into use in trials early next year. Other components, like customizable dashboards, workflows, and improved reporting, and the toolkit will soon follow.

Over the past several months, 25 dedicated members of the OpenClinica community, with expertise in clinical trials, EDC, ePRO, data management, and community-based development, have worked in five advisory groups – EDC, Study Startup, eSource, API, and Community – providing strategic direction and ideas to shape the new OpenClinica.

Want to find out more? OC16’s opening panel will have a member of each advisory group sharing their view on where OpenClinica is going, and throughout the day we’ll have live demos, discussion, and training on OpenClinica 4. Come join us at OC16!

 

 

 

Disintermediation

“An approximate answer to the right problem is worth a good deal more than an exact answer to an approximate problem.” – John Tukey

Biopharma, governments, and the healthcare industry as a whole are grappling with what really provides healthcare value, and how to evaluate and measure it. Though not a new problem, it’s one that is now front-and-center as established models for healthcare research and evaluation are proving too slow, costly, and restrictive for today’s needs.

At the same time, easy-to-use mobile computing is everywhere in our day-to-day lives, providing the pipeline to ever more comprehensive and accessible data about our world.

How are these two things related? The relative lack of mobile technology in health research illuminates the limitations of research designs and data gathering methods from the 1970s that are still relied on today.

In field after field for the past 30 years, Internet technology has shown it can radically democratize and commoditize information, through automation and scaling with low-to-zero marginal costs. There’s a fancy term for this: disintermediation. Many in the healthcare and clinical research fields are looking toward general-purpose consumer mobile technology to disintermediate themselves from the data they seek. Direct engagement with patients using a mobile-centric, real-time approach is a big part of the way forward. This disintermediation aims for greater efficiency and improved accuracy in research. In some cases, entirely new ways of looking at problems may result from the ability to passively collect continuous data streams – the Parkinson’s mPower study is a great early example.

There’s no question population health and observational research are being transformed by the ability to use mobile technology on a wide scale: A flood of low cost wireless sensors coming to market opens possibilities for ubiquitous, passive data collection. We have an unprecedented ability to engage and capture near-real time, in some cases, continuous data at a very low burden to the participant. Online communities are empowering patients by bringing together people who share a common disease burden and providing them with the chance to interact, share knowledge, and compare experiences like never before. This often includes raising the visibility of research participation opportunities. 

The needs of interventional research are changing too. An increased emphasis by payers and providers on effectiveness requires understanding how patients and therapies work in the real world. With the ubiquity of smartphones, we can collect patient-reported measures in a far more meaningful and timely way than with paper diaries or dedicated hand-held devices, allowing us to develop the evidence needed for a value-based market. Even in pivotal randomized registration trials, we can introduce changes to help better engage and retain patients, shorten timelines with adaptive designs that rely on real-time data, and do so in a way that improves these studies’ ability to demonstrate safety and efficacy.

We’re still in the early days of this patient-centric era. Most mHealth and virtual trial systems are bespoke platforms with purpose-built apps. These custom built solutions can be coded exactly to the needs of a given project. But building custom solutions sacrifices reusability and inhibits the ability to start the next project fast with out-of-the-box, proven technology.

At OpenClinica, we are working to achieve both. We aim to combine the rigor and exactness of RCTs with a big data/population based approach that yields richer answers about how the real world works. The best way to do that is with a platform that creates a unified experience based on reusable, but customizable components. Our nearly 10 years of work in electronic data capture has taught us a lot about how to ensure data integrity and enable our customers to implement research protocols without writing code. At the same time, we’ve built a user experience for study participants that is designed from the ground up around simplicity, mobile-friendliness, and ease of use. We’ve focused on components that solve problems in a generalizable way and just work, while also providing the means to tailor the user experience and features to meet the unique needs of each study.

Every day is an exciting challenge as we and our customers learn more about the patients we both serve. Just a few months after the launch of OpenClinica Participate, we have started, or will soon begin, connecting patients in a daily diary study on nutrition, a behavioral health risk screening study, a hospital safety outcomes project, a long-term maternal & child health cohort, and two surgical device studies, including one involving photos captured directly from mobile devices. We’re rapidly making refinements to their user experience and adding features that help improve the speed, convenience, and quality of the research. With these new approaches come changes at every level: research design; privacy, ethics, and consent; data validity; regulatory compliance; and analytical models. But the potential payoff is great – we now have new abilities to ask big, important research questions that have been impossible to answer in the past.

 

The Forecast is Cloudy

GE recently announced it is moving its 9,000 supported applications to the cloud. Nowadays, all of us are bombarded with information about “the cloud”, and it can be hard to wade through the hype and hyperbole to understand the landscape in a way that helps us make decisions about our own organizations.

Enterprise cloud computing is a complex topic, and how you look at it depends on many variables. Below I try to outline one typical scenario. Your inputs, and the weight you give to different factors involved in making the decision will vary, but the general paradigm is useful across a wide variety of organizations.

In the interest of full disclosure, I am CEO of a company that sells cloud-based clinical research solutions (OpenClinica Enterprise, OpenClinica Participate). We adopted a cloud model after going through exercises similar to the ones below. Rather than reflecting bias, it demonstrates our belief that the cloud model offers the greatest combination of value for the greatest number of organizations in the clinical research market.

So… Let’s say you’re a small-to-medium size enterprise, usually defined as having under 1000 staff, and you are considering moving your eClinical software technologies to a public cloud and/or to a Software-as-a-Service (SaaS) provider.

Let’s start with the generic move of in-house (or co-located) servers and applications to public cloud environment. We’ll get to SaaS in a bit.

Economics

For this exercise, we’ll use the handy modelling tools from Intel’s thecloudcalculator.com. And we’ll assume you want to run mission-critical apps, with high levels of redundancy that eliminate single points of failure. We’ll compare setup of your own infrastructure using traditional virtualization to a similar one on cloud, based on a common set of assumptions about capacity, redundancy, and personnel.

The calculator produces detailed cost charts for both the internal (“private”) cloud and the public cloud scenarios (source: http://thecloudcalculator.com).

Wow. A 26x difference in cost. Looks pretty compelling, right? But not totally realistic – you’re probably not considering building a highly redundant in-house or co-located data center to host just a couple of apps. Either you already have one in place, or are already deploying everything to the cloud. In the latter case, you don’t need to read further.

In the former case, let’s explore the cost of adding several more applications to your existing infrastructure. What are the marginal costs of adding the same amount of computing capacity (12GB of memory, 164GB storage) on top of an existing infrastructure? We can use the same calculator to compute the change in total cost when a private cloud with 190GB of memory and 836GB of storage grows by that increment. But here it gets much trickier.

According to the calculator, our 190GB cloud costs $379,324 – the same as the 12GB cloud in the first example! Moreover, adding another 12GB of capacity pushes the cost up to $513,435, a difference of $134,111. However, if we change our assumptions and start with a 150GB cloud, then add 12GB of capacity, the marginal cost is $0.

What we’re seeing is how the IT overhead costs of running your own private cloud infrastructure tend to grow in a discrete, rather than continuous, manner, and the cost of going from one tier to the next is usually very expensive.

Our calculator makes a bunch of assumptions about the size of each server, and at what point you need to add more hardware, personnel, cooling, etc. The exact number where these thresholds lie will vary for each organization, and the numbers in the example above were picked specifically to illustrate the discrete nature of IT capacity. But the principle is correct.
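The step-function behavior is easy to model. Here is a toy sketch in Python; the tier size and tier cost are invented to echo the numbers above, not taken from the calculator:

```python
# Toy model of discrete private-cloud costs: capacity is bought in whole
# tiers (hardware, personnel, cooling), whether fully used or not.
TIER_GB = 190        # memory capacity one tier provides (invented)
TIER_COST = 134_111  # cost of standing up one more tier (invented)

def private_cloud_cost(gb_needed: int) -> int:
    """Total cost: you pay for every tier you touch, even partially."""
    tiers = -(-gb_needed // TIER_GB)  # ceiling division
    return tiers * TIER_COST

def marginal_cost(current_gb: int, extra_gb: int) -> int:
    """Cost of adding extra_gb on top of an existing deployment."""
    return private_cloud_cost(current_gb + extra_gb) - private_cloud_cost(current_gb)

print(marginal_cost(150, 12))  # fits inside the current tier -> 0
print(marginal_cost(190, 12))  # crosses a tier boundary -> a whole new tier
```

A public cloud provider, by contrast, charges per incremental unit consumed, so its marginal cost curve stays smooth rather than jumping at tier boundaries.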

Large cloud providers, on the other hand, mask the step-wise and sunk capital costs from customers by only charging for each incremental unit of computing actually in use. Because these providers operate at a huge scale, they are able to always ensure excess supply and they can amortize their fixed and step-wise costs over a large number of customers.

The examples above show that the actual costs of a public cloud deployment are likely to be significantly lower than those of building or adding to a comparable private cloud. While there’s no guarantee that your public cloud cost will be less than in-house or colocated, market prices for cloud computing continue to become more competitive as the industry scales up.

What is certain however, is that flexibility of the public cloud model eliminates the need for long-term IT capital budget planning and ensures that a project won’t be subject to delays due to hardware procurement pipelines or data center capacity issues. In most cases it can also reduce burden on IT personnel.

Qualitative Advantages

The central promise of the cloud is a fundamental difference in the ability to run at scale. You can deploy a world class, massively scaled infrastructure even for your first proof-of-concept without risking millions of dollars on equipment and personnel. When Amazon launched the S3 cloud service in 2006, its headline was “Amazon S3 enables any developer to leverage Amazon’s own benefits of massive scale with no up-front investment or performance compromises”.

It is a materially different approach to IT, one that enables tremendous flexibility, velocity, and transparency without sacrificing reliability or scalability. As Lance Weaver, Chief Technology Officer for Cloud at GE Corporate, puts it, “People will naturally gravitate to high value, frictionless services”. The global scale, pay-as-you-go pricing models, and instantaneous elasticity offered by major public cloud providers are unlike anything in the technology field since the dawn of the Internet. If GE can’t match the speed, security, and flexibility of leading public cloud providers, how can you?

What You Give Up

At some level, when moving to the cloud you do give up a measure of direct control. Your company’s employees no longer have their hands directly on the raw iron powering your applications. However, the increased responsiveness, speed, and agility enabled by the cloud model gives you far more practical control than the largely theoretical advantages of such hands-on ownership. In a competitive world, we already outsource the generation of electrical power, banking, the delivery of clean, potable water, and access to global communications networks like the Internet. Increasingly, the arguments for the cloud look similar, with the added benefits of continuous, rapid improvement and falling prices.

Encryption technologies and local backup options make it possible to protect and archive your data in a way that gives you and your stakeholders increased peace-of-mind, so make sure these are incorporated into your strategy.

Risk Reduction

The model above is based on the broad economics of the cloud. However, there are other, more intangible requirements that must be met before a change can be made. You’ll want to evaluate a solution carefully to ensure that it has the features you need and is fit for purpose, and that the provider you choose gives you transparency into the security, reliability, and quality of their infrastructure and processes. Make sure that data ownership and level of access are clear and meet your requirements. Ensure you have procedures and controls in place for security, change control, and transparency/compliance; these would be required for an in-house IT or private cloud deployment as well. One benefit of public cloud providers in this area is that many offer capabilities certified or audited against recognized standards, such as ISO 27001, SSAE 16, ISAE 3402, and even FISMA. Some will also sign HIPAA Business Associate Agreements (BAAs) as part of their service. Adherence to these standards may be part of the entry-level offering, though sometimes it is only available in a higher-end package. Be sure to research and select a solution that meets your needs.

External Factors

No matter who you are, you are beholden to other stakeholders in some way. Here are a couple of areas to pay attention to:

  • Regulation – Related to risk reduction, you want controls in place that adhere to relevant policies and regulations. In clinical research, frameworks such as ICH Good Clinical Practice and their principles of Computer System Validation (CSV) are widely accepted and well understood, and contain nothing that is a barrier to deploying a well-designed cloud solution with appropriate due diligence. You may also have to consider national health data regulations such as HIPAA or the EU’s privacy protections. Consider whether data is de-identified, and at what level, to map out the landscape of requirements you’ll have to deal with.
  • Data Storage – A given project or group may be told that the sponsor, payer, institution, or regulatory authority requires in-house or in-country storage of data. Sometimes this is explicitly part of a policy or guideline, but just as often it is a perceived requirement, because “that’s the way we’ve always done it”. If there is wiggle room, consider whether it is worth fighting to be the exception (more and more often, the answer is yes). Gauge stakeholders such as your IT department, who nowadays are often overburdened and happy to “outsource” the next project, provided good controls and practices are in place.
  • Culture – A famous saying, attributed to management guru Peter Drucker, is that “culture eats strategy for breakfast, every time”. Putting the necessary support for change in place, both within your organization and with external stakeholders, is important. The embrace of the cloud at GE and in the broader economy helps. Hopefully this article helps too :-). And starting small (something inherently more possible with the cloud) can help you demonstrate value and convince others when it’s time to scale.

SaaS

SaaS (Software-as-a-Service) is closely tied to the cloud, and often confused with it. A SaaS offering is inherently cloud-based, but the provider manages the details all the way up to the level of the application. SaaS solutions are typically sold with little or no up-front cost and a monthly or yearly subscription based on usage or tier of service.

[Image: the SaaS, PaaS, and IaaS layers of the cloud stack]

Source: http://venturebeat.com/2011/11/14/cloud-iaas-paas-saas/

When you subscribe to a SaaS application, your solution provider handles the cloud stuff, and you get:

  • a URL
  • login credentials
  • the ability to do work right away


A few years ago, you typically had to balance this advantage (no IT headaches and delays) against a less comprehensive feature set. As relatively new entrants to the market, SaaS platforms didn’t yet have all the coverage of legacy systems that had been around for years, or in some cases decades. However, the landscape has changed. A SaaS provider makes its solution work well on just one, uniform environment, so it can focus more of its resources on rapidly building and deploying high-quality features and a high-quality user experience. The result is far more parity: most SaaS solutions have caught up with legacy technologies and are now outpacing them in improvements to user experience, reliability, and features. Legacy providers, meanwhile, have to spend more and more resources dealing with a complex tangle of variations in technology stack, network configuration, and IT administration at each customer site.


Furthermore, the modern SaaS provider can reduce, rather than increase, vendor lock-in. Technology market forces demand that interoperability be designed into solutions from the ground up. Modern SaaS architectures built on microservice APIs mean your data and content are likely to be far more accessible, both to users and to other software systems, than when locked in a legacy relational database.

The SaaS provider can focus on solving the business problems of its customers, using increasingly powerful cloud infrastructure and DevOps technologies to automate the rest in the background in a way that just works. These advantages are passed along to the customer as continuous product improvements and the flexibility to scale up and down as needed, without major capital commitments.

Conclusion

YMMV, but cloud and SaaS are powerful phenomena changing the way we live and work. In a competitive environment, they can help you move faster and lower costs by making IT headaches and delays a thing of the past.


Engage. Learn. Repeat.

At OpenClinica we are driven to reduce obstacles to the advancement of medical research. The OpenClinica open source project started because EDC was too complex, too inaccessible, and too expensive. Not to mention far too difficult to evaluate and improve. So we built an EDC / CDMS platform and released it under an open source license. It is now the world’s most widely used open source EDC system and has an active, growing user community.


As the user base grew, we listened to users and understood that integration and interoperability were another major obstacle. While we don’t claim to have fully cracked that nut yet, OpenClinica’s CDISC ODM-based APIs have been widely adopted and have helped drive some significant innovations. These APIs have been improved upon by a large number of developers in the few years they have been part of the codebase.
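One practical consequence of building on CDISC ODM is that exported data is plain, self-describing XML that any tool can consume with a few lines of standard-library code. As a rough illustration (the study, form, and item OIDs below are made up for the example, not taken from a real OpenClinica study), this Python snippet flattens the ItemData elements of a minimal ODM 1.3 fragment into a simple dictionary:

```python
import xml.etree.ElementTree as ET

# A minimal, hand-written CDISC ODM 1.3 fragment; all OIDs and values
# here are hypothetical, chosen only to show the nesting structure.
ODM_SNIPPET = """\
<ODM xmlns="http://www.cdisc.org/ns/odm/v1.3">
  <ClinicalData StudyOID="S_DEMO" MetaDataVersionOID="v1">
    <SubjectData SubjectKey="SS_001">
      <StudyEventData StudyEventOID="SE_VISIT1">
        <FormData FormOID="F_VITALS">
          <ItemGroupData ItemGroupOID="IG_VITALS">
            <ItemData ItemOID="I_HEIGHT" Value="172"/>
            <ItemData ItemOID="I_WEIGHT" Value="68.5"/>
          </ItemGroupData>
        </FormData>
      </StudyEventData>
    </SubjectData>
  </ClinicalData>
</ODM>
"""

# Namespace prefix mapping for ElementTree's find/iterfind calls.
NS = {"odm": "http://www.cdisc.org/ns/odm/v1.3"}

def extract_items(odm_xml: str) -> dict:
    """Flatten every ItemData element into an {ItemOID: Value} mapping."""
    root = ET.fromstring(odm_xml)
    return {
        item.get("ItemOID"): item.get("Value")
        for item in root.iterfind(".//odm:ItemData", NS)
    }

print(extract_items(ODM_SNIPPET))
# prints {'I_HEIGHT': '172', 'I_WEIGHT': '68.5'}
```

The same approach works against a full ODM export, which is part of what makes a standards-based format attractive for integration: the data is never trapped behind a proprietary schema.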


As we continue to improve the clinical and researcher experience, our attention has more recently been directed to the experience of trial participants. The difficulty of meaningful, timely engagement with these volunteers also strikes us as an obstacle to successful research. We live in a world where 90% of American adults have mobile phones, 81% text, and 63% use their phones to go online (Pew), and even older age groups are adopting smartphones at a rapid pace [1]. Because of this, we think that mobile technology could be an effective means to engage participants in research more meaningfully.


Why is this important? Treating research volunteers as participants, as opposed to subjects, can lead to concrete benefits – improving participation, motivation, and adherence. Increasing your ability to meet recruitment goals, budget, and completion timelines. Getting more complete, timely data. Even enabling new protocol designs that better target populations and/or more closely align with real-world use. But most of all, it just seems like the right thing to do. As one HIV trial participant put it, “I’d initially had this nagging fear in my head, that, once recruited, I would cease to be nothing more than a patient number – a series of digits, test results and charts in a file – which is quite a daunting prospect when you’re not entirely sure how your body is going to respond to the vaccine. This could not have been further from the reality of the trial. I felt safe, informed and valued at every stage of the trial.”


The great (and often unrecognized) news is that so many of the people involved in research and care already do an unbelievable job creating this type of engagement – making participants feel safe, informed, and valued. But it takes a lot of work. With a mobile-enabled, real-time solution like OpenClinica Participate, you can provide an engagement channel and data capture experience that is simple, elegant, and easy to use on any device. Because it is fully integrated with OpenClinica and captures data in a regulatory-compliant manner, it can reduce the time and headache your research team spends on, for instance, merging disparate sources of data and keying in paper reports – leaving you more time to focus on the kind of human-to-human engagement that technology cannot replace.
[1] For the over-55 age group, most likely to participate in many types of trials, the picture is a bit different. As of 2013, around 80% had a mobile phone, but only 37% of those were smartphones. However, over-55s are the fastest-growing group of smartphone adopters, expected by Deloitte to soon reach 50% and to reach parity with other age groups by 2020.
See http://www2.deloitte.com/content/dam/Deloitte/global/Documents/Technology-Media-Telecommunications/gx-tmt-2014prediction-smartphone.pdf. Outside of the developed world, the picture is different, though the opposite of what you might expect. According to Donna Malvey, PhD, co-author of mHealth: Transforming Healthcare, cell phones are even more pervasive, and mHealth “apps are the difference between life and death. If you’re in Africa and you have a sick baby, mHealth apps enable you to get healthcare you would normally not have access to… In China and India, in particular, mobile apps can bring healthcare to rural areas.”