OC16: Call for Abstracts!

OC16 is coming to you this fall at the Harvard Medical School New Research Building–save the date!

Monday, Oct 3rd: Sessions & Demos
Tuesday, Oct 4th: Workshops
Wed–Fri, Oct 5th–7th: Optional OpenClinica Super User Training

The 2016 OpenClinica Global Conference is a unique opportunity to meet with and learn from OpenClinica users around the globe while discussing OpenClinica’s impact on the future of clinical data. Join us for a high-impact event in the heart of Boston’s medical research area to collaborate, integrate, and participate at all levels with the OpenClinica community.

The two-day conference will showcase new developments in OpenClinica’s architecture, present case studies, illustrate exciting new modules, and much more.

Are there integrations, strategies, or techniques you’d like to share? Connect with the community and demonstrate your work at OC16!

The submission deadline is July 15, so get your abstract in now! For more information, visit the OC16 conference page.


Disintermediation

“An approximate answer to the right problem is worth a good deal more than an exact answer to an approximate problem.” – John Tukey

Biopharma, governments, and the healthcare industry as a whole are grappling with what really provides healthcare value, and how to evaluate and measure it. Though not a new problem, it’s one that is now front-and-center as established models for healthcare research and evaluation are proving too slow, costly, and restrictive for today’s needs.

At the same time, easy-to-use mobile computing is everywhere in our day-to-day lives, providing the pipeline to ever more comprehensive and accessible data about our world.

How are these two things related? The relative lack of mobile technology in health research illuminates the limitations of research designs and data gathering methods from the 1970s that are still relied on today.

In field after field for the past 30 years, Internet technology has shown it can radically democratize and commoditize information, through automation and scaling with low-to-zero marginal costs. There’s a fancy term for this: disintermediation. Many in the healthcare and clinical research fields are looking toward general-purpose consumer mobile technology to disintermediate themselves from the data they seek. Direct engagement with patients using a mobile-centric, real-time approach is a big part of the way forward. This disintermediation aims for greater efficiency and improved accuracy in research. In some cases, entirely new ways of looking at problems may result from the ability to passively collect continuous data streams – the Parkinson’s mPower study is a great early example.

There’s no question population health and observational research are being transformed by the ability to use mobile technology on a wide scale. A flood of low-cost wireless sensors coming to market opens possibilities for ubiquitous, passive data collection, and we have an unprecedented ability to engage participants and capture near-real-time, in some cases continuous, data at very low burden to them. Online communities are empowering patients by bringing together people who share a common disease burden and giving them the chance to interact, share knowledge, and compare experiences like never before. This often includes raising the visibility of research participation opportunities.

The needs of interventional research are changing too. An increased emphasis by payers and providers on effectiveness requires understanding how patients and therapies work in the real world. With the ubiquity of smartphones, we can collect patient-reported measures in a far more meaningful and timely way than with paper diaries or dedicated hand-held devices, allowing us to develop the evidence needed for a value-based market. Even in pivotal randomized registration trials, we can introduce changes to help better engage and retain patients, shorten timelines with adaptive designs that rely on real-time data, and do so in a way that improves these studies’ ability to demonstrate safety and efficacy.

We’re still in the early days of this patient-centric era. Most mHealth and virtual trial systems are bespoke platforms with purpose-built apps. These custom-built solutions can be coded exactly to the needs of a given project, but they sacrifice reusability and inhibit the ability to start the next project quickly with out-of-the-box, proven technology.

At OpenClinica, we are working to achieve both. We aim to combine the rigor and exactness of RCTs with a big data/population-based approach that yields richer answers about how the real world works. The best way to do that is with a platform that creates a unified experience based on reusable, but customizable, components. Our nearly 10 years of work in electronic data capture has taught us a lot about how to ensure data integrity and enable our customers to implement research protocols without writing code. At the same time, we’ve built a user experience for study participants that is designed from the ground up around simplicity, mobile-friendliness, and ease of use. We’ve focused on components that solve problems in a generalizable way and just work, while also providing the means to tailor the user experience and features to meet the unique needs of each study.

Every day is an exciting challenge as we and our customers learn more about the patients we both serve. Just a few months after the launch of OpenClinica Participate, we have started, or will soon begin, connecting patients in a daily diary study on nutrition, a behavioral health risk screening study, a hospital safety outcomes project, a long-term maternal & child health cohort, and two surgical device studies, including one involving photos captured directly from mobile devices. We’re rapidly making refinements to the user experience and adding features that help improve the speed, convenience, and quality of the research. With these new approaches come changes at every level: research design; privacy, ethics, and consent; data validity; regulatory compliance; and analytical models. But the potential payoff is great – we now have new abilities to ask big, important research questions that have been impossible to answer in the past.


The Forecast is Cloudy

GE recently announced it is moving its 9,000 supported applications to the cloud. Nowadays, all of us are bombarded with information about “the cloud”, and it can be hard to wade through the hype and hyperbole to understand the landscape in a way that helps us make decisions about our own organizations.

Enterprise cloud computing is a complex topic, and how you look at it depends on many variables. Below I try to outline one typical scenario. Your inputs, and the weight you give to different factors in making the decision, will vary, but the general paradigm is useful across a wide variety of organizations.

In the interest of full disclosure, I am CEO of a company that sells cloud-based clinical research solutions (OpenClinica Enterprise, OpenClinica Participate). We adopted a cloud model after going through exercises similar to the ones below. Rather than reflecting bias, this choice demonstrates our belief that the cloud model offers the greatest combination of value for the greatest number of organizations in the clinical research market.

So… Let’s say you’re a small-to-medium-sized enterprise, usually defined as having fewer than 1,000 staff, and you are considering moving your eClinical software technologies to a public cloud and/or to a Software-as-a-Service (SaaS) provider.

Let’s start with the generic move of in-house (or co-located) servers and applications to a public cloud environment. We’ll get to SaaS in a bit.

Economics

For this exercise, we’ll use the handy modeling tools from Intel’s thecloudcalculator.com. And we’ll assume you want to run mission-critical apps, with high levels of redundancy that eliminate single points of failure. We’ll compare a setup of your own infrastructure using traditional virtualization to a similar one in the cloud, based on certain assumptions about capacity and redundancy.

The results for an internal, or “private” cloud are:

[Screenshot: private cloud cost estimate from thecloudcalculator.com]

The public cloud looks as follows:

[Screenshots: public cloud cost estimate from thecloudcalculator.com]

Source: http://thecloudcalculator.com

Wow. A 26x difference in cost. Looks pretty compelling, right? But not totally realistic – you’re probably not considering building a highly redundant in-house or co-located data center to host just a couple of apps. Either you already have one in place, or are already deploying everything to the cloud. In the latter case, you don’t need to read further.

In the former case, let’s explore the cost of adding several more applications to your existing infrastructure. What are the marginal costs of adding the same amount of computing capacity (12GB of memory, 164GB storage) on top of an existing infrastructure? We can use the same calculator to compute the total cost of a private cloud with 190GB of memory and 836GB of storage, then see what adding that capacity on top of it costs. But here it gets much trickier.

According to the calculator, our 190GB cloud costs $379,324 – the same as the 12GB cloud in the first example! Moreover, adding another 12GB of capacity pushes the cost up to $513,435, a difference of $134,111. However, if we change our assumptions and start with a 150GB cloud, then add 12GB of capacity, the marginal cost is $0.

What we’re seeing is how the IT overhead costs of running your own private cloud infrastructure tend to grow in a discrete, rather than continuous, manner, and the cost of going from one tier to the next is usually very expensive.
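To make that concrete with a purely hypothetical example: if your existing racks, licenses, and admin staff can absorb one more application, its marginal cost may be near zero; but if that same application pushes you past a capacity threshold, you may suddenly need another server, more rack space and cooling, and perhaps another administrator, all at once.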

Our calculator makes a bunch of assumptions about the size of each server, and at what point you need to add more hardware, personnel, cooling, etc. Exactly where these thresholds lie will vary for each organization, and the numbers in the example above were picked specifically to illustrate the discrete nature of IT capacity. But the principle is correct.

Large cloud providers, on the other hand, mask the step-wise and sunk capital costs from customers by only charging for each incremental unit of computing actually in use. Because these providers operate at a huge scale, they are able to always ensure excess supply and they can amortize their fixed and step-wise costs over a large number of customers.

The examples above show that the actual costs of a public cloud deployment are likely to be significantly lower than those of building or adding to a comparable private cloud. While there’s no guarantee that your public cloud cost will be less than in-house or colocated, market prices for cloud computing continue to become more competitive as the industry scales up.

What is certain, however, is that the flexibility of the public cloud model eliminates the need for long-term IT capital budget planning and ensures that a project won’t be subject to delays due to hardware procurement pipelines or data center capacity issues. In most cases it can also reduce the burden on IT personnel.

Qualitative Advantages

The central promise of the cloud is a fundamental difference in the ability to run at scale. You can deploy a world class, massively scaled infrastructure even for your first proof-of-concept without risking millions of dollars on equipment and personnel. When Amazon launched the S3 cloud service in 2006, its headline was “Amazon S3 enables any developer to leverage Amazon’s own benefits of massive scale with no up-front investment or performance compromises”.

It is a materially different approach to IT that enables tremendous flexibility, velocity, and transparency, without sacrificing reliability or scalability. As Lance Weaver, Chief Technology Officer for Cloud at GE Corporate, identifies, “People will naturally gravitate to high value, frictionless services”. The global scale, pay-as-you-go pricing models, and instantaneous elasticity offered by major public cloud providers are unlike anything in the technology field since the dawn of the Internet. If GE can’t match the speed, security, and flexibility of leading public cloud providers, how can you?

What You Give Up

At some level, when moving to the cloud you do give up a measure of direct control. Your company’s employees no longer have their hands directly on the raw iron powering your applications. However, the increased responsiveness, speed, and agility enabled by the cloud model gives you far more practical control than the largely theoretical advantages of such hands-on ownership. In a competitive world, we outsource generation of electrical power, banking, delivery of clean, potable water, and access to global communications networks like the Internet. Increasingly, arguments for the cloud look similar, with the added benefits of continuous, rapid improvements and falling prices.

Encryption technologies and local backup options make it possible to protect and archive your data in a way that gives you and your stakeholders increased peace-of-mind, so make sure these are incorporated into your strategy.

Risk Reduction

The model above is based on the broad economics of the cloud. However, there are other, more intangible requirements that must be met before a change can be made. You’ll want to carefully evaluate a solution to ensure it has the features you need and is fit for purpose, and that the provider you choose gives you transparency into the security, reliability, and quality of its infrastructure and processes. Make sure that data ownership and level of access are clear and meet your requirements. Ensure you have procedures and controls in place for security, change control, and transparency/compliance; these would be required controls for in-house IT or a private cloud as well. One benefit of public cloud providers in this area is that many of them offer capabilities that are certified or audited against recognized standards, such as ISO 27001, SSAE16, ISAE 3402, and even FISMA. Some will also sign HIPAA Business Associate Agreements (BAAs) as part of their service. Adherence to these standards may be part of the entry-level offering, though sometimes it is only available as part of a higher-end package. Be sure to research and select a solution that meets your needs.

External Factors

No matter who you are, you are beholden to other stakeholders in some way. Here are a few areas to pay attention to:

  • Regulation – Related to risk reduction, you want to have controls in place that adhere to relevant policies and regulations. In clinical research, frameworks such as ICH Good Clinical Practice and their principles of Computer System Validation (CSV) are widely accepted, well understood, and contain nothing that is a barrier to deploying a well-designed cloud with the appropriate due diligence. You may also have to consider national health data regulations such as HIPAA or EU privacy protections. Consider if data is de-identified or not, and at what level, to map out the landscape of requirements you’ll have to deal with.
  • Data Storage – A given project or group may be told that the sponsor, payer, institution, or regulatory authority requires in-house or in-country storage of data. Sometimes this is explicitly part of a policy or guideline, but just as often it is more of a perceived requirement, because “that’s the way we’ve always done it”. If there is wiggle room, think about whether it is worth fighting to be the exception (more and more often, the answer is yes). Gauge stakeholders such as your IT department, who nowadays are often overburdened and happy to “outsource” the next project, provided good controls and practices are in place.
  • Culture – A famous saying, attributed to management guru Peter Drucker, is that “Culture eats strategy for breakfast, every time”. Putting the necessary support in place for change in your organization and with external stakeholders is important. The embrace of cloud at GE and in the broader economy helps. Hopefully this article helps :-). And starting small (something inherently more possible with the cloud) can help you demonstrate value and convince others when it’s time to scale.

SaaS

SaaS (Software-as-a-Service) is closely tied to cloud, and often confused with it. It is inherently cloud-based, but the provider manages the details all the way up to the level of the application. SaaS solutions are typically sold with little or no up-front cost and a monthly or yearly subscription based on usage or tier of service.

[Diagram: the IaaS, PaaS, and SaaS layers of the cloud stack]

Source: http://venturebeat.com/2011/11/14/cloud-iaas-paas-saas/

When you subscribe to a SaaS application, your solution provider handles the cloud stuff, and you get:

  • a URL
  • login credentials
  • the ability to do work right away


A few years ago, you typically had to balance this advantage (lack of IT headaches and delays) against the lack of a comprehensive feature set. As relatively new entrants to the market, SaaS platforms didn’t yet have all the coverage of legacy systems that had been around for years, or in some cases decades. However, the landscape has changed. A SaaS provider is focused on making its solution work great in just one, uniform environment, so it can devote more of its resources to rapidly building and deploying high-quality features and a high-quality user experience. The result is far more parity: most SaaS solutions have caught up, and are outpacing legacy technologies in the pace of improvements to user experience, reliability, and features. Legacy providers, meanwhile, have to spend more and more resources dealing with a complex tangle of variations in technology stack, network configuration, and IT administration at each customer site.


Furthermore, the modern SaaS provider can reduce, rather than increase, vendor lock-in. Technology market forces demand that interoperability be designed into solutions from the ground up. Popular SaaS patterns such as microservice APIs mean your data and content are likely to be far more accessible, both to users and to other software systems, than when locked in a legacy relational database.

The SaaS provider has the ability to focus on solving the business problems of its customers, and increasingly powerful cloud infrastructure and DevOps technologies to automate the rest in the background in a way that just works. These advantages get passed along to the customer as continuous product improvements and the flexibility to scale up and down as needed, without major capital commitments.

Conclusion

YMMV, but cloud & SaaS are powerful phenomena changing the way we live and work. In a competitive environment, they can help you move faster and lower costs, by making IT headaches and delays a thing of the past.


2015 Future of Open Source Survey Results

Open source software has emerged as the driving force of technology innovation, from cloud and big data to social media and mobile. The Future of Open Source Survey is an annual assessment of open source industry trends that drives broad industry discussion around key issues for new and established software-related organizations and the open source community.

The results from the 2015 Future of Open Source Survey reflect the increasing adoption of open source and highlight the abundance of organizations participating in the open source community. Open source continues to speed innovation, disrupt industries, and improve productivity; however, a reported lack of formal company policies and processes around its consumption points to a need for OSS management and security practices to catch up with this growth in investment and use.

Check out the slides below for survey results.

Reducing friction in patient engagement: an (unconventional) case study

Our quest for frictionless electronic patient-reported outcomes (ePRO) data capture has us looking for novel ways to engage patients and streamline processes. I’d like to share a fun and interesting example of this work, in which we used Participate (the OpenClinica ePRO solution) to engage study subjects at the recent SCDM annual conference.

Our goal at the SCDM conference was to get as many attendees as possible to try OpenClinica Participate. With the vast array of vendors, eye candy, and giveaways, it’s a big challenge to cut through the noise and offer a simple, fun way to engage an audience. The same holds true when engaging patients. With the enormous number of daily distractions, ensuring that your patients can quickly access, fill out, and submit well-constructed, simple forms is key to compliance and, ultimately, better data.

I built the form in OpenClinica and enabled access to it via a custom URL, a new feature in our latest release.

Attendees filled out the form, which was sprinkled with fun health habit questions, and provided information that allowed us to draw their names to win Fitbits and other giveaways. We were able to use this data to update our graphs, giving each participant a view of how they stacked up against their peers—cool!

Imagine if patients could view a visual representation of the study they are enrolled in – see the parallels and possibilities?

[Image: graphs of the conference survey results]

Who says ePRO and patient engagement can’t be fun?

HTML Tips to Enhance Your eCRF

In some cases, the display of your OpenClinica eCRF may not be exactly what you had in mind. You may want to highlight key words or phrases, create a bullet point list, or insert a URL or image. Using HTML tags, you can make some simple manipulations to change the look and feel of your case report forms and make them more inviting for data entry.

Using HTML tags to enhance your eCRF

The HTML tags described in this document can be used in the following columns in the CRF Excel template:

  • Items Tab: LEFT_ITEM_TEXT
  • Items Tab: RIGHT_ITEM_TEXT
  • Items Tab: HEADER
  • Items Tab: SUBHEADER
  • Sections Tab: INSTRUCTIONS

What are HTML tags?

HTML (HyperText Markup Language) is a markup language commonly used for web page development. HTML is written using “tags” that surround text or elements. These tags typically come in pairs, with a start tag and an end tag:

<start tag>Text to format</end tag>

To insert an HTML tag, simply surround the text you want to format with the desired tag. Below are the HTML tags that work in OpenClinica:

[Table: HTML tags supported in OpenClinica]
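The full list from the table above isn’t reproduced here, but as a rough sketch – assuming the common formatting tags for bold, italic, and line breaks are among those supported – LEFT_ITEM_TEXT like the following would render with emphasis and a line break:

Record the subject's <b>fasting</b> glucose.<br>Report the value in <i>mg/dL</i>.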

You can download this HTML Tags Knowledge Article to help you to get started.

Inserting URLs and Images

HTML also allows you to insert a URL or Image into your CRF, which may be used to provide users with additional information or references.

Insert a URL

A URL may be inserted into a CRF in order to provide a link to further instructions or protocol information. To insert a URL into your CRF, use the following format:

<a href="your URL" target="_blank">Text to Display</a>

Simply replace the placeholders with (a) your URL (inside the quotation marks) and (b) the hyperlinked text that you want to display to the user.

The following example will prompt the user to “Click Here!” and will open the OpenClinica website in a new browser tab:

<a href="https://www.openclinica.com" target="_blank">Click Here!</a>


Insert an Image

Similarly, HTML can be used to insert an image into your CRF. You might consider using an image to display a pain scale (or other reference image), or even to display your company’s logo.


To insert an image into your CRF, use the following format:

<img src="images/ImageName">

Again, simply replace ImageName with the name of your image file. You can use PNG, JPG, or GIF image extensions. You can control the height and width of the image using the following format:

<img src="images/ImageName" width="n" height="n">

Each n corresponds to the desired width or height of the image in pixels.

The following example will insert an image (image1.png) with a width of 300 and a height of 150:

<img src="images/image1.png" width="300" height="150">
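One practical caution: attribute values must use straight quotation marks ("), not the curly “smart quotes” that Excel and word processors often insert automatically; browsers will not reliably parse attributes wrapped in curly quotes, so the link or image may fail to render.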

You can download this Images & URLs Example CRF to help you practice.

The examples included in the above CRF Excel template will insert an image that already exists in the images directory of your OpenClinica application. To insert a custom image, community users will need to place the image in the following directory of the OpenClinica application:

tomcat/webapps/OpenClinica/images

OpenClinica Enterprise customers can request an image be placed on the application server by reaching out to the OpenClinica Enterprise Support team via the Issue Tracker.

Have you used HTML in your CRFs? Let us know if you have any other suggestions or tips!


IMPORTANT NOTES:

  • The RESPONSE_OPTIONS_TEXT field is not included in the list above, as HTML tags are currently not supported for response options.
  • The QUESTION_NUMBER field will display the text properly, but has been known to cause issues when extracting data. Therefore, HTML should not be used in this column.

Calculating ROI for ePRO

I recently delivered a webinar titled “Getting Started with eCOA/ePRO,” in which roughly a third of attendees polled cited expense as the number one reason preventing them from adopting an ePRO solution. So what does ePRO really cost? Is it worth it? Here I strive to provide a basic, high-level framework for thinking about the return on investment (ROI) of eCOA vs. paper.

Let’s start by taking a look at the costs that are unique to each approach.

Paper

In a traditional paper-based model, you incur costs that stem from printing, mailing, data entry, and data cleaning. These are all expenses that can be estimated with a fair degree of accuracy, with the cost of data entry being the most significant. To estimate the cost of data entry, measure how long it takes to key in a subject-completed paper casebook, then multiply this by your cost of labor (don’t forget to include overhead!).
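For example, using purely hypothetical numbers: if keying in one casebook takes 3 hours at a fully loaded labor cost of $40/hour, a 200-subject study implies roughly 3 × $40 × 200 = $24,000 for data entry alone, before printing, mailing, and cleaning costs.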

ePRO

The cost side for ePRO is similarly straightforward, but the expense elements are different. You’re either building an ePRO system (which will almost certainly carry a highly unpredictable cost) or buying one (a much more predictable cost). Assuming you’re buying, here are the costs you may expect to incur:

· License
· Hosting
· Training and support
· Professional services (e.g. study configuration)
· Devices

You should evaluate whether your study and selected ePRO system will allow patients to use their own devices, or whether you will need to provision devices (or some mix thereof). The cost of provisioning devices, especially for a global study, can be significant—in addition to the cost of the devices themselves, you will need to consider the costs of data plans and the logistics of supporting the devices. I’m a big fan of BYOD (bring your own device) but, depending on the study, it may not be feasible while maintaining the scientific validity of the data collected.

Once you’ve mapped out your costs of each route, you can begin to weigh these against the benefits of going eCOA.


Paper vs. eCOA

When you boil it down, people employ ePRO/eCOA to maximize data quality, increase productivity, and/or enable new capabilities that help answer their research questions. ePRO is e-source, so you don’t have to worry about administering a paper data entry process. Depending on the study, the cost savings from this alone might justify ePRO.

There are some additional benefits ePRO offers over paper that may be harder to quantify, but are nonetheless very real. For example, there are clear data quality benefits: the electronic system can ensure a minimum standard of data quality through edit checks and enforced data structures. ePRO data will always be cleaner than the same data captured on paper.

Benefits and Motivations for eCOA


The use of an ePRO system also allows you to know for sure when the data were recorded. Patients can be reminded automatically when their diaries are overdue, and not only do you have much stronger assurances than with paper that data were collected at the appropriate time, you can also more easily monitor study progress.

Bypassing manual data entry and having the system notify subjects so that data are captured in a timely way can allow for faster and better in-study decision making, and may even accelerate study closeout. There is also increasing evidence that mobile-based messaging and communication strategies help increase patient engagement and treatment adherence. And of course, not having to deal with a stack of paper during a site visit might allow the clinician’s interaction with the patient to be of higher quality.

Quantifying the benefits of all of these things can be tough, but start with those that are most quantifiable and see if those items alone provide a compelling ROI (in my experience they often do). Then the less tangible benefits become gravy to the ROI argument. When modeling costs over time and a payback period, keep in mind that ePRO will typically carry a higher upfront cost than paper, with the cost-saving benefits realized downstream over time. With today’s technologies, even most smaller studies should be able to realize a positive payback.

Naturally, there may be additional ROI factors to consider which are specific to your situation. If you have particular thoughts, questions, or experiences on this topic I encourage you to add a comment to this post.

Engage. Learn. Repeat.

At OpenClinica we are driven to reduce obstacles to the advancement of medical research. The OpenClinica open source project started because EDC was too complex, too inaccessible, and too expensive. Not to mention far too difficult to evaluate and improve. So we built an EDC / CDMS platform and released it under an open source license. It is now the world’s most widely used open source EDC system and has an active, growing user community.


As the user base grew, we listened to users and understood that integration and interoperability were another major obstacle. While we don’t claim to have fully cracked that nut yet, OpenClinica’s CDISC ODM-based APIs have been pretty widely adopted and helped to drive some significant innovations. These APIs have been improved upon by a large number of developers in the few years they have been part of the codebase.


As we continue to improve the clinical and researcher experience, our attention has more recently been directed to the experience of trial participants. The difficulty of meaningful, timely engagement with these volunteers also strikes us as an obstacle to successful research. We live in a world where 90% of American adults have mobile phones, 81% text, and 63% use their phone to go online (Pew), and even older age groups are adopting smartphones at a rapid pace [1]. Because of this, we think that mobile technology could be a pretty effective means to help more meaningfully engage participants in research.


Why is this important? Treating research volunteers as participants, as opposed to subjects, can lead to concrete benefits – improving participation, motivation, and adherence. Increasing your ability to meet recruitment goals, budget, and completion timelines. Getting more complete, timely data. Even enabling new protocol designs that better target populations and/or more closely align with real-world use. But most of all, it just seems like the right thing to do. As one HIV trial participant put it, “I’d initially had this nagging fear in my head, that, once recruited, I would cease to be nothing more than a patient number – a series of digits, test results and charts in a file – which is quite a daunting prospect when you’re not entirely sure how your body is going to respond to the vaccine. This could not have been further from the reality of the trial. I felt safe, informed and valued at every stage of the trial.”


The great (and often unrecognized) news is that so many of the people involved in research and care already do an unbelievable job creating this type of engagement – making participants feel safe, informed, and valued. But it takes a lot of work. With a mobile-enabled, real-time solution like OpenClinica Participate, you can provide an engagement channel and data capture experience that is simple, elegant, and easy to use on any device. Because it is fully integrated with OpenClinica and captures data in a regulatory-compliant manner, you can spare your research team the time and headache of, for instance, merging disparate sources of data and keying in paper reports – leaving more time to focus on the kind of human-to-human engagement that technology cannot replace.
[1] For the over-55 age group, most likely to participate in many types of trials, the picture is a bit different. As of 2013, around 80% had mobile phones, but only 37% of those were smartphones. However, over-55s are the fastest-growing group of smartphone adopters, expected by Deloitte to soon reach 50% and to reach parity with other age groups by 2020. See http://www2.deloitte.com/content/dam/Deloitte/global/Documents/Technology-Media-Telecommunications/gx-tmt-2014prediction-smartphone.pdf. Outside of the developed world, the picture is different, though the opposite of what you might expect. According to Donna Malvey, PhD, co-author of mHealth: Transforming Healthcare, cell phones are even more pervasive, and mHealth “apps are the difference between life and death. If you’re in Africa and you have a sick baby, mHealth apps enable you to get healthcare you would normally not have access to… In China and India, in particular, mobile apps can bring healthcare to rural areas.”


OC15 in Review

Two weeks ago, members of the OpenClinica community converged on the Lloyd Hotel in Amsterdam for the 2015 OpenClinica Global Conference. The event was highlighted by a keynote from John Wilbanks, who inspired us with a great talk on how mobile technology and open source can help break new ground in understanding disease. Smartphone-based engagement tools, used in real world settings, can enable greater participation in research, cut costs, and make new research designs and insights possible. It’s not just theoretical, as his company, Sage Bionetworks, is walking the walk by open-sourcing its e-consent toolkit and working with Apple on studies using ResearchKit.

Our community demonstrated it is also a powerful driver of innovation. Through innovative applications, integrations with other powerful tools, and custom-built extensions to the OpenClinica core, OpenClinica is playing a role in patient engagement, big data, and translational science. TraIT shared its work integrating OpenClinica into a Translational Molecular Medicine infrastructure (ppt), Aachen University presented its integration of medical imaging applications, and University of Cambridge’s RDCIT team unveiled its integration of sophisticated pedigree and phenotyping capabilities to support clinical genomics research (pdf). We got to take deep dives into powerful GUIs for data import, client libraries for OC’s web services API, and data marts/reporting. And that’s just a few. Many of these efforts are open source and are being shared freely – a phenomenon unique in the field of clinical research.

https://twitter.com/RRittberg/status/604983558562820096/photo/1

https://twitter.com/benbaumann/status/604925177584087040/photo/1

We have become a community that rapidly disseminates ideas and code while holding each other to rigorous standards of quality. We are building on a shared foundation of strong data provenance, audit trails, privacy protections, and GCP compliance. OC15 was a reminder of how openly and effectively our community collaborates, and how great we are at welcoming new participants. I left OC15 inspired and motivated by the participants’ passion and creativity.