
How Can Researchers Balance Security with Emerging Technology in Life Science Research?

Data is the currency of scientific research. Its security should not be left to chance.

Data integrity is crucial across all forms of research, including Life Sciences research.  After all, it’s the only way researchers and regulators can assure the quality, safety, and efficacy of their products.

The way Life Science companies store and communicate data has become increasingly crucial to validating the expectation that their data is safe and secure; as a result, organizations must be hyper-vigilant in mitigating data risks like cyberattacks, data breaches, and record falsification.

In fact, these expectations have grown in recent years. As the Life Science industry grows in complexity, the use of highly automated Cloud-enabled systems makes data integrity increasingly important to sustainable success. Compliance needs are driving organizations to make their data-related processes more robust and secure.

It takes more than controls, processes, and technology to implement good data practice. Life Science research firms must adopt a wider shift towards educating for data risk mitigation and develop a culture that understands and values data integrity.

5 Key Elements of Data Integrity 

Life Science research data needs to be complete, consistent, and accurate throughout the data lifecycle. Ensuring that all original records and true copies – including source data and metadata – remain uncompromised in the Life Science environment is no small feat. It is important to focus on five key characteristics to increase data integrity:

Attributability. Data must be attributable to the specific project or process, and to the individual, that created it. Modifications must produce an audit trail so that people can follow the path data takes through the organization.

Legibility. Data must be legible and durable. If it isn’t readable by the eye, it should be readily accessible by electronic means. Containerization, a means to support legacy software without needing to maintain legacy hardware/IT, is one way Life Science researchers maintain legibility for scientific workflow applications.

Chronology. Metadata should allow auditors to create an accurate version history. Processes that create metadata should do so in an immediate and verifiable way.

Originality. Data should retain its original format whenever possible. Verified copies should also retain original formatting and avoid arbitrary changes.

Accuracy. Data must accurately reflect the activity or task that generated it. Metrics that measure data should be standardized across platforms.
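The attributability, chronology, and accuracy characteristics above can be sketched in code. The following is a minimal, hypothetical illustration (not a production system): an append-only record log in which every entry carries an author, a UTC timestamp, and a hash chained to the previous entry, so any later modification is detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

class RecordLog:
    """Append-only log: each entry is attributable (author), chronological
    (UTC timestamp), and tamper-evident (hash chained to the prior entry)."""

    def __init__(self):
        self.entries = []

    def append(self, author: str, payload: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "author": author,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "payload": payload,
            "prev_hash": prev_hash,
        }
        # The hash covers the whole entry, so any later edit breaks the chain.
        body = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(body).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev_hash = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev_hash:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```

An auditor can call `verify()` at any time; if any record was silently altered, the chain of hashes no longer lines up and verification fails.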

These characteristics ensure that data is complete, consistent, enduring, and available. Once Life Science research firms implement solutions that maintain data integrity, they can begin operating in more risk-intelligent ways.

Life Science Data Risks are Unique 

Several factors combine to give Life Science research a unique risk profile. While many of the threats that Life Science organizations face are the same ones faced by the commercial and government sectors, there are structural risks inherent to the way Life Science research must be carried out.

Intellectual properties in the Life Sciences are incredibly valuable. Drug formulas, medical device blueprints, and clinical data are the result of years of painstaking research. These properties may have life-changing patient impacts and the potential to generate billions of dollars in revenue. Understandably, these assets are of enormous interest to hackers, including attackers sponsored by hostile nation-states.

External hackers are not the only threat; internal risk must be addressed as well. Research teams often exchange sensitive information within different work streams and among a wide range of partners. While sharing data expedites research and development, it also increases the risk of data falling into the wrong hands. Even within the field, it is important to be aware of potentially untrustworthy actors, whether or not their intent is malicious.

Life Science organizations typically rely on a global network of suppliers for hard-to-find materials and equipment. Supply chain attacks – where attackers exploit a weak link in a trusted vendor to infiltrate organizations down the supply chain – are a dangerous and growing trend.

Mergers and acquisitions within the Life Science industry also have a tendency to increase security risks. When two companies merge, they inevitably share data in a trust-oriented environment. If both companies’ IT teams have not taken sufficient action to secure that environment first (or adopted a zero-trust model), new vulnerabilities may come to light.

Implement Cloud Security and Risk Mitigation Strategies 

Life Science researchers do not have to give up on the significant advantages that Cloud technology offers. They simply must plan for security contingencies that reflect today’s data risk environment.

Mitigating Cloud risk means establishing a robust cybersecurity policy that doesn’t simply conform to industry standards, but exceeds them. Beyond well-accepted methods like multi-factor authentication, full data encryption (in transit and at rest) and data exfiltration monitoring add layers of protection, but they require adopting a more proactive stance towards security as a tenet of workplace culture. For example, it’s critical that teams manage encryption keys, fine-grained security, and network access controls internally (vs. outsourcing to the Public Cloud provider). Additionally, workflow controls and empowered data stewards help put controls in place with reduced impact on collaborative work.
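One small piece of the internal-key-custody idea can be sketched as follows. This is a simplified, hypothetical example: it shows only integrity tagging with an internally held key, whereas a real deployment would use a key-management service plus full encryption of the payload itself.

```python
import hashlib
import hmac
import os

# The signing key is generated and held internally -- it is never handed
# to the Cloud provider. (Illustrative only; production systems would use
# a KMS and encrypt the payload, not just tag it.)
SIGNING_KEY = os.urandom(32)

def sign(data: bytes, key: bytes = SIGNING_KEY) -> str:
    """Produce an integrity tag for data before it leaves the organization."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str, key: bytes = SIGNING_KEY) -> bool:
    """Check a tag on retrieval; a mismatch means the data was altered."""
    return hmac.compare_digest(sign(data, key), tag)
```

Because only the organization holds `SIGNING_KEY`, neither the Cloud provider nor an attacker who tampers with stored data can forge a valid tag.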

In summary, every research position is also a cybersecurity position. Teaching team members to maintain data integrity ensures secure, consistent access to innovative technologies like the Cloud.

RCH Solutions is a global provider of computational science expertise, helping Life Sciences and Healthcare firms of all sizes clear the path to discovery for nearly 30 years. If you’re interested in learning how RCH can support your goals, get in touch with us here. 

5 Ways to Adopt a Proactive Approach to Shifting Biopharma Business Models

If the events of the past year have taught us anything, it’s that Life Science organizations—like most other businesses—need infrastructure that can adapt to unpredictable and disruptive risks. 

Those that adopt certain actionable strategies today are going to be better-suited to ensure disruption-free activity in the future. Here are five you can implement now to help your team prepare.

1) Make Smart Automation Core to Delivering Value

Cloud-based technology has already begun to fundamentally change the way data storage and scientific computing takes place in the R&D environment. As Cloud capabilities improve over time, and the ability to securely capture and interpret scientific data increases, scientific research companies are going to become more efficient, compliant, and secure than ever before.

2) Leverage Interoperable Data to Drive New Value-Generating Insights 

The rise of automation in this industry will enable a far greater degree of data interoperability and transparency. Scientific organizations will have to drive value through their ability to derive accurate insights from widely available data. Disparities between the way research organizations make use of their tech platforms will always exist, but the rules of the game are likely to change when everyone at the table can see all of the pieces.

3) Let Platform Infrastructure Developers Take Center Stage

It’s clear that the future of biotech and pharmaceutical development will require massive changes to the data infrastructure that researchers use to drive progress and communicate with one another.  Over the next two decades, health solutions may no longer be limited to medical devices and pharmaceuticals. Software applications and data-centric communication algorithms are going to become increasingly important when informing healthcare professionals about what actions to take. The availability of personalized therapies will transform the way research and development approach their core processes.

4) Focus on Highly Personalized Data-Powered Therapies 

Until now, biopharmaceutical companies largely focused their efforts on developing one-size-fits-all therapies to treat chronic illnesses that impact large swaths of the population. However, the post-COVID healthcare world is likely to challenge this strategy in the long term, as more researchers focus on highly personalized therapies developed using highly efficient, data-centric methods. These methods may include using patient genomics to predict drug efficacy and gathering data on patients’ microbiome samples for less understood conditions like Alzheimer’s.

5) Commit to Ongoing Research and Empower Smaller-Volume Therapies

The wide availability of patient data will allow medical researchers to continue interpreting it well after a drug hits the market. This raw data enables researchers to identify opportunities to work with clinicians on developing new treatment pathways for particular groups of patients. Smaller-volume therapies will require new manufacturing capabilities, as well as cross-disciplinary cooperation empowered by remote collaboration tools – all of which require investment in new Cloud-based data infrastructure.

Future Development Depends on Interoperable Data Accessibility

Taking the actionable steps above will drive up value in the biotech and pharmaceutical space exponentially. The availability of interoperable data is already showing itself to be a key value driver for enabling communication between researchers. As the industry leans on this technology more and more over time, integrating state-of-the-art solutions for collecting, handling, and communicating data will become a necessity for R&D organizations to remain competitive. Adapting to the “new normal” is going to mean empowering researchers to communicate effectively with patients, peers, and institutions remotely, leveraging real-time data in an efficient, organized way.


5 Things Your Computing Partner Should Never Say

Reputable scientific computing consultants don’t say these things.

Today’s Life Science and biopharmaceutical research processes rely heavily on high-performance computing technology. Your scientific computing partner plays a major role in supporting discovery and guaranteeing positive research outcomes.

It should come as no surprise that not just anyone can fulfill such a crucial role. Life Science executives and research teams place a great deal of trust in their scientific computing advisors – it’s vital that you have absolute confidence in their abilities.

But not all scientific computing vendors are equally capable, and it can be difficult to tell whether you’re dealing with a real expert. Pay close attention to the things vendors say and be on the lookout for any of these five indicators that they may not have what it takes to handle your research firm’s IT needs.

5 Things Your Scientific Computing Vendor Should Never Say 

If you feel like your partner could be doing more to optimize research processes and improve outcomes, pay close attention to some of the things they say. Any one of these statements can be cause for concern in a scientific computing partnership:

1. “But you never told us you needed that.”

Scientific computing is a dynamic field, with ongoing research into emerging technologies leading to a constant flow of new discoveries and best practices. Your scientific computing partner can’t assume it’s your job to stay on top of those developments. They must be proactive, offering solutions and advice even when not specifically solicited.

Your focus should be on core life science research – not how the latest high-performance computing hardware may impact that research. A great scientific computing vendor will understand what you need and recommend improvements to your processes of their own initiative.  

2. “It worked for our other clients.” 

It doesn’t matter what “it” is. The important part is that your scientific computing vendor is explicitly comparing you to one of your competitors. This indicates a “one-size-fits-all” mentality that does not predict success in the challenging world of Life Science research.

Every Life Science research firm is unique, especially with respect to its processes. Setting up and supporting a compute environment involves assessing and responding to a wide variety of unique challenges. If your computing vendor doesn’t recognize those challenges as unique, they are probably ignoring valuable opportunities to help you improve research outcomes.

3. “Yes, we are specialists in that field too.” 

There is little room for jacks-of-all-trades in the world of scientific computing. Creating, implementing, and maintaining scientific computing frameworks to support research outcomes requires in-depth understanding and expertise. Life Science research has a unique set of requirements that other disciplines do not generally share.

If your Life Science computing vendor also serves research organizations in other disciplines, it may indicate a lack of specialization. It might mean that you don’t really have reliable field-specific expertise on hand, but rather a more general advisor who may not always know how best to help you achieve research outcomes.

4. “We met the terms of our service-level agreement.”

This should be a good thing, but watch out for scientific computing vendors who use it defensively. Your vendors may be more focused on meeting their own standards and abiding by contractually defined service-level agreements than helping you generate value through scientific research.

Imagine what happens when your project’s standards and objectives come into conflict with your vendor’s revenue model. If your vendor doesn’t have your best interests at heart, falling back to the service-level agreement is a convenient defense mechanism.

5. “We’re the only approved partner your company can use.”

If the primary reason you’re working with a specific vendor is that they are the only approved partner within your company, that’s a problem. It means you have no choice but to use their services, regardless of how good or bad they might be.

Giving anyone monopolistic control over the way your research team works is a risky venture. If you don’t have multiple approved vendors to choose from, you may need to make your case to company stakeholders and enact change.

Don’t Miss Out on Scientific Computing Innovation 

There are many opportunities your Life Science research firm may be positioned to capitalize on, but most of them rely on a productive partnership with your scientific computing vendor. Life Science organizations can no longer accept scientific computing services that are merely “good enough” when the alternative is positive transformational change. 


Modernizing the R&D Environment, Part Two: When a Multi-Cloud Strategy Makes Sense

There are good reasons to balance Cloud infrastructure between multiple vendors.

In Part One of this series, we discussed some of the applications and workflows best-suited for public Cloud deployment. But public Cloud deployments are not the only option for life science researchers and biopharmaceutical IT teams. Hybrid Cloud and multi-Cloud environments can offer the same benefits in a way that’s better aligned to stakeholder interests.

What is a Multi-Cloud Strategy?

Multi-Cloud refers to an architectural approach that uses multiple Cloud computing services in parallel. Organizations that adopt a multi-Cloud strategy are able to distribute computing resources across their deployments and minimize over-reliance on a single vendor.

Multi-Cloud deployments allow Life Science researchers and Bio-IT teams to choose between multiple public Cloud vendors when distributing computing resources. Some Cloud platforms are better suited for certain tasks than others, and being able to choose between multiple competing vendors puts the organization at an overall advantage.
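One common way to keep that choice open is to code against a vendor-neutral interface rather than a single provider's SDK. The sketch below is illustrative only: `InMemoryStore` and the routing policy are hypothetical stand-ins, and a real system would implement one backend per Cloud provider behind the same interface.

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Vendor-neutral storage interface: research code targets this API,
    so switching or mixing Cloud providers is a configuration change."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    # Stand-in backend; real deployments would wrap each vendor's SDK
    # in a subclass implementing the same two methods.
    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]

def route_workload(sensitivity: str, stores: dict) -> BlobStore:
    """Pick a backend per task -- e.g., send sensitive data to the
    provider with the stronger compliance posture (illustrative policy)."""
    return stores["compliant"] if sensitivity == "high" else stores["general"]
```

Because every workload talks to `BlobStore` rather than a provider SDK, per-task routing decisions stay cheap to change.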

Why Bio-IT Teams Might Want to Adopt a Multi-Cloud Strategy

Working with a single Cloud computing provider for too long can make it difficult to move workloads and datasets from one provider to another, especially as needs and requirements change, which they often do within Life Sciences organizations. Highly centralized IT infrastructure tends to accumulate data gravity – the tendency for data analytics and other applications to converge on large data repositories, making it difficult to scale data capabilities outwards.

This may go unnoticed until business or research goals demand migrating data from one platform to another. At that point, the combination of data gravity and vendor lock-in can suddenly impose unexpected technical, financial, and legal costs.

Cloud vendors do not explicitly prevent users from migrating data and workflow applications. However, they have a powerful economic incentive to make the act of migration as difficult as possible. Letting users flock to their competitors is not strictly in their interest.

Not all Cloud vendors do this, but any Cloud vendor can decide to. Since Cloud computing agreements can change over time, users who deploy public Cloud technology with a clear strategy for avoiding complex interdependencies will generally fare better than users who simply go “all in” with a single vendor.

Multi-Cloud deployments offer Life Science research organizations a structural way to eliminate over-reliance on a single Cloud vendor. Working with multiple vendors from the start demands that researchers and IT teams plan for data and application portability.

Multi-Cloud deployments also allow IT teams to better optimize diverse workflows with scalable computing resources. When researchers demand new workloads, their IT partners can choose an optimal platform for each one of them on a case-by-case basis.

This allows researchers and IT teams to coordinate resources more efficiently. One research application’s use of sensitive data may make it better suited for a particular Cloud provider, while another workflow demands high-performance computing resources only available from a different provider. Integrating multiple Cloud providers under a single framework can enable considerable efficiencies through each stage of the research process.

What About Hybrid Cloud?

Hybrid Clouds are IT architectures that rely on a combination of private Cloud resources alongside public Cloud systems. Private Cloud resources are simply Cloud-based architectures used exclusively by one organization.

For example, imagine your life science firm hosts some research applications on its own internal network but also uses Microsoft Azure and Amazon AWS. This is a textbook example of a multi-Cloud architecture that is also hybrid.

Hybrid Cloud environments may offer benefits to Life Science researchers that need security and compliance protection beyond what public Cloud vendors can easily offer. Private Cloud frameworks are ideal for processing and storing sensitive data.

Hybrid Cloud deployments may also present opportunities to reduce overall operating expenses over time. If researchers are sure they will use certain Cloud computing resources consistently for years, hosting those applications on a private Cloud deployment may prove more cost-efficient over that period. 

It’s common for Bio-IT teams to build private on-premises Cloud systems for small, frequently used applications and then use easily scalable public Cloud resources to handle less frequent high-performance computing jobs. This hybrid approach allows life science research organizations to get the best of both worlds.
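The cost trade-off behind that hybrid split can be reduced to simple break-even arithmetic. The figures below are hypothetical, and the model is deliberately simplified (it ignores staffing, power, and data-egress costs):

```python
def breakeven_hours(on_prem_monthly: float, cloud_hourly: float) -> float:
    """Hours of use per month above which a fixed-cost private deployment
    beats on-demand public Cloud pricing (simplified model)."""
    return on_prem_monthly / cloud_hourly

# Example with made-up numbers: a $2,920/month private node vs. an
# $8/hour on-demand instance breaks even at 365 hours/month -- roughly
# 50% utilization. Workloads busier than that favor private hosting;
# burstier workloads favor on-demand public Cloud.
hours = breakeven_hours(on_prem_monthly=2920.0, cloud_hourly=8.0)
```

This is the intuition behind the pattern above: steady, always-on applications go private, while spiky HPC jobs stay on pay-per-use public capacity.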

Optimize your Cloud Strategy for Achieving Research Goals 

Life Science research organizations generate value by driving innovation in a dynamic and demanding field. Researchers who can perform computing tasks on the right platform and infrastructure for their needs are ideally equipped to make valuable discoveries. Public Cloud, multi-Cloud, and hybrid Cloud deployments are all viable options for optimizing Life Science research with proven technology.


Get on Board with Emerging Biopharma Technology

Explore the cutting-edge tools revolutionizing the industry. 

Computer-aided design (CAD) is already an essential tool for most biopharmaceutical research teams. But as CAD converges with technologies such as deep learning (DL), machine learning (ML), and artificial intelligence (AI), which are evolving faster than we can imagine, even the most tech-savvy teams risk missing out on the transformational benefits of these state-of-the-art tools. Explore the advantages of the latest CAD advancements and learn how to break down barriers to unlocking the full potential of these innovations, even if your organization has yet to reach for the key.

The Capabilities of Cutting-Edge CAD

Experts in AI, ML, and DL have documented a few primary areas of digital transformation within the biopharmaceutical realm.

Drug discovery

ML gives labs the ability to quickly process vast drug datasets to find matches that could fuel the discovery of new pharmaceutical treatments.

As reported by Genetic Engineering & Biotechnology News, the firm Atomwise pioneered the use of convolutional neural networking—a form of ML used in common consumer tech like social media photo tagging—for more than 550 initiatives, including drug discovery, dose optimization, and toxicity screening.
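Atomwise's convolutional networks are far beyond a snippet, but the simpler idea underlying virtual screening, matching a query compound against a library by fingerprint similarity, can be sketched. The fingerprints and threshold below are hypothetical; real pipelines use cheminformatics toolkits to generate fingerprints from molecular structures.

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto coefficient between two molecular fingerprints,
    represented here as sets of 'on' bit positions."""
    if not fp_a and not fp_b:
        return 0.0
    shared = len(fp_a & fp_b)
    return shared / (len(fp_a) + len(fp_b) - shared)

def screen(query: set, library: dict, threshold: float = 0.7) -> list:
    """Return names of library compounds similar enough to the query
    to be worth closer investigation."""
    return [name for name, fp in library.items()
            if tanimoto(query, fp) >= threshold]
```

Run over millions of library compounds, this kind of similarity screen is how ML-adjacent methods narrow "nearly infinite" chemical space down to a short list of candidates for wet-lab testing.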

Personalized medicine

This term describes the use of a type of ML called predictive analytics, in which a patient’s individual health data provides the physician with detailed information about their genetics, risks, and possible diagnoses. The partnership between IBM Watson Oncology and Memorial Sloan Kettering exemplifies the area of personalized medicine as they drive research into the use of this modality to optimize patient treatments and outcomes. The availability of mobile apps, devices, and biosensors for remote health analysis will dramatically expand this area in the coming years.

Patient and site selection for clinical trials

Biopharma companies can significantly reduce the cost and time investment of clinical trials with the application of ML algorithms. In a 2018 article, research firm Evidera cited an analysis of 151 international clinical trials at nearly 16,000 sites. The study, which appeared in the journal Therapeutic Innovation and Regulatory Science, uncovered the difficulty of finding appropriate patients for clinical trials, especially for central nervous system conditions. AI models can potentially use data mining to find subjects who have not yet been diagnosed with the disease in question.

Overcoming barriers to focus on the future

Research institutions of all sizes struggle to adopt emerging tech. The most common blockades to biopharma progress in this area include:

  • Internal culture that resists innovation
  • Limited infrastructure and resources to invest in technology
  • Misplaced commitment to legacy tools and practices that prevent experimentation
  • Lack of digital leadership from C-level executives
  • Limited access to clean, reliable datasets
  • Barriers to interoperability among collaborators, often even within the same organization
  • Concerns about data privacy and security, as well as other regulatory issues

The ethical implications of artificial intelligence and machine learning also pose issues, as technology evolves faster than we can answer questions about bias, transparency, and related concerns.

Today’s emerging biopharma tech will rapidly evolve and replace itself with tomorrow’s innovations. Within just a few years, modern labs that realize the possibilities of AI, DL, and ML will leave traditional biopharma firms in their dust with little hope of recovery. For example, companies that do not adopt advanced AI methods to recruit participants for clinical trials will struggle to complete the necessary research to produce new products. Deloitte estimates that fewer than 17 percent of the 30,000 trials registered on ClinicalTrials.gov in 2018 ever published results. 

In a 2019 study by Deloitte Insights, 79 percent of biotech respondents said their companies planned to implement new CAD advancement within the next five years, with 58 percent citing digital transformation as a top leadership priority. The prior year, a benchmarking study by the research firm found that 60 percent of biotech firms used machine learning, and 96 percent anticipated using it in coming years.


A Partly Cloudy Day: Which Applications and Workflows Work Best in the Public Cloud

Life Science researchers are beginning to actively embrace public Cloud technology. Research labs that manage IT operations more efficiently have more resources to spend on innovation.

As more Life Science organizations migrate IT infrastructure and application workloads to the public Cloud, it’s easier for IT leaders to see what works and what doesn’t. The nature of Life Science research makes some workflows more Cloud-friendly than others.

Why Implement Public Cloud Technology in the Life Science Sector?

Most enterprise sectors invest in public Cloud technology in order to gain cost benefits or accelerate time to market. These are not the primary driving forces for Life Science research organizations, however.

Life Science researchers in drug discovery and early research see public Cloud deployment as a way to consolidate resources and better utilize in-house expertise on their core deliverable—data. Additionally, the Cloud’s on-demand scalability suits Life Science research workflows with unpredictable computing demands.

These factors combine to make public Cloud deployment a viable solution for modernizing Life Science research and fostering transformation. It can facilitate internal collaboration, improve process standardization, and extend researchers’ IT ecosystem to more easily include third-party partners and service providers.

Which Applications and Workflows are Best-Suited to Public Cloud Deployment?

For Life Science researchers, the primary value of any technology deployment is its ability to facilitate innovation. Public Cloud technology is no different. Life Science researchers and IT leaders are going to find the greatest and most immediate value utilizing public Cloud technology in collaborative workflows and resource-intensive tasks.

1. Analytics

Complex analytical tasks are well-suited for public Cloud deployment because they typically require intensive computing resources for brief periods of time. A Life Science organization that invests in on-premises analytics computing solutions may find that its server farm is underutilized most of the time.

Public Cloud deployments are valuable for modeling and simulation, clinical trial analytics, and other predictive analytics processes that enable scientists to save time and resources by focusing their efforts on the compounds that are likely to be the most successful. They can also help researchers glean insight from translational medicine applications and biomarker pathways and ultimately, bring safer, more targeted, and more effective treatments to patients. Importantly, they do this without the risk of overpaying and underutilizing services.

2. Development and Testing

The ability to rapidly and securely build multiple development environments in parallel is a collaborative benefit that facilitates Life Science innovation. Again, this is an area where life science firms typically have the occasional need for high-performance computing resources – making on-demand scalability an important cost-benefit.

Public Cloud deployments allow IT teams to perform large system stress tests in a streamlined way. System integration testing and user acceptance testing are also well-suited to the scalable public Cloud environment.

3. Infrastructure Storage

In a hardware-oriented life science environment, keeping track of the various development ecosystems used to glean insight is a challenge. It is becoming increasingly difficult for hardware-oriented Life Science research firms to ensure the reproducibility of experimental results, simply because of infrastructural complexity.

Public Cloud deployments enable cross-collaboration and ensure experimental reproducibility by enabling researchers to save infrastructure as data. Containerized research applications can be opened, tested, and communicated between researchers without the need for extensive pre-configuration.
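"Infrastructure as data" can be made concrete with a small sketch. The environment spec below is hypothetical, but the idea is real: when the full environment is described declaratively, a content hash of that description becomes a reproducibility fingerprint that collaborators can compare.

```python
import hashlib
import json

def environment_fingerprint(spec: dict) -> str:
    """Content hash of a declarative environment spec. Two researchers
    whose specs share a fingerprint are running identical environments,
    so results can be reproduced without manual reconfiguration."""
    canonical = json.dumps(spec, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical spec -- the kind of information a container image encodes.
analysis_env = {
    "base_image": "python:3.11-slim",
    "packages": {"numpy": "1.26.4", "pandas": "2.2.2"},
}
```

Any change to a package version produces a different fingerprint, which is exactly what makes drift between collaborators' environments visible instead of silent.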

4. Desktop and Devices

Research firms that invest in public Cloud technology can spend less time and resources provisioning validated environments. They can provision virtual desktops to vendors and contractors in real-time, without having to go through a lengthy and complicated hardware process.

Life Science research organizations that share their IT platform with partners and contractors are able to utilize computing resources more efficiently and reduce their data storage needs. Instead of storing data in multiple places and communicating an index of that data to multiple partners, all of the data can be stored securely in the Cloud and made accessible to the individuals who need it.

5. Infrastructure Computing

Biopharmaceutical manufacturing is a non-stop process that requires a high degree of reliability and security. Reproducible high-performance computing (HPC) environments in the Cloud allow researchers to create and share computational biology data and biostatistics in a streamlined way.

Cloud-enabled infrastructure computing also helps Life Science researchers monitor supply chains more efficiently. Interacting with supply chain vendors through a Cloud-based application enables researchers to better predict the availability of research materials, and plan their work accordingly.

Hybrid Cloud and Multi-Cloud Models May Offer Greater Efficiencies 

Public Cloud technology is not the only infrastructural change happening in the Life Science industry. Certain research organizations can maximize the benefits of cloud computing through hybrid and multi-Cloud models, as well. The second part of this series will cover what those benefits are, and which Life Science research firms are best-positioned to capitalize on them.


Platform DevOps – 7 Reasons You Should Consider It for Your Research Co

Many researchers already know how useful DevOps is in the Life Sciences. 

Relying on specialty or proprietary applications and technologies to power their work, biotech and pharmaceutical companies have benefited from DevOps practices that enable continuous development and micro-service-based system architectures. This has dramatically expanded the scope and capabilities of these focused applications and platforms, enabling faster, more accurate research and development processes.

But most of these applications and technologies rely on critical infrastructure that is often difficult to deploy when needed. Manually provisioning and configuring IT infrastructure to meet every new technological demand can become a productivity bottleneck for research organizations, while the cost surges in response to increased demand.

As a result, these teams are looking toward Platform DevOps—a model for applying DevOps processes and best practices to infrastructure—to address infrastructural obstacles by enabling researchers to access IT resources in a more efficient and scalable way.

Introducing Platform DevOps: Infrastructure-as-Code

One of the most useful ways to manage IT resources towards Platform DevOps is implementing infrastructure-as-code solutions in the organization. This approach uses DevOps software engineering practices to enable continuous, scalable delivery of compute resources for researchers.

These capabilities are essential, as Life Science research increasingly relies on complex hybrid Cloud systems. IT teams need to manage larger and more granular workloads through their infrastructure and distribute resources more efficiently than ever before.

7 Benefits to Adopting Infrastructure-as-Code

The ability to create and deploy infrastructure with the same agile processes that DevOps teams use to build software has powerful implications for life science research. It enables transformative change to the way biotech and pharmaceutical companies drive value in seven specific ways:

1. Improved Change Control
Deploying an improved change management pipeline using infrastructure-as-code makes it easy to scale and change software configurations whenever needed. Instead of ripping and replacing hardware tools, all it takes to revert to a previous infrastructural configuration is the appropriate file. This vastly reduces the amount of time and effort it takes to catalog, maintain, and manage infrastructure versioning.
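The rollback idea can be sketched in a few lines. The `ConfigStore` below is a hypothetical stand-in, not any particular IaC tool: when every configuration version is committed as a file (here, a JSON snapshot), reverting becomes a lookup rather than a hardware change.

```python
import json

class ConfigStore:
    """Stores every infrastructure configuration version; rollback is just a lookup."""

    def __init__(self):
        self.versions = []  # JSON snapshots; the list index doubles as the version id

    def commit(self, config: dict) -> int:
        self.versions.append(json.dumps(config, sort_keys=True))
        return len(self.versions) - 1

    def checkout(self, version: int) -> dict:
        return json.loads(self.versions[version])

store = ConfigStore()
v0 = store.commit({"instance_type": "m5.large", "count": 2})
v1 = store.commit({"instance_type": "m5.xlarge", "count": 4})
rollback = store.checkout(v0)  # reverting is a file lookup, not a hardware swap
```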

2. Workload Drift Detection
Over time, work environments become unique. Their idiosyncrasies make them difficult to reproduce automatically. This is called workload drift, and it can cause deployment issues, security vulnerabilities, and regulatory risks. Infrastructure-as-code solves the problem of workload drift through idempotence – the property that applying the same operation repeatedly produces the same result.
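A toy sketch of what idempotence means in practice: the `apply` function below (a simplified stand-in for a real configuration-management tool, with made-up state keys) detects the drift between actual and desired state and converges on it, so a second run is a no-op.

```python
def apply(state: dict, desired: dict) -> dict:
    """Idempotent apply: change only the keys that differ from the desired config."""
    drift = {k: v for k, v in desired.items() if state.get(k) != v}
    return {**state, **drift}

actual = {"packages": "r-base-4.1", "firewall": "open"}
desired = {"packages": "r-base-4.1", "firewall": "locked-down"}

once = apply(actual, desired)
twice = apply(once, desired)  # a second run detects no drift and changes nothing
```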

3. Better Separation of Duties
It’s a mistake to think separation of duties is incompatible with the DevOps approach. In fact, DevOps helps IT teams offer greater quality, security, and auditability through separation of duties than traditional approaches. The same is true for infrastructure, where separation of duties helps address errors and mitigate the risk of non-compliance.

4. Optimized Review and Approval Processes
The ability to audit employees’ work is crucial. Regulators need to be able to review the infrastructure used to arrive at scientific conclusions and see how that infrastructure is deployed. Infrastructure-as-code enables stakeholders and regulators to see eye-to-eye on infrastructure.

5. Faster, More Efficient Server and Application Builds
Before Cloud technology became commonplace, deploying a new server could take hours, days, or even longer, depending on the organization. Now, it takes mere minutes. However, manually configuring new servers to reflect the state of existing assets and scaling them to meet demand is challenging and expensive. Infrastructure-as-code automates this process, allowing users to instantly deploy or terminate server instances.

6. Guaranteed Compliance
Since the state of your IT infrastructure is defined in code, it is easily readable and reproducible. This means that the process of establishing compliant workflows for new servers and application builds is automatic. There is no need to verify a carbon copy of a fully compliant server because it was directly built with compliant architecture.

7. Tougher Security
Shifting to infrastructure-as-code allows Life Science researchers to embed best-in-class security directly into new servers from the very beginning. There is no period where unsecured servers are available on the network waiting for cybersecurity personnel to secure them. The entire process is saved to the configuration file, making it infinitely repeatable.

Earn Buy-In for Platform DevOps From Your IT Team

Implementing infrastructure-as-code can be a difficult sell for IT team members, who may resist the concept. Finding common ground between IT professionals and researchers is key to enabling the optimal deployment of DevOps best practices for research compute environments.

Clear-cut data and a well-organized implementation plan can help you make the case successfully. Contact RCH Solutions to find out how we helped a top-ten global pharmaceutical company implement the Platform DevOps model into its research compute environment.

 

RCH Solutions is a global provider of computational science expertise, helping Life Sciences and Healthcare firms of all sizes clear the path to discovery for nearly 30 years. If you’re interested in learning how RCH can support your goals, get in touch with us here. 

How to Overcome Legacy Obstacles and Implement a Cloud-First Strategy

Transformative change means rethinking the scientific computing workflow. 

The need to embrace and enhance data science within the Life Sciences has never been greater. Yet, many Life Sciences organizations performing drug discovery face significant obstacles when transforming their legacy workflows.

Multiple factors contribute to the friction between the way Life Science research has traditionally been run and the way it needs to run moving forward. Companies that overcome these obstacles will be better equipped to capitalize on tomorrow’s research advances.

5 Obstacles to the Cloud-First Data Strategy and How to Address Them 

Life Science research organizations are right to dedicate resources towards maximizing research efficiency and improving outcomes. Enabling the full-scale Cloud transformation of a biopharma research lab requires identifying and addressing the following five obstacles.

1. Cultivating a Talent Pool of Data Scientists

Life Science researchers use a highly developed skill set to discover new drugs, analyze clinical trial data, and perform biostatistics on the results. These skills do not always overlap with the demands of next-generation data science infrastructure. Life Science research firms that want to capitalize on emerging data science opportunities will need to cultivate data science talent they can rely on.

Aligning data scientists with therapy areas and enabling them to build a nuanced understanding of drug development is key to long-term success. Biopharmaceutical firms need to embed data scientists in the planning and organization of clinical studies as early as possible and partner them with biostatisticians to build productive long-term relationships.

2. Rethinking Clinical Trials and Collaborations

Life Science firms that begin taking a data science-informed approach to clinical studies in early drug development will have to ask difficult questions about past methodologies:

  • Do current trial designs meet the needs of a diverse population?
  • Are we including all relevant stakeholders in the process?
  • Could decentralized or hybrid trials drive research goals in a more efficient way?
  • Could we enhance patient outcomes and experiences using the tools we have available?
  • Will manufacturers accept and build the required capabilities quickly enough?
  • How can we support a global ecosystem for real-world data that generates higher-quality insights than what was possible in the past?
  • How can we use technology to make non-data personnel more capable – and data-enabled – in a cloud-first environment?

All of these questions focus on the ability of data science-backed cloud technology to enable new clinical workflows. Optimizing drug discovery requires addressing inefficiencies in clinical trial methodology.

3. Speeding Up the Process of Achieving Data Interoperability

Data silos are among the main challenges that Life Science researchers face with legacy systems. Many Life Science organizations lack a company-wide understanding of the total amount of data and insights they have available. So much data is locked in organizational silos that merely taking stock of existing data assets is not possible.

The process of cleaning and preparing data to fuel AI-powered data science models is difficult and time-consuming. Manually transforming terabyte-sized databases with millions of person-level records into curated, AI-ready databases is slow, expensive, and prone to human error.

Automated interoperability pipelines can reduce the time spent on this process to a matter of hours. The end result is a clean, accurate database fully ready for AI-powered data science. Researchers can now create longitudinal person records (LPRs) with ease.
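As a simplified illustration of that final step, the sketch below (with hypothetical field names) groups cleaned per-event records into longitudinal person records keyed by a person ID and ordered chronologically:

```python
from collections import defaultdict

def build_lprs(records):
    """Group cleaned per-event records into longitudinal person records by ID."""
    lprs = defaultdict(list)
    for rec in records:
        lprs[rec["person_id"]].append(
            {k: v for k, v in rec.items() if k != "person_id"}
        )
    for history in lprs.values():  # order each person's history chronologically
        history.sort(key=lambda event: event["date"])
    return dict(lprs)

events = [
    {"person_id": "p1", "date": "2021-03-01", "event": "screening"},
    {"person_id": "p2", "date": "2021-01-15", "event": "enrollment"},
    {"person_id": "p1", "date": "2021-01-10", "event": "enrollment"},
]
lprs = build_lprs(events)
```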

4. Building Infrastructure for Training Data Models

Transforming legacy operations into fast, accurate AI-powered ones requires transparent access to many different data sources. Setting up the infrastructure necessary takes time and resources. Additionally, it can introduce complexity when identifying how to manage multiple different data architectures. Data quality itself may be inconsistent between sources.

Building a scalable pipeline for training AI data models requires scalable cloud technology that can work with large training datasets quickly. Without reputable third-party infrastructure in place, the process of training data models can take months.

5. Protecting Trade Secrets and Patient Data

Life Science research often relies on sensitive technologies and proprietary compounds that constitute trade secrets for the company in question. Protecting intellectual property has always been a critical challenge in the biopharmaceutical industry, and today’s cybersecurity landscape only makes it more important.

Clinical trial data, test results, and confidential patient information must be protected in compliance with privacy regulations. Life Science research organizations need to develop centralized policies that control the distribution of sensitive data to internal users and implement automated approval process workflows for granting access to sensitive data.
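A centralized policy plus an approval workflow can be reduced to a small sketch. The dataset names, sensitivity labels, and default-deny behavior below are illustrative assumptions, not any specific product's model:

```python
# Central policy: each dataset carries a sensitivity label (illustrative names)
SENSITIVITY = {"trial_results": "restricted", "press_kit": "public"}
# (dataset, user) pairs granted through an automated approval workflow
APPROVALS = {("trial_results", "alice")}

def may_access(user: str, dataset: str) -> bool:
    """Public data is open; anything else (including unknown datasets) is default-deny."""
    if SENSITIVITY.get(dataset, "restricted") == "public":
        return True
    return (dataset, user) in APPROVALS
```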

Endpoint security solutions help ensure sensitive data is only downloadable to approved devices and shared according to protocol. This enables Life Science researchers to share information with partners and supply chain vendors without compromising confidentiality.

A Robust Cloud-First Strategy is Your Key to Life Science Modernization

Deploying emergent technologies in the Life Science industry can lead to optimal research outcomes and better use of company resources. Developing a cloud computing strategy that either supplements or replaces aspects of your legacy system requires input and buy-in from every company stakeholder it impacts. Consult with the expert Life Science research consultants at RCH Solutions to find out how your research team can capitalize on the digital transformation taking place in Life Science.

RCH Solutions is a global provider of computational science expertise, helping Life Sciences and Healthcare firms of all sizes clear the path to discovery for nearly 30 years. If you’re interested in learning how RCH can support your goals, get in touch with us here. 

AI Ecosystems, Edge, and the Potential for Quantum Computing in Research Science

Key Takeaways from NVIDIA’s GTC Conference Keynote

I recently attended NVIDIA’s GTC conference. Billed as the “number one AI conference for innovators, technologists, and creatives,” the keynote by NVIDIA’s always dynamic CEO, Jensen Huang, did not disappoint.

Over the course of his lively talk, Huang detailed how NVIDIA’s DGX line, which RCH has been selling and supporting since shortly after the inception of DGX, continues to mature as a full-blown AI enabler.

How? Scale, essentially.

More specifically, though, NVIDIA’s increasing lineup of available software and models will facilitate innovation by removing much of the software infrastructure work and providing frameworks and baselines on which to build.

In other words, one will not be stuck reinventing the wheel when implementing AI (a powerful and somewhat ironic analogy when you consider the impact of both technologies—the wheel and artificial intelligence—on human civilization). 

The result, just as RCH promotes in Scientific Compute, is that the workstation, server, and cluster look the same to the users so that scaling is essentially seamless.

While cynics could see what they’re doing as a form of vendor lock, I’m looking at it as prosperity via an ecosystem. Similar to the way I, and millions of other people around the world, are vendor-locked into Apple because we enjoy the “Apple ecosystem”, NVIDIA’s vision will enable the company to transcend its role as simply an emerging technology provider (which to be clear, is no small feat in and of itself) to become a facilitator of a complete AI ecosystem. In such a situation, like Apple, the components are connected or work together seamlessly to create a next-level friction-free experience for the user.

From my perspective, the potential benefit of that outcome—particularly within drug research/early development where the barriers to optimizing AI are high—is enormous.

The Value of an AI Ecosystem in Drug Discovery

The Cliff’s Notes version of how NVIDIA plans to operationalize its vision (and my take on it), is this: 

  • Application Sharing: NVIDIA touted Omniverse as a collaborative platform — “universal” sharing of applications and 3D content. 
  • Data Centralization: The software-defined data center (BlueField-2 & 3 / DPU) was also quite compelling, though in the world of R&D we live in at RCH, it’s really more about Science and Analytics than Infrastructure. Nonetheless, I think we have to acknowledge the potential here.
  • Virtualization: GPU virtualization was also impressive (though like BlueField, this is not new but evolved). In my mind, I wrestle with virtualization for density when it comes to Scientific Compute, but we (collectively) need to put more thought into this.
  • Processing: NVIDIA is pushing its own CPU as the final component in the mix, which is an ARM-based processor. ARM is clearly going to be a force moving forward, and Intel x86_64 is aging … but we also have to acknowledge that this will be an evolution and not a flash-cut.

What’s interesting is how this approach could play to enhance in-silico Science. 

Our world is Cloud-first. Candidly, I’m a proponent of that for what I see as legitimate reasons (you can read more about that here). But like any business, Public Cloud vendors need to cater to a wide audience to better the chances of commercial success. While this philosophy leads to many beneficial services, it can also be a blocker for specialized/niche needs, like those in drug R&D. 

To this end, Edge Computing (for those still catching up: a high-bandwidth, very-low-latency specialty compute strategy in which co-location centers are topologically close to the Cloud) is a solution. 

Edge Computing is a powerful paradigm in Cloud Computing, enabling niche features and cost controls while maintaining a Cloud-first tack. Thus, teams are able to take advantage of the benefits of a Public Cloud for data storage, while augmenting what Public Cloud providers can offer by maintaining compute on the Edge. It’s a model that enables data to move faster than in the more traditional scenario; and in NVIDIA’s equation, DGX and possibly BlueField work as the Edge of the Cloud.

More interestingly, though, is how this strategy could help Life Sciences companies dip their toes into the still unexplored waters of Quantum Computing through cuQuantum … Quantum (qubit) simulation on GPU … for early research and discovery. 

I can’t yet say how well this works in application, but the idea that we could use a simulator to test Quantum Compute code, as well as train people in this discipline, has the potential to be downright disruptive. Talking to those in the Quantum Compute industry, there are anywhere from 10 – 35 people in the world who can code in this manner (today). I see this simulator as a more cost-effective way to explore the technology, and even potentially grow into a development platform for more user-friendly OS-type services for Quantum. 
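For readers curious what "simulating qubits" actually involves, the toy sketch below applies a Hadamard gate to a single-qubit state vector — the same linear algebra that cuQuantum accelerates on GPU at vastly larger scale:

```python
import math

state = [1.0, 0.0]  # single-qubit state vector |0>

# Hadamard gate: maps |0> to an equal superposition of |0> and |1>
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply_gate(gate, state):
    """Matrix-vector product: the core operation of a state-vector simulator."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

superposition = apply_gate(H, state)
probs = [amp * amp for amp in superposition]  # measurement probabilities
```

Real simulators track 2^n amplitudes for n qubits, which is exactly why GPU acceleration matters.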

A Solution for Reducing the Pain of Data Movement

In summary, what NVIDIA is proposing may simplify the path to a more synergistic computing paradigm by enabling teams to remain—or become—Cloud-first without sacrificing speed or performance. 

Further, while the Public Cloud is fantastic, nothing is perfect. The Edge, enabled by innovations like what NVIDIA is introducing, could become a model that aims to offer the upside of On-prem for the niche while reducing the sometimes-maligned task of data movement. 

While only time will tell for sure how well NVIDIA’s tools will solve Scientific Computing challenges such as these, I have a feeling that Jensen and his team—like our most ancient of ancestors who first carved stone into a circle—just may be on to something here. 

RCH Solutions is a global provider of computational science expertise, helping Life Sciences and Healthcare firms of all sizes clear the path to discovery for nearly 30 years. If you’re interested in learning how RCH can support your goals, get in touch with us here. 


Tips for Protecting Your Bio-IT Infrastructure In a Remote Environment

Challenging times or shifting conditions, such as those brought about by the unprecedented global health crisis, highlight the need to ensure your team has the technology, workflows, and processes in place to continue to deliver innovation at a rapid pace, despite the hurdles.

For most regions in the U.S., life has been toppled in different ways following urgent stay-home orders put in place to flatten the curve and reduce the immediate impact of the spreading coronavirus. In ways largely unexpected a few weeks or a month ago, millions of employees now have to work from home and school their children digitally, and hospitals are facing an unprecedented number of patients in need of care.

While working from home is necessary at a time like this, it leaves critical employees away from secure buildings and far from IT teams who can keep their devices or information safe. In our industry of Life Sciences and Healthcare, this is a particularly troubling fact.  

Reasons to Prepare: Common Threats in the New Working Normal

Companies or enterprises, whether involved in crisis response and management or not, should be prepared to counter potential DDoS attacks, large-scale phishing attempts, and even ransomware attacks that may increase as a result of new remote work standards. Hospitals, in particular, are at higher risk, so these enterprises need to go back to the basics, patching systems as soon as possible and not falling into the trap of "we can’t afford that activity or downtime now."

VPN connections, which have become a relatively common way for enterprises to provide their employees with secure connections, still present some risk if not properly deployed. As more and more employees work from home, organizations are struggling to maintain network privacy and handle security issues. Also, because of bandwidth capacity issues, organizations may struggle to provide secure VPN connections for all of their remote employees. And, since not all employees understand how VPNs work, they may engage in activities like streaming videos that drastically tax the bandwidth for all users.

The increased use of online meetings, which has been a critical tool for many companies to enable collaboration among employees, also exposes vulnerabilities, as not all users understand the importance of creating—and attending—only password-protected events. 

Moreover, the IT Operations teams that can typically respond immediately to a security breach—or the threat of one—when in the office are now at risk of being hampered by poor connectivity. Issues that previously took 10 to 15 minutes to resolve—whether a system outage or something more serious, like an ongoing attack—may now take double or even triple the time due to slower connections.  

Best Practices to Protect: Rules to Work By

The good news is that many of the tools that allow for secure remote work already exist, including some that offer VPNs (e.g., Cisco AnyConnect VPN, Zscaler Private Access), two-factor authentication, password managers, secure file transfer, and other security features and tools.

In addition, there are several best practices organizations should work by not only through times of crisis but also year-round for maximum protection and continuity:

1) Secure System Access

All employee logins, not only critical ones, should be protected by strong multi-factor authentication as quickly as possible. Single sign-on (SSO) solutions, such as Okta, can help reduce the number of logins users have to complete to go about their everyday work while protecting your critical data. And for the most sensitive system access, encrypted VPNs should be enforced as a requirement to log in. Additionally, consider a companywide, one-time password reset cycle, prefaced with notice that maximally secure passwords are now critical.
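Under the hood, most multi-factor tokens boil down to the HOTP primitive defined in RFC 4226. The sketch below shows the mechanics only — in production, rely on a vetted provider or library rather than rolling your own:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): the one-time-password primitive behind many MFA tokens."""
    # HMAC-SHA1 over the counter encoded as an 8-byte big-endian integer
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per the RFC
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 yields "755224"
```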

 2) Ensure Redundancy

It is essential to maintain service levels when a data center or service failure occurs. To do so, move traffic to a different zone, region, or geographical area from the affected one, and keep core applications deployed to an N + 1 standard so that, in the event of a failure, there is sufficient capacity for the load to be balanced across the remaining sites or geographical locations. 
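The N + 1 arithmetic is simple enough to sketch: after losing the largest single site, the remaining capacity must still cover peak load. The site capacities and load figures below are made-up numbers for illustration only:

```python
def n_plus_1_ok(site_capacities, peak_load):
    """N + 1 check: after losing any single site, can the rest carry peak load?"""
    worst_case = sum(site_capacities) - max(site_capacities)  # lose the largest site
    return worst_case >= peak_load

# three sites of 40,000 requests/s each against a 75,000 requests/s peak:
# any single failure still leaves 80,000 requests/s of capacity
survives_failure = n_plus_1_ok([40_000, 40_000, 40_000], 75_000)
```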

 3) Safeguard Availability

Ensure critical systems are backed up locally as well as across multiple isolated locations or regions. Each location should be designed and engineered to operate independently and with high reliability. Design systems to be highly resilient and well-architected to provide service availability.

 4) Maintain Detailed Business Continuity Plans

A good plan outlines measures to avoid and lessen environmental disruptions, not just what to do to recover from them, and includes operational details about the steps to take before, during, and after an event. The Business Continuity plan is supported by testing that includes simulations of different scenarios when a service is disrupted. It is important to document people and process performance during and after testing, corrective actions that need to be taken, and lessons learned with the aim of continuous improvement.

 5) Prepare For the Unlikely 

A pandemic, for example, is an important event-type for which all businesses should prepare. The events of the past months remind us why. Mitigation strategies include alternative staffing models to transfer critical processes to out-of-region resources and activation of crisis management plans to support critical business operations. 

Bottom line: No matter what is thrown at you or your team, taking steps to ensure your important work can continue is critical. 

RCH Solutions is a global provider of computational science expertise, helping Life Sciences and Healthcare firms of all sizes clear the path to discovery for nearly 30 years. If you’re interested in learning how RCH can support your goals, get in touch with us here. 

Containerization: The New Standard for Reproducible Scientific Computing

Containers resolve deployment and reproducibility issues in Life Science computing.

Bioinformatics software and scientific computing applications are crucial parts of the Life Science workflow. Researchers increasingly depend on third-party software to generate insights and advance their research goals.

These third-party software applications typically undergo frequent changes and updates. While these updates may improve functionalities, they can also impede scientific progress in other ways.

Research pipelines that rely on computationally intensive methodologies are often not easily reproducible. This is a significant challenge for scientific advancement in the Life Sciences, where replicating experimental results – and the insights gleaned from analyzing those results – is key to scientific progress.

The Reproducibility Problem Explained 

For Life Science researchers, reproducibility falls into four major categories:

Direct Replication is the effort to reproduce a previously observed result using the same experimental conditions and design as an earlier study.

Analytic Replication aims to reproduce scientific findings by subjecting an earlier data set to new analysis.

Systemic Replication attempts to reproduce a published scientific finding under different experimental conditions.

Conceptual Replication evaluates the validity of an experimental phenomenon using a different set of experimental conditions.

Researchers face challenges in some of these categories more than others. Improving training and policy can help make direct and analytic replication more accessible. Systemic and conceptual replication are significantly harder to address effectively.

These challenges are not new. They have been impacting research efficiency for years. In 2016, Nature published a study showing that out of 1,500 life science researchers, more than 70% failed to reproduce another scientist’s experiments.

There are multiple factors responsible for the ongoing “reproducibility crisis” facing the life sciences. One of the most important challenges scientists need to overcome is the inability to easily assemble software tools and their associated libraries into research pipelines.

This problem doesn’t fall neatly into one of the categories above, but it impacts each one of them differently. Computational reproducibility forms the foundation that direct, analytic, systemic, and conceptual replication techniques all rely on.

Challenges to Computational Reproducibility 

Advances in computational technology have enabled scientists to generate large, complex data sets during research. Analyzing and interpreting this data often depends heavily on specific software tools, libraries, and computational workflows.

It is not enough to reproduce a biotech experiment on its own. Researchers must also reproduce the original analysis, using the same computational techniques as the earlier researchers, in the same computing environment. Every step of the research pipeline has to match the original study in order to truly test whether a result is reproducible.

This is where advances in bioinformatic technology present a bottleneck to scientific reproducibility. Researchers cannot always assume they will have access to (or expertise in) the technologies used by the scientists whose work they wish to reproduce. As a result, achieving computational reproducibility turns into a difficult, expensive, and time-consuming experience – if it’s feasible at all.
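Version pinning alone goes partway toward addressing this problem. As a sketch (the tool names and versions below are hypothetical, chosen only for illustration), a dependency manifest such as a Conda environment file records exactly which software an analysis used, though it still leaves OS-level libraries and system configuration uncaptured:

```yaml
# Hypothetical Conda environment spec for an RNA-seq analysis.
# Pinning exact versions lets another lab recreate the same toolchain.
name: rnaseq-analysis
channels:
  - bioconda
  - conda-forge
dependencies:
  - salmon=1.10.1      # transcript quantification
  - samtools=1.17      # BAM/SAM manipulation
  - python=3.10
  - pandas=2.0.3
```

A manifest like this can be shared alongside published results, but because it does not capture the operating system or underlying infrastructure, containers are needed to close the remaining gap.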

How Containerization Enables Reproducibility 

Put simply, a container consists of an entire runtime environment: an application, plus all the dependencies, libraries, binaries, and configuration files needed to run it, bundled into one package. By containerizing the application and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away.
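As a minimal sketch of what this bundling looks like in practice, the Dockerfile below packages a hypothetical analysis script with pinned dependencies into a single image. The base image, library versions, and file names are assumptions for illustration, not a prescribed setup:

```dockerfile
# Hypothetical container for a Life Science analysis pipeline.
FROM ubuntu:22.04

# Install a pinned Python runtime so the OS layer is fixed.
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Pin the exact analysis libraries the original study used.
RUN pip3 install pandas==2.0.3 numpy==1.24.4

# Bundle the analysis script and its configuration into the image.
COPY analyze.py config.yaml /opt/pipeline/

ENTRYPOINT ["python3", "/opt/pipeline/analyze.py"]
```

Because every dependency is baked into the image, another scientist can rerun the analysis with a single `docker run` command on any host; the local operating system and installed libraries no longer matter.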

If a researcher publishes experimental results and provides a containerized copy of the application used to analyze those results, other scientists can immediately reproduce those results with the same data. Likewise, future generations of scientists will be able to do the same regardless of upcoming changes to computing infrastructure.

Containerized experimental analyses enable life scientists to benefit from the work of their peers and contribute their own in a meaningful way. Packaging complex computational methodologies into a single, reproducible container ensures that any scientist can achieve the same results with the same data.

Bringing Containerization to the Life Science Research Workflow

Life Science researchers will only enjoy the true benefits of containerization once the process itself is automatic and straightforward. Biotech and pharmaceutical research organizations cannot expect their researchers to manage software dependencies, isolate analyses away from local computational environments, and virtualize entire scientific processes for portability while also doing cutting-edge scientific research.

Scientists need to be able to focus on the research they do best while resting assured that their discoveries and insights will be recorded in a reproducible way. Choosing the right technology stack for reproducibility is a job for an experienced biotech IT consultant with expertise in developing R&D workflows for the biotech and pharmaceutical industries.

RCH Solutions helps Life Science researchers develop and implement container strategies that enable scalable reproducibility. If you’re interested in exploring how a container strategy can support your lab’s ability to grow, contact our team to learn more.

RCH Solutions is a global provider of computational science expertise, helping Life Sciences and Healthcare firms of all sizes clear the path to discovery for nearly 30 years. If you’re interested in learning how RCH can support your goals, get in touch with us here.