Optimize Your Life Science Startup for Success From the Start: Develop a Solid Compute Model

Building an effective computing environment early on helps ensure positive research outcomes later. 

Now more than ever, Life Science research and development is driven by technology innovation.

That does not mean human ingenuity has become any less important. It simply depends on accurate, well-structured data more than ever before. The way Life Science researchers capture, store, and communicate that data is crucial to their success.

This is one of the reasons why Life Science professionals are leaving major pharmaceutical firms and starting their own new ventures. Startups have the unique opportunity to optimize their infrastructure from the very start and avoid being hamstrung by technology and governance limitations the way many large enterprises often are.

Optimized tech and data management models offer a significant competitive advantage in the Life Science and biopharma industries. For example, implementing AI-based or other predictive software and lean workflows makes it easier for scientists to synthesize data and track toward positive results or, equally important, quickly see the need to pivot their strategy to pursue a more viable solution. The net effect is a reduction in the time and cost of discovery, which not only gives R&D teams a competitive upper hand but improves outcomes for patients.

Time is Money in R&D

Life Science research and development is a time-consuming, resource-intensive process that does not always yield the results scientists or stakeholders would like. But startup executives who optimize discovery processes using state-of-the-art technology early on are able to mitigate two significant risks:

  • Fixing Broken Environments. Building and deploying optimized computing infrastructure is far easier and less expensive than repairing a less-than-ideal computing environment once you hit an obstacle. 
  • Losing Research Data Value. Suboptimal infrastructure makes it difficult to fully leverage data to achieve research goals. This means spending more time manually handling data and less time performing critical analysis. In a worst-case scenario, a bad infrastructure can lead to even good data being lost or mismanaged, rendering it useless. 

Startups that get the experience and expertise they need early on can address these risks and deploy a solid compute model that will generate long-lasting value.  

5 Areas of Your Computing Model to Optimize for Maximum Success

There are five foundational areas startup researchers should focus on when considering and developing their compute model:

1. Technology

Research teams need to consider how different technologies interact with one another and what kinds of integrations they support. They should identify the skillset each technology demands of its users and, if necessary, seek objective guidance from a third-party consultant when choosing between technology vendors.

2. Operating Systems

Embedded systems require dependable operating systems, especially in Life Sciences. Not only must operating systems support every tool in the researchers’ tech stack; individual researchers must also be well-acquainted with the way those systems work. Researchers need resource management solutions that share information between stakeholders easily and securely.

3. Applications and Software

Most Life Science organizations use a variety of on-prem, Cloud-enabled, open-source and even home-grown applications procured on short-term contracts. This offers flexibility, but organizations cannot easily coordinate between software and applications with different implementation and support requirements. Since these tools come from different sources and have varying levels of post-sale documentation and support, scientists often have to take up the heavy burden of harmonizing their tech stack on their own.

4. Workflows

Researchers have access to more scientific instruments than ever before. Manufacturers routinely provide assistance and support implementing these systems, but that is not always enough. Startups need expert guidance establishing workflows that utilize technological and human resources optimally. 

But building and optimizing scientific workflows is not a one-size-fits-all endeavor; teams with multiple research goals may need separate workflows optimized differently to accommodate each specific research goal.

5. Best Practices

Optimizing a set of research processes to deliver predictable results is not possible without a stable compute environment in place. For Life Science research organizations to develop a robust set of best practices, they must first implement the scientific computing model that makes those practices possible. This takes expert guidance and implementation from professionals who specialize in the IT considerations unique to research and development environments, expertise that lean startups often simply don't have on the team.

Maximize the Impact of Research Initiatives 

Emerging Life Science and biotech research companies have to empower their scientific teams to make the most of the tools now available. But architecting, building, and implementing a robust and effective compute model requires experience and expertise in the very specific area of IT unique to research and discovery. If the team lacks such a resource, scientists will often jump into the role to solve IT problems themselves, pulling them away from the core value of their expertise.

The right bio-IT partner can be instrumental in helping organizations design, develop, and implement their computing environment, enabling scientists to remain focused on science and helping to position the organization for long-term success.   

RCH Solutions is a global provider of computational science expertise, helping Life Sciences and Healthcare firms of all sizes clear the path to discovery for nearly 30 years. If you’re interested in learning how RCH can support your goals, get in touch with us here. 

Innovate and Reduce Risk

I find it interesting that when I’m speaking with a prospective customer or a prospective employee, I often hear many of the same questions. 

How long have you been in business?

What is your growth plan?

How do you make a decision on a project that may be outside your comfort zone?

How do you compare yourself to a “Big Professional Services Company”?

Often, the context changes but the interest and intent behind the questions remain the same. What they really want to know is:

Are you stable and reliable?

Will you be able to scale with us if we give you this project?

What if the project fails?

We don’t know much about you—why should I choose you?

And I get it.  These are all legitimate and fair questions, especially of a company or solution that is “unknown” (which is not to be confused with unproven).  

At the heart of all of these questions is a common theme: minimizing risk. On the heels of a tumultuous year, a tendency toward risk aversion helps us feel more in control. It helps us feel safe.

But in the business of science—a business built on boundary-breaking innovation—there is an equal, if not greater, amount of risk that can be associated with mistaking static for safe. 

While this tendency is not necessarily wrong (so much of the business world, in general, is about minimizing risk, as there is much to lose), the business of Life Sciences is fundamentally at odds with this philosophy. 

Research and Development teams are constantly pushing boundaries. In their quest to find the next cure, they often fail, and when they do, it’s important they “fail fast”. But a model customized to meet these unique needs of the R&D business is costly, challenging, and risky. More directly, it’s in conflict with the enterprise IT model chartered to provide services that work for all. 

But in this attempt to minimize risk, do you not stifle innovation?

I think of the questions two executives may ask when considering an investment in more training for employees. 

Executive 1: What if we pay for this and they leave?

Executive 2: What if we don’t, and they stay…?

Perhaps innovation is risky, but the risk of doing nothing is just as great.

Reap the Reward of Innovation Without Out-sizing Your Risk 

Now, when it comes to support services for R&D within Life Sciences, I’m not suggesting that you abandon your plan. I’m not suggesting that you allow the business to completely dictate and own the new scientific computing model. Although one could make a strong argument, based on proven examples, that doing so would serve to both innovate and reduce risk.

I’m suggesting a sort of compromise. The compromise involves identifying which projects, processes, and/or workloads would be better managed by other service providers. 

You see, too often, scientific computing services fall under the consulting and support umbrella of one of those ‘Big Professional Services Companies’. And why? 

Not because they are less expensive. (They usually cost more.) 

Not because they execute better than others. (When is the last time you heard someone sing their praises?) 

Not because their processes are better. (No, they often layer on another process and throw more resources at the problem, which only serves to increase costs, stifle innovation, and decrease satisfaction.) 

So why?  Why keep giving projects to generalists when specialists are what is needed?  

The only answer is to reduce risk.  

However, the reality is that choosing the big name in the space just because it seems “safer” can keep the company from moving forward. And—in a process that can almost become self-fulfilling—these players often become little more than a scapegoat when something fails. 

So I would challenge you: isn’t the risk of doing the same thing over and over with the same very average vendor greater than the risk of doing something different?

If you’re interested in finding out how you can reduce risk through innovation, I offer you this three-step process to get started: 

Step 1 – Identify those areas and projects that need special attention and/or knowledge and skills, perhaps those in R&D.

Step 2 – Find a partner with a proven track record of success addressing those special needs through its own advanced knowledge and skills (for example, understanding the important difference between IT and Bio-IT). 

Step 3 – Leave the other projects to the generalist vendor.

This compromise supports innovation and reduces risk at the same time. Besides, you might also find that you are able to reduce costs, complete projects faster and more efficiently, and gain that much-needed internal customer satisfaction. 

Or, perhaps the risk is that you may find a better partner.  

RCH Solutions is a global provider of computational science expertise, helping Life Sciences and Healthcare firms of all sizes clear the path to discovery for nearly 30 years. If you’re interested in learning how RCH can support your goals, get in touch with us here. 

How High-Performance Computing Will Help Scientists Get Ahead of the Next Pandemic

I know we may not be ready to think about this. 

Nearly a year after the first case was reported, we are still deep in the throes of a global pandemic. And although we saw two highly effective vaccines, not to mention a range of promising treatments, developed and approved in record time, distribution of those vaccines is still in its early phase and we are far from in the clear. With various new strains and mutations showing up as well, we are now tracking toward several targets instead of one. 

Thinking about the next deadly virus lurking deep in the jungle—what it may be and worse, what it may cause—is not only terrifying, it’s also tiring. 

Nevertheless, we must. It’s not a matter of if another novel virus with deadly potential will be discovered; it’s when.

Fortunately, we’re not alone in this fight. 

Research scientists have the super-human power of supercomputers to help their teams get ahead of the next outbreak and gain an edge not only for their company’s profit but also for their own survival. 

Here’s why. 

The discipline of high-performance computing (HPC) focuses on matching resource-intensive computing processes with the technical architecture best suited for the task, enabling research capabilities not otherwise possible by humans alone. 

Emerging technologies that rely on AI, ML, and DL algorithms, like many of those that were invaluable to R&D teams in their quest to understand and overcome COVID-19, often require immense computing power to produce results. Bio-pharmaceutical research tasks can easily require millions or billions of parameters to generate useful results, which is far beyond the processing capabilities of most on-premises computing systems (and such tasks are often too infrequent to justify owning a large server farm to run them).

High-Performance Computing Improves Model Rendering and Predictive Analysis

Today, HPC is being used effectively through all phases of the drug-development life cycle. 

One way, in particular, is through model rendering: scientists are using high-performance computing to build models of biological and chemical structures for analysis, invent compounds, and assess whether those compounds will treat symptoms or cure a disease before ever reaching a patient. This ability to simulate trial conditions programmatically increases the predictability of drug success and produces better trial outcomes, ultimately accelerating the speed at which a drug can be brought to market and helping people who are sick. 

But, as useful as model rendering is for attacking diseases we know about, it also has the potential to provide insight into diseases and disease mutations we don’t yet know about, from docking ultra-large compound libraries to discover new chemotypes to supporting the many scientific tools that depend on HPC to perform well.

But even labs that already have access to supercomputing hardware may find themselves disadvantaged when running particular processes. 

Luckily, hardware optimization and Cloud integration can eliminate processing bottlenecks and provide the flexibility to handle constantly changing demands in task size and frequency, a common challenge in laboratories with frequent large-scale computing needs.

Some computer-aided drug design tasks perform best on high-speed, single-threaded CPU architectures. Others are better served by massively parallel graphics processing units (GPUs) with many-thread architectures. Each computing problem has its own optimal architecture.

Optimizing Your HPC Environment for the Unknown

A research lab with limited supercomputing access (and limited human resources) will not be able to optimize its computing resources to solve every computing problem effectively. Some processes will run quickly while others suffer from significant inefficiencies, whether as a result of insufficient computing power or insufficient access to it. 

When it comes to the large-scale computing resources that emerging technologies demand—technologies essential to preventing or at least minimizing the destruction of the next outbreak—these inefficiencies can become roadblocks to progress. In fact, it’s not unusual for ultra-large library docking processes to require tens of thousands of core hours to complete. Having access to Cloud systems with thousands of optimized cores that can expand with your needs could mean the difference between waiting weeks or hours for a result. And when lives are at stake, weeks become an eternity. 
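To put that in perspective, here is a back-of-the-envelope sketch of idealized turnaround time. The figures (a 50,000 core-hour docking campaign, a 64-core on-prem server, a 5,000-core Cloud allocation) are illustrative assumptions, not measurements, and real jobs rarely scale perfectly:

```python
def turnaround_hours(core_hours_needed: float, cores_available: int) -> float:
    """Idealized wall-clock time, assuming perfect parallel scaling."""
    return core_hours_needed / cores_available

# Hypothetical docking campaign requiring 50,000 core-hours:
on_prem = turnaround_hours(50_000, 64)     # modest on-prem server
cloud = turnaround_hours(50_000, 5_000)    # elastic Cloud allocation

print(f"On-prem: {on_prem:.0f} hours (~{on_prem / 24:.0f} days)")
print(f"Cloud:   {cloud:.0f} hours")
```

Even under these simplified assumptions, the same workload that ties up an on-prem machine for roughly a month finishes in a Cloud environment in well under a day.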

RCH Solutions is a global provider of computational science expertise, helping Life Sciences and Healthcare firms of all sizes clear the path to discovery for nearly 30 years. If you’re interested in learning how RCH can support your goals, get in touch with us here.