How High-Performance Computing Will Help Scientists Get Ahead of the Next Pandemic

By John Healy

Sr. Scientific Computing Specialist, RCH Solutions

I know we may not be ready to think about this. 

Nearly a year after the first case was reported, we are still deep in the throes of a global pandemic. And although two highly effective vaccines, not to mention a range of promising treatments, were developed and approved in record time, distribution of those vaccines is still in its early phase and we are far from in the clear. With various new strains and mutations showing up as well, we are now tracking several targets instead of one.

Thinking about the next deadly virus lurking deep in the jungle—what it may be and, worse, what it may cause—is not only terrifying, it’s also tiring.

Nevertheless, we must. It’s not a matter of if another novel virus with deadly potential will be discovered; it’s a matter of when.

Fortunately, we’re not alone in this fight. 

Research scientists have the superhuman power of supercomputers to help their teams get ahead of the next outbreak and gain an edge, not only for their company’s profit but for our collective survival.

Here’s why. 

The discipline of high-performance computing (HPC) focuses on matching resource-intensive computing processes with the technical architecture best suited to the task, enabling research capabilities that would not otherwise be possible by humans alone.

Emerging technologies that rely on artificial intelligence (AI), machine learning (ML), and deep learning (DL) algorithms, like many of those that were invaluable to R&D teams in their quest to understand and overcome COVID-19, often require immense computing power to produce results. Bio-pharmaceutical research tasks can easily require millions or billions of parameters to generate useful results, far beyond the processing capabilities of most on-premises computing systems (and those tasks can be too few and far between to justify owning a large server farm to run them).
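To put that scale in perspective, here is a quick back-of-envelope sketch. The figures are illustrative assumptions, not benchmarks, but they show why a single workstation runs out of headroom fast.

```python
# Back-of-envelope estimate with illustrative assumptions (not a benchmark):
# the memory needed just to hold and train a one-billion-parameter model.
params = 1_000_000_000        # assumed model size in parameters
bytes_per_param = 4           # 32-bit floating point
optimizer_overhead = 3        # rough multiplier for gradients and optimizer state

weights_gb = params * bytes_per_param / 1e9
training_gb = weights_gb * (1 + optimizer_overhead)

print(f"Weights alone:      ~{weights_gb:.0f} GB")   # ~4 GB
print(f"Training footprint: ~{training_gb:.0f} GB")  # ~16 GB, before activations or data
```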

High-Performance Computing Improves Model Rendering and Predictive Analysis 

Today, HPC is being used effectively through all phases of the drug-development life cycle. 

One way, in particular, is through model rendering: scientists are using high-performance computing to build models of biological and chemical structures, invent candidate compounds, and analyze whether those compounds will treat symptoms or cure a disease before they ever reach a patient. This ability to simulate trial conditions programmatically increases the predictability of drug success and produces better trial outcomes, ultimately accelerating the speed at which a drug can be brought to market. And help the people who are sick.

But as useful as model rendering is for attacking diseases we know about, it also has the potential to provide insight into diseases and mutations we don’t yet know about, from docking ultra-large compound libraries to discover new chemotypes, to supporting the many scientific tools that rely on HPC to handle their computations and improve their performance.
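To see why ultra-large library docking maps so naturally onto HPC, consider the minimal sketch below. The score_ligand function is a hypothetical stand-in for a real docking engine, but the pattern, scoring each compound independently and in parallel, is exactly what scales from a handful of local cores to thousands in a cluster or cloud.

```python
from multiprocessing import Pool

def score_ligand(smiles: str) -> float:
    """Hypothetical stand-in for a real docking engine's scoring call."""
    # A real implementation would dock the compound and return a binding score.
    return float(len(smiles))

if __name__ == "__main__":
    # A toy compound library in SMILES notation; real libraries hold billions of entries.
    library = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]
    with Pool() as pool:
        # Each compound is scored independently, so the work spreads cleanly across cores.
        scores = pool.map(score_ligand, library)
    best_score, best_compound = min(zip(scores, library))  # lower scores are usually better
    print(f"Best-scoring compound: {best_compound} ({best_score})")
```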

But even labs that already have access to supercomputing hardware may find themselves disadvantaged when running particular processes. 

Luckily, hardware optimization and cloud integration can eliminate processing bottlenecks and provide the flexibility to handle constantly changing demands in task size and frequency, a challenge common in laboratories with frequent large-scale computing needs.

Some computer-aided drug design tasks perform best on high-speed, single-threaded CPU architectures. Others are better served by massively parallel graphics processing units (GPUs) that run many thousands of threads at once. Each computing problem has its own optimal architecture.
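As a simplified illustration, the sketch below, which assumes a PyTorch-style workflow that this article doesn’t prescribe, sends a large, data-parallel operation to a GPU when one is available and falls back to the CPU otherwise; a branch-heavy, latency-sensitive step would instead favor a fast single CPU thread.

```python
import torch

# Match the workload to the architecture: a large, data-parallel operation like a
# batched matrix multiply benefits from a GPU's thousands of concurrent threads.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

a = torch.randn(64, 512, 512, device=device)
b = torch.randn(64, 512, 512, device=device)
c = torch.bmm(a, b)  # runs massively in parallel on a GPU; the same call works, more slowly, on a CPU

print(f"Batched matrix multiply ran on {device}; result shape {tuple(c.shape)}")
```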

Optimizing Your HPC Environment for the Unknown

A research lab with limited supercomputing access (and limited human resources) will not be able to optimize its computing resources to solve every problem effectively. Some processes will run quickly while others suffer significant inefficiencies, whether because of insufficient computing power or insufficient access to it.

When it comes to the large-scale computing resources that emerging technologies demand—technologies essential to preventing, or at least minimizing, the destruction of the next outbreak—these inefficiencies can become roadblocks to progress. In fact, it’s not unusual for ultra-large library docking processes to require tens of thousands of core hours to complete. Having access to cloud systems with thousands of optimized cores that can expand with your needs could mean the difference between waiting weeks or hours for a result. And when lives are at stake, weeks become an eternity.
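The arithmetic behind that claim is easy to sketch; the numbers below are assumptions chosen for illustration, not measurements from any particular docking campaign.

```python
# Illustrative arithmetic only: how wall-clock time scales with available cores.
core_hours = 50_000      # assumed size of an ultra-large library docking campaign

on_prem_cores = 64       # a well-equipped local server
cloud_cores = 10_000     # an elastically provisioned cloud cluster

print(f"On {on_prem_cores:>6} cores: ~{core_hours / on_prem_cores / 24:.0f} days")   # roughly a month
print(f"On {cloud_cores:>6} cores: ~{core_hours / cloud_cores:.1f} hours")           # an afternoon
```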

If you’re looking to better optimize your HPC environment, learn how RCH Solutions has the expertise and experience to help.
