"We are not the best, but we try to be better."

What is going on here?

Nowadays, "Computer Modeling" of complex phenomena is a respected alternative to expensive experiments in science and industry. It fills the gap between "novel ideas" and their "implementation", reducing the cost of the practical realization of those ideas.

When modeling and computing, there are several methods to improve performance:
- Use an appropriate algorithm to speed up the calculation,
- Use a faster CPU to find solutions more quickly,
- Divide the computing among multiple nodes in a High Performance Computing (HPC) Cluster.

The first and second methods are old approaches and common practice in research activities. But what about the third one? The third method is "Parallelism", i.e., executing instructions simultaneously. It means that you may be able to divide the computation up among different nodes on a network, each node working on a part of the calculation, all working at the same time [1]. This gives scientists and engineers an opportunity to investigate more complex problems in science and industry, such as the following (a short code sketch of dividing one calculation among many processes is given after these examples):

Simulation of Protein Folding: Protein molecules are long, flexible chains that can take on a virtually infinite number of 3D shapes. In nature, when put in a solvent, they quickly "fold" into their native states. Incorrect folding is believed to lead to a variety of diseases like Alzheimer's; therefore, the study of protein folding is of fundamental importance. One way scientists try to understand protein folding is by simulating it on computers. In nature, folding occurs quickly (in about one millionth of a second), but it is so complex that its simulation on a regular computer could take decades. This is only one focus area among many such areas, but it needs serious computational power [2],

Neuroscience: Computational modeling and simulation of physical and biological processes are providing increasingly powerful tools for neuroscience research. The implementation of models with sufficient resolution, such as for head tissue conductivity, requires high performance computing. Many problems that are inherently uncertain, such as estimating the neural sources of brain electrical activity, can be addressed with confidence through large scale statistical simulations. These simulations are now possible with increasingly affordable HPC Clusters [3],

Genomics: High Performance Computing is accelerating several projects in genetic research, one of them being the famous Human Genome Project. The Human Genome Project's main goal is to identify each and every gene in human DNA. The project has to deal with over 30,000 genes and 3 billion chemical base pairs. HPC is therefore vital for processing, sequencing and storing the genome information [4],

Tracking Jet Engines: Fighter jets fly a wide variety of missions, and the level of wear on individual engine parts depends upon the types of missions flown. To more accurately predict an engine part’s life consumption, designers at Volvo Aero began collecting data — including time, speed, temperature, pressure and other engine part conditions — about 11 years ago to determine how engine part wear relates to mission conditions. By combining this information with data from current missions and analyzing it using ANSYS structural mechanics software, some proprietary in-house and commercial tools, and a cluster of computers, Volvo Aero is now able to accurately predict when each part in a particular jet engine needs to be replaced or serviced. Using this system, service technicians at external customer organizations can save time, reduce costs and improve safety by treating each engine based on its own unique history [5],

High Energy Physics: Quantum Chromodynamics (QCD) is the theory which describes the strong nuclear force that binds elementary particles called quarks and gluons together to make hadrons such as protons and neutrons. This force has an unusual property: it is very weak when the quarks are close together, grows stronger as the distance between the quarks grows, and then remains constant even as the quarks are moved still further apart. This is called asymptotic freedom. The idea that QCD describes the strong force was experimentally confirmed many years ago but, perhaps surprisingly, no free single quarks have ever been observed. This is called confinement, and it is believed to arise from complex dynamics caused by the strength of the forces generated by QCD. A quantitative understanding of this mechanism is one of the outstanding challenges in modern physics. Asymptotic freedom means that the usual theoretical tools will not work for QCD, and a more robust approach is needed. Lattice QCD describes QCD in terms of its fundamental particles, the quarks and gluons, while also making predictions of hadron properties. The formulation of QCD on a discrete space-time lattice makes it amenable to large-scale numerical simulations, similar to the Monte Carlo simulations applied in condensed matter physics (a toy sketch of such a lattice Monte Carlo update is given after the concluding paragraph below). It is these predictions that are vital to understanding the complete picture of high-energy physics and also to understanding new physics discovered at experiments like the Large Hadron Collider (LHC) at CERN. In lattice QCD, space and time are discretised on a four-dimensional grid and hadronic physics is determined by numerical simulation. This is one of the Grand Challenge projects, requiring capability computing on a vast scale [6,7],

Weather Forecasts: Not only the general public but also more and more professionals working in various fields are asking for weather forecasts that are more detailed in space and time. A new aspect is the increasing demand for forecast weather data by follow-up models, for example in the fields of hydrology (for forecasting the water level of large rivers), air pollution (for computing trajectories and pollutant dispersion), civil aviation (for landing management), merchant navy (estuary and harbor management) and avalanche forecasting (for warnings). These requirements can only be satisfied with sophisticated, very high resolution meso-scale numerical weather prediction models. But operational constraints require that the model results be available rapidly; otherwise they are useless. This last requirement can only be met if such models are run on very powerful computers. This is the reason why today, next to Global Climate Modeling (GCM) for climate research, very high resolution regional models, which must be nonhydrostatic, are also a chapter of HPC [8,9].
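
To make the idea of dividing one calculation among many processes concrete, here is a minimal sketch in C using MPI (the message-passing library named in [1]). It is only an illustration, not code from any of the projects above: each process integrates its own share of 4/(1+x^2) over [0,1], and the partial sums are combined into an estimate of pi on one process. The file name, the number of intervals and the process count are arbitrary choices.

    /* pi_mpi.c - a toy example of splitting one calculation across processes.
     * Compile:  mpicc pi_mpi.c -o pi_mpi
     * Run:      mpirun -np 4 ./pi_mpi      (any process count works)
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        const long n = 100000000;              /* number of integration intervals */
        double h, local_sum = 0.0, pi = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I?          */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes in total? */

        h = 1.0 / (double)n;
        /* Each rank handles every size-th interval: i = rank, rank+size, ... */
        for (long i = rank; i < n; i += size) {
            double x = h * ((double)i + 0.5);  /* midpoint of interval i */
            local_sum += 4.0 / (1.0 + x * x);
        }
        local_sum *= h;

        /* Combine all partial sums on rank 0 */
        MPI_Reduce(&local_sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("pi is approximately %.12f using %d processes\n", pi, size);

        MPI_Finalize();
        return 0;
    }

The same source runs unchanged on 1, 4 or 400 processes; only each rank's share of the loop changes, which is exactly the "each node working on a part of the calculation" idea described above.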

As is obvious from the foregoing, HPC Clusters are an important component of scientific and industrial challenges, since they may provide the advantage of being first to publish the final results, or may determine who is first to obtain the patent.
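
The lattice QCD example mentions Monte Carlo simulation on a discrete lattice, similar to methods used in condensed matter physics. As a deliberately tiny, serial illustration of that style of computation (a two-dimensional Ising spin model rather than four-dimensional QCD), the sketch below performs Metropolis Monte Carlo updates on a small grid; the lattice size, inverse temperature and number of sweeps are arbitrary choices made for this example.

    /* ising_toy.c - Metropolis Monte Carlo on a small 2D Ising lattice,
     * a toy stand-in for "physics on a discrete lattice".
     * Compile:  cc ising_toy.c -o ising_toy -lm
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define L 32                         /* the lattice is L x L sites */

    int main(void)
    {
        int spin[L][L];
        const double beta = 0.44;        /* inverse temperature, coupling J = 1 */
        const int sweeps = 1000;

        /* start from all spins up */
        for (int i = 0; i < L; i++)
            for (int j = 0; j < L; j++)
                spin[i][j] = 1;

        for (int s = 0; s < sweeps; s++) {
            for (int k = 0; k < L * L; k++) {
                int i = rand() % L, j = rand() % L;
                /* sum of the four nearest neighbours (periodic boundaries) */
                int nn = spin[(i + 1) % L][j] + spin[(i + L - 1) % L][j]
                       + spin[i][(j + 1) % L] + spin[i][(j + L - 1) % L];
                int dE = 2 * spin[i][j] * nn;  /* energy change if this spin flips */
                /* Metropolis test: accept downhill moves, sometimes uphill ones */
                if (dE <= 0 || (double)rand() / RAND_MAX < exp(-beta * dE))
                    spin[i][j] = -spin[i][j];
            }
        }

        /* measure the magnetisation per site after the sweeps */
        long m = 0;
        for (int i = 0; i < L; i++)
            for (int j = 0; j < L; j++)
                m += spin[i][j];
        printf("magnetisation per site: %.4f\n", (double)m / (L * L));
        return 0;
    }

Real lattice QCD simulations follow the same update-and-measure pattern, but on four-dimensional lattices with vastly more complicated degrees of freedom, which is why they need the capability computing described in [6,7].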

References:

  1. J.D. Sloan, High Performance Linux Clusters with OSCAR, Rocks, OpenMosix and MPI, O'Reilly Media, USA, 2005, 360p.
  2. A. Narayan, High-Performance Linux Clustering, IBM Technical Library, September 27, 2005, USA.
  3. Neuroinformatics Section, Electrical Geodesics, Inc. (EGI), Oregon, USA.
  4. M. Shakya, High Performance Computing for Genetic Research, the eXtreme Computing Research, 2009, Louisiana Tech. University, USA.
  5. M. Andersson, Tracking Jet Engines, Ansys Advantage, 5 (2011) 9.
  6. S. Ryan, High Energy Physics meets High Performance Computing, Trinity College Dublin, Ireland.
  7. High Performance Computing (HPC) Cluster "Wilson", Institute of Nuclear Physics, Gutenberg University, Mainz, Germany.
  8. J. Quiby, D. Maric, High Resolution, Nonhydrostatic Modelling and HPC Requirements, The 3rd International Workshop on Next Generation Climate Models for Advanced High Performance Computing Facilities, March 2001, Japan.
  9. B. Handwerk, Faster Supercomputers Aiding Weather Forecasts, National Geographic News, August 29, 2005, USA.

Last Modification: September 2017
