Pär Persson Mattsson | February 20, 2015
We have previously written about HPC with the COMSOL Multiphysics® software, clusters, and hybrid computing. But not all of us have a cluster available in the office (or the hardware to build a Beowulf cluster). So what possibilities do we have if we really need that extra compute power that a cluster can give us? One solution is to utilize cloud computing, a service that provides compute power on a temporary basis, to give our computations and productivity a boost.
Pär Persson Mattsson | November 13, 2014
The first computer I used was a real performance beast. Equipped with Intel’s 486 clocking in at 66 MHz, this machine was ready to take on whatever challenges the future would bring us. Or so I thought. The CPU clock speeds increased and soon passed 500 MHz, 1 GHz, and continued upwards. Around 2005, the top speed of the high-end processors settled around 4 GHz and hasn’t increased much since then. Why is that? I’ll explain.
Pär Persson Mattsson | March 20, 2014
One thing we haven’t talked much about so far in the Hybrid Modeling blog series is what speedup we can expect when adding more resources to our computations. Today, we consider some theoretical investigations that explain the limitations in parallel computing. We will also show you how to use the COMSOL software’s Batch Sweeps option, which is a built-in, embarrassingly parallel functionality for improving performance when you reach these limits.
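The limitation that teaser alludes to is classically expressed by Amdahl's law: if only a fraction of the work can be parallelized, speedup saturates no matter how many processors you add. A minimal Python sketch (the function name and the example 90% parallel fraction are illustrative assumptions, not figures from the post):

```python
def amdahl_speedup(p, n):
    """Speedup predicted by Amdahl's law for a workload whose
    parallelizable fraction is p, run on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# With 90% of the work parallelizable, speedup can never exceed
# 1 / (1 - 0.9) = 10x, however many processors we add:
for n in (1, 4, 16, 64, 1024):
    print(n, round(amdahl_speedup(0.9, n), 2))
```

This saturation is exactly why an embarrassingly parallel option such as a batch sweep, where independent runs need no communication at all, can keep scaling after a single solve stops benefiting from more cores.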
Pär Persson Mattsson | February 6, 2014
A couple of weeks ago, we published the first blog post in a Hybrid Modeling series, about hybrid parallel computing and how it helps COMSOL Multiphysics model faster. Today, we are going to briefly discuss one of the building blocks that make up the hybrid version, namely shared memory computing. Before that, we need to consider what it means for an application to "run in parallel". You will also learn when and how to use shared memory with COMSOL.
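The shared memory pattern the teaser describes can be modeled in a few lines: several workers operate on the same data structure, each handling its own index range. A hypothetical Python sketch of the programming model (COMSOL itself uses native threads; in CPython the GIL limits true CPU parallelism for pure-Python work, so this only illustrates the pattern, not the performance):

```python
import threading

def square_slice(data, start, stop):
    # Each worker squares its own slice of the shared list in place.
    # All workers see the same memory; only the index range differs.
    for i in range(start, stop):
        data[i] = data[i] ** 2

data = list(range(8))
n_threads = 2
chunk = len(data) // n_threads
threads = [
    threading.Thread(target=square_slice, args=(data, i * chunk, (i + 1) * chunk))
    for i in range(n_threads)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(data)  # every element squared, with no data copied between workers
```

Because the workers never touch overlapping indices, no locking is needed here; overlapping writes would require synchronization, which is one of the central concerns of shared memory computing.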
Pär Persson Mattsson | November 14, 2013
A little while ago we wrote about being Intel® Cluster Ready (ICR) certified and the advantages for companies in need of high performance computing (HPC). The conclusion of that blog post was that you can install and run COMSOL Multiphysics on any ICR cluster without additional preliminary work. There is a solution even simpler than that, thanks to a joint project between COMSOL Multiphysics GmbH and Fujitsu in Germany, called Ready-To-Go+ (RTG+). RTG+ is a complete solution, bringing you an […]
Pär Persson Mattsson | September 5, 2013
In June it was time for the International Supercomputing Conference (ISC) in Leipzig, Germany. Many of the largest hardware and software vendors for High Performance Computing (HPC) were there to present new equipment, and the latest TOP500 list, a list of the fastest supercomputers, was presented as well. In a world where we continuously demand faster computations and higher resolution in our models, it’s not only necessary to have better hardware; we also need software that can handle the ever-growing […]
Pär Persson Mattsson | April 11, 2014
Many of us need up-to-date software and hardware in order to work efficiently. Therefore, we need to follow the pace of technological development. But, what should we do with the outdated hardware? It feels wasteful to send the old hardware to its grave or to just put it in a corner. Another, more productive, solution is to use the old hardware to build a Beowulf cluster and use it to speed up computations.
Pär Persson Mattsson | February 20, 2014
In the latest post in this Hybrid Modeling blog series, we discussed the basic principles behind shared memory computing — what it is, why we use it, and how the COMSOL software uses it in its computations. Today, we are going to discuss the other building block of hybrid parallel computing: distributed memory computing.
Pär Persson Mattsson | January 23, 2014
Twenty years ago, the TOP500 list was dominated by vector processing supercomputers equipped with up to a thousand processing units. Later on, these machines were replaced by clusters for massively parallel computing, which soon dominated the list and gave rise to distributed computing. The first clusters used a dedicated single-core processor per compute node, but soon additional processors were placed on each node, requiring them to share memory. The capabilities of these shared-memory parallel machines heralded a sea change towards multicore […]
Pär Persson Mattsson | October 3, 2013
The future of high performance computing (HPC) is in clusters and parallel computing. The last single-processor computers on the TOP500 list disappeared in 1997 — more than 15 years ago. Clusters allow us to compute larger and more detailed models faster than ever before, but taking the step into the world of HPC can be a challenge. A lot of time, money, and research must be invested when building a cluster from scratch. What kind of hardware should the […]