Virtualizing HPC

I recently attended an interesting meeting at King’s College London to discuss the benefits of virtualizing high performance compute (HPC) workloads. The meeting was arranged by Caroline Johnston of the Biomedical Research Centre for Mental Health and involved VMware and the South London and Maudsley NHS Foundation Trust (SLAM). Parts of the meeting were under an NDA, so I won’t touch on them here, but a number of issues were discussed that were of interest in the context of our own thinking about HPC in our VMware vCloud Director based cloud environment.

Many people believe that the further you abstract applications from the underlying hardware, the more performance you lose. However, this isn’t always the case. In some areas of HPC, virtualization can actually boost performance by allowing better coupling of CPU to memory and by helping applications that do not scale well vertically but do scale horizontally.

Having piqued my interest, I started to do a bit of research into HPC to understand where virtualization may be able to play a part and where the benefits lie. (What follows requires a reasonable knowledge of VMware technologies.)

It seems to me that most types of HPC (yes, there will always be cases outside the norm) can be lumped into two broad categories: embarrassingly parallel programming and Message Passing Interface (MPI) programming. Technically, the term HPC tends to be associated with MPI, and embarrassingly parallel workloads with High Throughput Computing (HTC), but both are often grouped under the banner of HPC.
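
To make the distinction concrete, here is a minimal Python sketch of the two styles; the workloads themselves are invented, and the MPI half assumes mpi4py and an MPI runtime are available. In the embarrassingly parallel case each task is completely independent, so it can be farmed out to any node with no communication:

    # Embarrassingly parallel / HTC style: independent tasks, no communication.
    from multiprocessing import Pool

    def score_sample(sample_id):
        # stand-in for an independent unit of work (e.g. analysing one sample)
        return sample_id * sample_id

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            results = pool.map(score_sample, range(100))
        print(sum(results))

By contrast, an MPI job’s ranks cooperate on a single problem and exchange data as they go, so the network sits on the critical path (run as a separate script with something like mpirun -np 4 python mpi_sum.py):

    # MPI style: ranks exchange data, so inter-node communication matters.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    local_value = rank * rank                        # each rank's partial result
    total = comm.allreduce(local_value, op=MPI.SUM)  # collective communication
    if rank == 0:
        print("sum of squares across ranks:", total)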

MPI has certain technical requirements that make it less well suited to virtualization. For example, it requires low-latency links for memory access between nodes, which typically relies on protocols like RDMA (not currently virtualized), meaning that VMware DirectPath would be required. Unfortunately, using DirectPath removes many of the benefits of virtualizing workloads in the first place, such as vMotion and snapshots. Interestingly, some work is being done to virtualize RDMA, which could change this situation.
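
A rough ping-pong micro-benchmark shows why that latency matters: the loop below bounces a tiny message between two ranks and reports the approximate one-way latency, which is exactly the figure an interconnect (or any virtualization layer sitting in the path) affects. Again this is only a sketch assuming mpi4py and an MPI runtime; run it with something like mpirun -np 2 python pingpong.py.

    from mpi4py import MPI
    import time

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    payload = bytearray(8)    # tiny message, so we measure latency not bandwidth
    iterations = 10000

    comm.Barrier()
    start = time.perf_counter()
    for _ in range(iterations):
        if rank == 0:
            comm.Send(payload, dest=1)
            comm.Recv(payload, source=1)
        elif rank == 1:
            comm.Recv(payload, source=0)
            comm.Send(payload, dest=0)
    elapsed = time.perf_counter() - start

    if rank == 0:
        # half the average round trip approximates the one-way latency
        print("approx one-way latency: %.2f us" % (elapsed / iterations / 2 * 1e6))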

So, if virtualising HPC workloads can’t be justified on performance grounds alone, are there other motivators for migrating to a virtualized system? I think there are several reasons why virtualising HPC makes good sense:

More flexibility for the end user to choose distributions/schedulers/libraries, etc. – Currently most HTC clusters run a small selection of (or even identical) distros to provide access to compute resources. They provide a set of libraries which don’t always meet user requirements and which force users to code against a potentially sub-optimal system for their application’s specific requirements. By allowing more flexibility for the user to choose the distro, scheduler or libraries, application performance may be vastly improved, thus increasing the throughput of the cluster and therefore getting better value for money.

Multi-tenancy – By removing the one-to-one relationship between OS and hardware, placing a virtual layer of networking and resource management over the top, it becomes far easier to present portions of resources to specific users, allowing for easier multi-tenancy of the platform. This can help get better value for users whilst still complying with specific policies, for example security regulations.

More flexibility of underlying infrastructure – By abstracting workload from hardware and by taking advantage of technologies like VMware’s Distributed Resource Scheduler (DRS) and vMotion, it is possible to build clusters that are better suited to specific workflows – for example high memory nodes or high compute nodes. Workloads can then be transferred dynamically to those nodes that suit a particular workflow, again providing better cluster throughput.
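
As a toy illustration of that placement idea (the pool profiles and job sizes are invented, and real DRS weighs far more signals than this), a simple heuristic might pick whichever pool satisfies a job’s requirements with the least slack:

    # Toy placement heuristic: choose the host pool that best fits the job.
    POOLS = {
        "high-memory": {"cores_per_node": 16, "ram_gb": 512},
        "high-compute": {"cores_per_node": 64, "ram_gb": 128},
    }

    def pick_pool(job_cores, job_ram_gb):
        # keep only pools that can actually run the job
        candidates = [
            (name, spec) for name, spec in POOLS.items()
            if spec["cores_per_node"] >= job_cores and spec["ram_gb"] >= job_ram_gb
        ]
        if not candidates:
            raise ValueError("no pool fits this job")
        # prefer the pool with the least unused capacity
        return min(candidates,
                   key=lambda item: (item[1]["cores_per_node"] - job_cores)
                                  + (item[1]["ram_gb"] - job_ram_gb))[0]

    print(pick_pool(job_cores=8, job_ram_gb=256))    # -> high-memory
    print(pick_pool(job_cores=32, job_ram_gb=64))    # -> high-compute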

Chargeback/Showback – As financial pressure on institutions increases, the ability to track utilisation and cost against specific projects becomes more important. Standardising the compute with VMware makes the task of ‘showback’ reporting pretty easy. That’s not to say that it isn’t possible with the current array of cluster managers, it’s just something that VMware already does well in the commercial space and that could also prove beneficial here.
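
A showback report really is just an aggregation once the usage data has been exported. As a minimal sketch (the CSV columns project and cpu_hours and the rate are assumptions, not a real vCloud Director export format), something like this rolls per-VM usage up into an indicative cost per project:

    import csv
    from collections import defaultdict

    RATE_PER_CPU_HOUR = 0.05    # illustrative rate, GBP

    def showback(path):
        totals = defaultdict(float)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row["project"]] += float(row["cpu_hours"])
        for project, hours in sorted(totals.items()):
            cost = hours * RATE_PER_CPU_HOUR
            print(f"{project}: {hours:.1f} CPU-hours, approx {cost:.2f} GBP")

    if __name__ == "__main__":
        showback("vm_usage.csv")    # hypothetical per-VM usage export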

Easy workload isolation – Currently, with the way schedulers function, it is possible for rogue jobs to lock or panic machines, causing job failures and a loss of resources that affect all users. By isolating the code running on the platform to a subset of self-contained nodes, failures of this type can be contained to only the user with the issue. This provides more platform stability and reduces the downtime, or degraded service, experienced on the compute clusters.

Quick deployment and better compliance – When we look at deploying machines, we have technologies like PXE, CFEngine and Puppet that allow the quick deployment of physical servers. However, for any additional services, such as network configuration, firewall rules, CMDB updates, etc., the deployment time is still slow. By virtualizing much of the infrastructure we can quickly automate and deploy isolated infrastructure that follows the relevant compliance policies, reducing the sys-admin time required to manage the service.
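
As one small example of what that automation can look like (the hostname pattern, package and firewall command are purely illustrative, not a real build standard), a few lines of Python can render the cloud-init user-data for a new compute node so that the compliance pieces are applied at first boot:

    from textwrap import dedent

    def user_data(tenant, node_number):
        # render a #cloud-config document for one compute node VM
        return dedent(f"""\
            #cloud-config
            hostname: hpc-{tenant}-{node_number:03d}
            packages:
              - puppet-agent
            runcmd:
              - [ufw, allow, from, "10.0.{node_number}.0/24", to, any, port, "22"]
              - [systemctl, enable, --now, puppet]
            """)

    if __name__ == "__main__":
        print(user_data("bioinf", 7))    # hypothetical tenant and node number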

Better failure response, checkpointing etc. – Whilst checkpointing is a well-established method for protecting workloads and minimizing the impact of a failing job on the overall runtime of the application, VMware can assist users with this operation. For example, checkpointing in the application may require data to be written to disk, whereas VMware can use snapshots to speed up this process. Secondly, any machines that do fail can be automatically restarted on another host far quicker than a replacement node could be commissioned, thus minimizing the interruption to service during a failure. Having said that, I admit that this may be a tenuous point if the cluster is fully utilized, as there may not be free resources to restart the failed machines!
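
For contrast, this is roughly what application-level checkpointing looks like from the job’s point of view (the state layout and work loop are made up): the job periodically writes its own state to disk so a restart can resume rather than recompute, whereas a hypervisor snapshot captures the whole VM without any changes to the job code.

    import os
    import pickle

    CHECKPOINT = "job_state.pkl"

    def load_state():
        # resume from the last checkpoint if one exists
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT, "rb") as f:
                return pickle.load(f)
        return {"step": 0, "partial_sum": 0}

    def save_state(state):
        tmp = CHECKPOINT + ".tmp"
        with open(tmp, "wb") as f:
            pickle.dump(state, f)
        os.replace(tmp, CHECKPOINT)   # atomic swap so a crash can't corrupt it

    state = load_state()
    for step in range(state["step"], 1_000_000):
        state["partial_sum"] += step  # stand-in for the real work
        state["step"] = step + 1
        if step % 100_000 == 0:
            save_state(state)         # periodic checkpoint to disk
    save_state(state)
    print(state["partial_sum"])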

Unification of IT infrastructure – Another advantage of virtualization is the potential unification of disparate clusters. With all clusters under the same management, finding the right node or cluster for a particular workflow becomes easier. It also helps with asset management by providing a central view of resource utilisation. I think this has the added advantage of making future procurement easier, as the centralised collection of detailed cluster stats can be used to make sure the correct hardware for the job is being procured.

All in all, I think there are a number of benefits to virtualizing certain HTC workflows, where the combination of more efficient use of hardware, reduced administration and more sensible procurement could all add up to a lower cost of running HTC infrastructure.

Here are a few interesting articles I have stumbled across, for those who would like to do more reading:

About Charles Llewellyn

A Solutions Architect at Eduserv with a specific interest in cloud computing and storage. Would always rather be up a mountain with a bike.
