Over the past two decades, the use of Graphics Processing Units (GPUs) in high performance scientific computing has grown dramatically. Designed originally for fast graphics in computer games, these devices can operate on large amounts of data in parallel. For the right scientific applications, they can perform calculations at a much lower cost, and in a shorter time, than traditional CPUs, and the top machines on the Top500 list of supercomputers derive much of their compute performance from GPUs.
However, HPC software usually needs to be rewritten for GPUs, requiring HPC software developers to learn new techniques. In August, the University of Sheffield organised a "hackathon" dedicated to GPU computing for scientific applications, in collaboration with Nvidia, Oak Ridge National Laboratory, and STFC Hartree Centre. Seven teams of 3-5 members were selected from those that applied, each paired with two mentors, both experts in GPU computing. Each team brought its own application and worked either on porting functionality to run on GPUs or on accelerating existing GPU functionality.
After a successful proposal from postdoctoral researcher Benjamin Owen from the department of Mechanical, Aerospace and Civil Engineering (MACE) at the University of Manchester, a team was formed representing a MACE research group led by Alistair Revell. Joining the team were PhD student Marta Camps Santasmasas, postdoctoral researcher Joseph O'Connor, and Research Software Engineers (RSEs) Adrian Harwood and Ian Hinder from Research IT. The hackathon committee assigned mentors Jeffrey Kelling from Helmholtz-Zentrum Dresden-Rossendorf and Anna Brown from the University of Oxford.
The team worked on a code developed by Adrian Harwood which solves problems in Computational Fluid Dynamics (CFD) using the Lattice Boltzmann Method (LBM) on multiple GPUs. A particular focus of the research group is the interaction of fluids with embedded structures. Applications include real-time analysis of the interaction between flexible tissue and blood, energy harvesting using flexible filaments in urban areas, and urban wind analysis, e.g. natural ventilation, pedestrian wind comfort or contaminant dispersion.
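The lattice Boltzmann method suits GPUs well because every lattice node performs the same small update independently, so each node can be handed to its own GPU thread. To give a flavour of this, the following is a minimal sketch of a single LBM update step in CUDA. It is illustrative only: the D2Q9 lattice, grid size, BGK collision model, periodic boundaries, and two-buffer "ping-pong" streaming pattern are assumptions made for this example, not details of the group's code.

```cuda
// Minimal sketch of a D2Q9 lattice Boltzmann step in CUDA (illustrative only).
#include <cstdio>
#include <cuda_runtime.h>

constexpr int NX = 256, NY = 256, Q = 9;        // assumed lattice size, velocity set
__constant__ int   cx[Q] = { 0, 1, 0,-1, 0, 1,-1,-1, 1 };
__constant__ int   cy[Q] = { 0, 0, 1, 0,-1, 1, 1,-1,-1 };
__constant__ float w [Q] = { 4.f/9, 1.f/9, 1.f/9, 1.f/9, 1.f/9,
                             1.f/36, 1.f/36, 1.f/36, 1.f/36 };

// Start every node at unit density and zero velocity (f_i = w_i).
__global__ void initEquilibrium(float* f)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= NX || y >= NY) return;
    for (int i = 0; i < Q; ++i)
        f[i * NX * NY + y * NX + x] = w[i];
}

// One BGK collide-and-stream update: each thread owns one lattice node,
// relaxes its distributions toward equilibrium, then pushes them to its
// neighbours in the output buffer (periodic boundaries for simplicity).
__global__ void collideStream(const float* fIn, float* fOut, float omega)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= NX || y >= NY) return;
    int n = y * NX + x;

    // Macroscopic density and velocity from the local distributions.
    float rho = 0.f, ux = 0.f, uy = 0.f;
    for (int i = 0; i < Q; ++i) {
        float fi = fIn[i * NX * NY + n];
        rho += fi;  ux += fi * cx[i];  uy += fi * cy[i];
    }
    ux /= rho;  uy /= rho;
    float usq = ux * ux + uy * uy;

    for (int i = 0; i < Q; ++i) {
        // Second-order equilibrium distribution.
        float cu  = cx[i] * ux + cy[i] * uy;
        float feq = w[i] * rho * (1.f + 3.f*cu + 4.5f*cu*cu - 1.5f*usq);
        // Relax toward equilibrium, then stream to the neighbour node.
        int xs = (x + cx[i] + NX) % NX;
        int ys = (y + cy[i] + NY) % NY;
        float fi = fIn[i * NX * NY + n];
        fOut[i * NX * NY + ys * NX + xs] = fi - omega * (fi - feq);
    }
}

int main()
{
    size_t bytes = (size_t)Q * NX * NY * sizeof(float);
    float *fA, *fB;
    cudaMalloc(&fA, bytes);
    cudaMalloc(&fB, bytes);

    dim3 block(16, 16), grid((NX + 15) / 16, (NY + 15) / 16);
    initEquilibrium<<<grid, block>>>(fA);
    for (int step = 0; step < 1000; ++step) {
        collideStream<<<grid, block>>>(fA, fB, 1.0f);   // omega = 1/tau
        float* t = fA; fA = fB; fB = t;                 // swap buffers
    }
    cudaDeviceSynchronize();
    printf("status: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(fA);  cudaFree(fB);
    return 0;
}
```

Running the simulation on multiple GPUs, as the group's code does, additionally requires splitting the lattice into subdomains and exchanging halo layers between devices each step, which this single-GPU sketch omits.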
Before the hackathon, the code could efficiently simulate fluids on multiple GPUs. During the hackathon, the team coded a GPU implementation of the Immersed Boundary Method (IBM), based on a CPU implementation by Joseph O'Connor, for coupling the fluid simulation to embedded structures. This new functionality was then coupled to an existing structural solver developed by Benjamin Owen, which was itself made faster. In addition, the fluid solver was connected to an external interface developed by Marta Camps Santasmasas for coupling the LBM simulation to other simulations solving the Navier-Stokes equations.
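In an immersed boundary method, the structure is represented by Lagrangian marker points that exchange velocities and forces with the fluid grid through a smoothed delta function. The sketch below shows one such step, interpolating the fluid velocity to the markers, in CUDA. Again this is illustrative rather than the team's implementation: the grid size, the particular three-point delta kernel, and the circular marker layout are all assumptions made for the example.

```cuda
// Illustrative sketch of the velocity-interpolation step of an immersed
// boundary method in CUDA (not the team's implementation).
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

constexpr int NX = 256, NY = 256;   // assumed Eulerian grid size

// 1D three-point discrete delta function, support |r| < 1.5.
__device__ float delta3(float r)
{
    r = fabsf(r);
    if (r < 0.5f) return (1.f + sqrtf(1.f - 3.f*r*r)) / 3.f;
    if (r < 1.5f) return (5.f - 3.f*r - sqrtf(1.f - 3.f*(1.f - r)*(1.f - r))) / 6.f;
    return 0.f;
}

// One thread per Lagrangian marker: interpolate the Eulerian fluid
// velocity to the marker position by summing over the 3x3 support of
// the delta kernel (periodic wrap at the domain edges).
__global__ void interpolateVelocity(const float2* uGrid, const float2* markers,
                                    float2* uMarkers, int nMarkers)
{
    int m = blockIdx.x * blockDim.x + threadIdx.x;
    if (m >= nMarkers) return;

    float2 X = markers[m];
    float2 u = make_float2(0.f, 0.f);
    int i0 = (int)floorf(X.x) - 1, j0 = (int)floorf(X.y) - 1;
    for (int j = j0; j <= j0 + 2; ++j)
        for (int i = i0; i <= i0 + 2; ++i) {
            float wgt = delta3(X.x - i) * delta3(X.y - j);
            int ip = (i + NX) % NX, jp = (j + NY) % NY;
            u.x += wgt * uGrid[jp * NX + ip].x;
            u.y += wgt * uGrid[jp * NX + ip].y;
        }
    uMarkers[m] = u;   // a full IBM computes a force here and spreads it back
}

int main()
{
    const int nMarkers = 64;   // markers on an assumed circular boundary
    float2 *uGrid, *markers, *uMarkers;
    cudaMallocManaged(&uGrid, NX * NY * sizeof(float2));
    cudaMallocManaged(&markers, nMarkers * sizeof(float2));
    cudaMallocManaged(&uMarkers, nMarkers * sizeof(float2));

    for (int n = 0; n < NX * NY; ++n) uGrid[n] = make_float2(0.1f, 0.f); // uniform flow
    for (int m = 0; m < nMarkers; ++m) {
        float th = 6.2831853f * m / nMarkers;
        markers[m] = make_float2(NX / 2 + 20.f * cosf(th), NY / 2 + 20.f * sinf(th));
    }

    interpolateVelocity<<<(nMarkers + 127) / 128, 128>>>(uGrid, markers, uMarkers, nMarkers);
    cudaDeviceSynchronize();
    printf("u at marker 0: (%.3f, %.3f)\n", uMarkers[0].x, uMarkers[0].y);
    return 0;
}
```

In a full IBM cycle, the interpolated velocity is used to compute a restoring force at each marker, which is then spread back onto the fluid grid with the same delta function; this is what couples the fluid solver to the structural solver.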
Research IT contributed two RSEs: Adrian Harwood, originally a member of the MACE group, and Ian Hinder, who is attached to the group on a long-term project. Dedicated RSEs can be particularly valuable in this type of project, as they not only have the background and expertise to produce high-quality scientific software, but also the remit to focus exclusively on this activity without dividing their time between development and producing new research results.
While the improvements to the code are still very preliminary, the focused nature of the hackathon format enabled rapid progress and tight interactions within the team, and everyone came away very satisfied with what had been achieved in such a short space of time. A publication is in preparation to disseminate the new code as open-source software, which will represent a significant impact for the research group on the wider community.
If you are interested in having a research software engineer work on your research project, please see our website.