Press coverage highlights:
- InformationWeek, Birmingham amps up research with new Linux supercomputer
- DataCentreDynamics, Birmingham upgrades HPC capacity
- InsideHPC, BlueBear brings more HPC power to Birmingham University
- HPCwire, BlueBear cluster goes online in the UK
Researchers across the University of Birmingham using a centrally funded High Performance Computing (HPC) service will now benefit from additional modelling power and a wider range of services following a complete replacement of their previous HPC system, which had been installed in 2007. The University of Birmingham also provides additional compute power to GridPP, a collaboration of particle physicists and computer scientists from the UK and CERN.
Known as BlueBEAR II, the initial HPC service boasts 15 TFlop/s of performance from nearly 850 processing cores (Intel Sandy Bridge eight-core processors). This initial installation offers performance comparable to the outgoing system, with a much lower carbon footprint, and will grow in the light of IT Services’ experience to meet the varied demands of the research community. Unlike many recent HPC upgrades, the University of Birmingham has released recurrent funding for this service so that it can develop to meet the needs of both current and new users.
The Linux-based HPC service is one part of the overall Birmingham Environment for Academic Research (BEAR) that is being jointly developed by the University, OCF and other specialist partners. BEAR is a set of complementary and inter-linked services to meet the diverse needs of the wide research base of the University. Other components of this service include:
- a Windows-based HPC service to bring the power of cluster computing to users of Windows HPC applications without the need to become familiar with the low-level Linux and job submission commands that are associated with traditional Linux HPC
- a GPGPU service for applications that benefit from GPU accelerator technology
- a large memory service for needs that are primarily data intensive as opposed to compute intensive, whilst understanding that the two needs are rarely completely distinct
- a sophisticated visualisation centre, incorporating active stereo display and motion tracking
- highly scalable collaborative conferencing and collaborative visualisation services, which are especially beneficial to large and often international research groups
- a dedicated render farm, based on a mixture of CPU-only and GPU-assisted nodes and intended primarily for demanding off-line rendering, planned for release in a later phase of the development of the BEAR environment
The additional modelling power will enable researchers to process larger, more detailed, more accurate simulations and test cases in less time than was possible on the previous service. Within the School of Chemistry, Professor Roy Johnston and his team, one of the service’s major users, are using the Linux HPC service for research in many areas, including computational nanoscience. Professor Johnston’s team is trying to understand how to create more cost-effective and more environmentally friendly catalysts, for example for fuel cells and hydrogen cars.
Paul Hatton, HPC & Visualisation Specialist, IT Services, University of Birmingham, says: “The new Linux HPC service has been well received by users of the previous service, and most of the ongoing projects that were benefitting from the previous service, including some from archaeology and economics as well as the science and engineering disciplines, have continued to use the new service. The next, and in many ways more difficult, challenge is to widen the user base to those research areas that do not traditionally make use of a Linux HPC service, which was the motivation for including other services, especially the Windows HPC service, in the wider BEAR environment.”
The University funded the core BEAR service whilst other researchers have added resources from their research grants, benefitting from the system management offered by IT Services. Designed, built and integrated by HPC, data management, storage and analytics provider OCF, the cluster uses IBM hardware and system software. As part of a wider collaboration, OCF has also provided staff resource to help the University with outreach projects to recruit new users onto the service.
Paul Hatton continues: “OCF has provided expert consultancy to support the design of BlueBEAR II. It has built flexible, scalable and unobtrusive high performance server clusters powered by both CPUs and GPUs, using Linux and Windows, which will support our research now and into the future. The team is also providing support to help recruit new users to the service. On the balance of hardware and software expertise and provision of support services, OCF came out on top.”
- The server clusters use IBM System x iDataPlex® with Intel Sandy Bridge processors. OCF has installed more high performance server clusters using the industry-leading IBM iDataPlex server than any other UK integrator.
- The server clusters also use IBM Tivoli Storage Manager for data backup and IBM GPFS software, which enables more effective storage capacity expansion, enterprise-wide and interdepartmental file sharing, commercial-grade reliability, cost-effective disaster recovery and business continuity.
- The scheduling system on BlueBEAR II is Adaptive Computing’s MOAB software, which enables the scheduling, managing, monitoring and reporting of HPC workloads.
- Use of Mellanox’s Virtual Protocol Interconnect (VPI) cards within the cluster design makes it easier for IT Services to redeploy nodes between the various components of the BEAR services depending on changing workloads.
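For readers unfamiliar with the job submission commands mentioned above, a batch job under a MOAB-fronted, PBS/Torque-style scheduler is typically a short shell script of resource directives followed by the commands to run. The sketch below is illustrative only: the job name, queue, module name, node layout and executable are assumptions for the example, not actual BlueBEAR II settings.

```shell
#!/bin/bash
# Hypothetical batch script for a MOAB/Torque-style scheduler.
# All names and resource values below are illustrative assumptions.
#PBS -N nanoscience-run         # job name (assumed)
#PBS -l nodes=2:ppn=8           # two eight-core nodes (assumed layout)
#PBS -l walltime=04:00:00       # wall-clock limit
#PBS -q batch                   # queue name (assumed)

cd "$PBS_O_WORKDIR"             # start in the directory the job was submitted from
module load openmpi             # environment modules are typical on such systems (assumed)
mpirun -np 16 ./my_simulation   # launch across the 16 requested cores
```

Such a script would normally be submitted with `msub` (or `qsub` on plain Torque), with progress checked via `showq` or `qstat`; it is this command-line workflow that the Windows HPC component of BEAR is intended to spare its users.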