We Are XMA
Queen Mary University of London - Case Study
High performance clustering

Viglen is a preferred supplier of PC systems to the University of London. In an initial agreement worth over £150,000, the College invested in 130 custom-built Viglen dual-Xeon 2.8GHz servers, each with 2GB SDRAM and a 120GB hard drive. Each machine, supplied in a 1U rack-mountable configuration, has dual Gigabit Ethernet interfaces and is built on a Supermicro motherboard, and the cluster has its own dedicated network. The University subsequently expanded this cluster: 288 AMD Opteron 270 servers, each with 4GB RAM and a 250GB hard disk, worth over £500,000, were added in May 2006.

 

“When we look at the cost of investing in computer systems, we don’t just look at the upfront cost, but at the cost of reliability and lifetime support. That’s why, taken together with the relationship Viglen has with the University, we chose Viglen.”
Dr Alex Martin, Physics Department


 
High performance clustering for high throughput computing
The Physics Department’s leading-edge research work demanded a step-change in its processing capacity. The particle physics group is heavily involved in the ATLAS experiment, which is searching for the Higgs boson, the particle thought to be responsible for the generation of mass.

Working with academic institutes worldwide, the Queen Mary group is responsible for installing and commissioning key hardware. Prior to the start of the experiment, the group is running simulations and testing software. The rigours of the ATLAS experiment mean that it is necessary to make a large-scale array of high-performance processors work in concert as a central resource.
 
Although the Queen Mary cluster is chiefly for use in particle physics, it will also have applications in other areas of science, including astrophysics, engineering and bioinformatics.
 
“We aim to develop a generic e-science resource. By purchasing a cluster of machines from Viglen we can tackle the in-house scientific research projects which we previously didn't have capacity for. This is a tremendous resource that we intend to expand in the future.”


Schedule, Allocate, Calculate

The installed software is based on the Scientific Linux distribution. To control and manage its cluster of high-power machines, the University uses a mixture of custom and standard open source tools. A sophisticated resource management system is used to queue and allocate tasks to individual computing nodes.
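The article does not name the resource management system in use, so as an illustration only, the toy sketch below captures the core idea of any such system: submitted jobs wait in a queue and are dispatched to whichever computing node next becomes free. All class, method and node names here are hypothetical.

```python
from collections import deque

class MiniScheduler:
    """Toy FIFO resource manager: queue jobs, dispatch them to free nodes."""

    def __init__(self, nodes):
        self.free_nodes = deque(nodes)   # nodes with no running job
        self.queue = deque()             # jobs waiting for a node
        self.running = {}                # node -> job currently on it

    def submit(self, job):
        self.queue.append(job)
        self.dispatch()

    def dispatch(self):
        # Allocate queued jobs to free nodes, oldest job first.
        while self.queue and self.free_nodes:
            node = self.free_nodes.popleft()
            self.running[node] = self.queue.popleft()

    def job_finished(self, node):
        # The node becomes free again and the queue drains onto it.
        del self.running[node]
        self.free_nodes.append(node)
        self.dispatch()

sched = MiniScheduler(["node01", "node02"])
for job in ["sim-a", "sim-b", "sim-c"]:
    sched.submit(job)
print(sched.running)          # two jobs running, "sim-c" still queued
sched.job_finished("node01")
print(sched.running)          # "sim-c" now runs on the freed node01
```

A production resource manager adds priorities, fair-share accounting and node health checks on top of this basic queue-and-allocate loop, but the dispatch logic is the same in spirit.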



“We have a High Throughput Cluster (HTC), as we can run a large number of single-processor jobs simultaneously. We can now tackle at least 1,500 tasks concurrently and can run a mix of parallel and non-parallel tasks.”
Dr Alex Martin, Physics Department
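The distinction Dr Martin draws (many independent single-processor jobs rather than one tightly coupled parallel job) can be sketched with a standard worker pool. This is a minimal illustration, not the University's actual workload; the `analyse_event` function is a hypothetical stand-in for one independent job.

```python
from concurrent.futures import ThreadPoolExecutor

def analyse_event(event_id):
    # Hypothetical stand-in for one independent single-processor job,
    # e.g. simulating or reconstructing one batch of collision events.
    return event_id * event_id

# High-throughput computing: the tasks share no state and never
# communicate, so throughput scales simply with the number of free
# workers, whether that is 8 threads here or 1,500 cluster nodes.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(analyse_event, range(100)))

print(results[7])   # 49
```

Because each task is self-contained, a failed task can be re-queued without affecting any other, which is what makes this model a good fit for the detachable-node design described below.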

 
 
The system is designed with flexibility in mind. As a result, it is not essential to operate all of the new Viglen servers constantly; some can be ‘rested’, and for on- or off-site support, individual machines can be detached from the cluster without interrupting ongoing tasks. The cluster is also designed to be readily expandable.


Innovative, flexible solutions
 

Commenting on Viglen’s continued partnership with Queen Mary, University of London, and its key role in supplying technology to the Department of Physics, Viglen Chief Executive, Bordan Tkachuk, said:

“Viglen shares the same desire for excellence as Queen Mary, University of London. In tailoring our technology to deliver High Performance Clusters, Viglen is responding to new ways of working with innovative and flexible solutions. Once again, Viglen has demonstrated its ability to work hand-in-hand with in-house IT specialists at our leading universities to provide robust research technology that makes an immediate difference.”

Hooking up to the global grid

In just a short space of time, the Department’s new Viglen cluster will begin to advertise its availability to nodes elsewhere on the global Grid, accepting low-priority tasks from others to make the best use of its capacity. The system is entirely scalable: it can easily accept extra clusters of machines and can plug in to external ‘Grids’ of processors outside the University, offering the potential of phenomenal combined power.

 
 
Update

The E-Science project includes experimental particle physicists and astronomers. The availability of the resources allows the University to participate in various research projects and international collaborations.
 
Viglen has proactively worked with the University by testing code on various hardware platforms to ensure best performance on its mix of parallel and non-parallel tasks. Viglen HPC systems engineers and technical staff worked with the University to offer first-class independent advice on hardware.
 
Power consumption was also a vital factor in the choice of hardware platform. The final provision was for 284 AMD dual-processor, dual-core nodes. As part of the solution, Viglen also supplied gigabit switches, PDUs and 12TB of storage.

 

© 2017 XMA Ltd. All Rights Reserved