Tuesday, November 21, 2017

MELLANOX DEPLOYMENT COLLABORATION WITH LENOVO WILL POWER CANADA’S LARGEST SUPERCOMPUTER CENTRE WITH LEADING PERFORMANCE, SCALABILITY FOR HIGH PERFORMANCE COMPUTING APPLICATIONS

Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today announced that the University of Toronto has selected Mellanox InfiniBand solutions to accelerate its new supercomputer, set to be the leading system in Canada. The system will pair the new Dragonfly+ network topology with Lenovo's ultra-dense ThinkSystem SD530 high-performance computing servers for processing power, a combination developed to deliver an Exascale-ready infrastructure optimized for cost performance and unlimited scalability.

Upon completion, the jointly developed supercomputer will be the premier system at SciNet, Canada's largest supercomputer centre, providing Canadian researchers with the computational resources and expertise necessary to perform their research at scales not previously possible.

“We support research in all fields, from climate science to the humanities, astrophysics, life sciences, social sciences, engineering, physics and chemistry,” said Dr. Daniel Gruner, CTO of SciNet at the University of Toronto. “We do traditional simulations, data processing, analytics, and, increasingly, machine learning as well. The ability to scale out to the full size of the cluster is crucial for our community.”

When considering its new supercomputing requirements, the University of Toronto SciNet team needed a solution that balanced performance and cost efficiency with the ability to scale seamlessly alongside existing, increasingly dense workloads. Its work to understand global climate change also required that the HPC solution, a class of system that typically demands large amounts of energy for cooling, be highly efficient, reducing the consortium's carbon footprint and overall operational expenses. Mellanox's industry-first Dragonfly+ network topology, which dynamically manages network traffic and bandwidth, coupled with Lenovo's powerful ThinkSystem hardware, provides SciNet not only with the HPC performance it requires, but also with up to 600 kilowatts (kW) of savings in cooling power, which, sustained over a year, is roughly the energy needed to power 700 homes for that year. Such considerations must be evaluated early in the development of an interconnected solution like this one, and the longstanding collaboration between Mellanox and Lenovo helped simplify the approach.
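A rough consistency check of that comparison (a sketch assuming the 600 kW saving is sustained year-round and that an average household consumes about 7,500 kWh annually; neither figure appears in the release):

\[
600\ \text{kW} \times 8{,}760\ \text{h/yr} = 5{,}256{,}000\ \text{kWh/yr},
\qquad
\frac{5{,}256{,}000\ \text{kWh/yr}}{7{,}500\ \text{kWh/home}} \approx 700\ \text{homes}.
\]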

“One of the unique challenges of academic computing lies in a university’s need to support a very broad range of applications and workflows,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Mellanox smart InfiniBand solutions deliver the highest performance, scalability and efficiency for a variety of workloads, and also ensure backward and future compatibility, protecting the university’s investment. We’re also excited by the opportunity to collaborate with both the University of Toronto and Lenovo, the world’s fastest-growing supercomputing vendor and HPC solutions provider, on this amazing project to build the first large-scale Dragonfly+ InfiniBand system. This new network topology represents one of the cornerstones needed to move us into the Exascale era.”
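In Dragonfly+, switches are organized into groups; each group is wired internally as a two-level fat tree of leaf and spine switches, and the spine switches carry the global links between groups, so any leaf-to-leaf path stays within a handful of switch hops. The sketch below illustrates that structure in Python; the group count, switch counts, and naming are illustrative assumptions, not the SciNet configuration.

# Minimal sketch of a Dragonfly+-style topology (illustrative parameters only;
# not the actual SciNet configuration). Each group is a two-level fat tree:
# leaf switches connect hosts and spines, and spine switches carry the
# global links between groups.
from collections import deque
from itertools import combinations

GROUPS = 4   # hypothetical number of groups
LEAVES = 2   # leaf switches per group
SPINES = 2   # spine switches per group

def build_topology():
    """Return an adjacency map over switch IDs like 'g0-leaf1' / 'g0-spine0'."""
    adj = {}
    def link(a, b):
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    # Intra-group: complete bipartite leaf<->spine wiring (two-level fat tree).
    for g in range(GROUPS):
        for l in range(LEAVES):
            for s in range(SPINES):
                link(f"g{g}-leaf{l}", f"g{g}-spine{s}")
    # Inter-group: one global spine-to-spine link between every pair of groups.
    for g1, g2 in combinations(range(GROUPS), 2):
        link(f"g{g1}-spine{g2 % SPINES}", f"g{g2}-spine{g1 % SPINES}")
    return adj

def hops(adj, src, dst):
    """Breadth-first search: switch-to-switch hop count."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == dst:
            return d
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None

adj = build_topology()
# Leaf-to-leaf across groups stays short: leaf -> spine -> spine -> leaf.
print(hops(adj, "g0-leaf0", "g3-leaf1"))  # -> 3

Running the sketch prints 3: a leaf switch reaches a leaf in any other group via one intra-group hop to a spine, one global spine-to-spine link, and one hop down to the destination leaf.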

“We’re excited to be working with Mellanox on the first Dragonfly+ deployment. As the world’s fastest-growing supercomputing vendor, with proven excellence in HPC hardware deployments, we saw delivering the new supercomputer to SciNet as a natural next step in our relationship with Mellanox,” said Madhu Matta, vice president and general manager, HPC & AI, Lenovo DCG. “The processing capabilities of our latest ThinkSystem server platform, combined with the efficiency and cost savings associated with Dragonfly+ network cabling, allow us to provide the University of Toronto with a comprehensive, energy-efficient solution that will help researchers tackle tomorrow’s problems today.”
