Running HPC Applications on vSphere using InfiniBand RDMA
Looking back on 2014, this was the year in which we made significant strides in assessing and addressing High Performance Computing (HPC) performance on vSphere. We've shown that throughput or task-parallel applications can be run with only small or negligible performance degradations (usually well under 5%) when virtualized, a fact that the HPC community now seems to generally accept. The big new thing this year, however, was the progress we made with MPI applications, specifically those that require a high-bandwidth, low-latency RDMA interconnect like InfiniBand. As I mentioned in an earlier blog post, we generated a great deal of RDMA-based performance data about HPC applications and benchmarks this past summer thanks to the hard work of my intern, Na Zhang. Here is a more detailed look at some of those results.

Test Configuration

The cluster we used for these tests consisted of four DL380p G8 servers, each with two Ivy Bridge 3.30 GHz eight-core processors, 128 GB RAM, and three 600 GB 10K SAS disks. The nodes were connected with Mellanox InfiniBand FDR/EN 10/40 Gb dual-port adaptors through a Mellanox 12-port FDR-based switch. We have since doubled the size of this cluster and will report higher-scale results in a later article. […]
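For readers unfamiliar with why RDMA hardware matters for MPI, here is a minimal ping-pong latency sketch in C. It is not the benchmark code used in the tests described above, just an illustration of the small-message round-trip pattern that interconnect latency measurements are typically built on; the iteration count and message size are arbitrary choices for this example.

/*
 * Minimal MPI ping-pong latency microbenchmark (illustrative sketch).
 * Build: mpicc -O2 pingpong.c -o pingpong
 * Run:   mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

#define ITERS 1000
#define MSG_BYTES 8   /* small message, so interconnect latency dominates */

int main(int argc, char **argv)
{
    int rank;
    char buf[MSG_BYTES] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    /* Rank 0 and rank 1 bounce a small message back and forth. */
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0) {
        /* Each iteration is one round trip, so halve for one-way latency. */
        printf("one-way latency: %.2f us\n",
               (t1 - t0) / (2.0 * ITERS) * 1e6);
    }

    MPI_Finalize();
    return 0;
}

On a low-latency fabric such as FDR InfiniBand, a test of this shape reports one-way latencies on the order of a microsecond, which is why bypassing or minimizing virtualization overhead on the RDMA path is central to running MPI workloads well on vSphere.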