Enable high bandwidth and low latency across InfiniBand-connected server nodes in your M1000e blade chassis high-performance computing (HPC) cluster.
- Single-wide switches occupy only one slot, leaving slots available for fabric redundancy.
- InfiniBand™ Trade Association (IBTA) management support allows easy management through any IBTA-compliant subnet manager.
- Non-blocking throughput is enabled through 16 dedicated internal and 16 external ports.
Get the most data throughput available in a Dell M1000e blade chassis with a Mellanox InfiniBand blade switch. Designed for low-latency and high-bandwidth applications in high-performance computing (HPC) and high-performance data center environments, these InfiniBand switches offer 16 internal and 16 external ports to help eliminate the bottlenecks common to other switch designs.
Choose from three single-wide Mellanox InfiniBand blade switches, each offering non-blocking throughput and IBTA management compatibility:
Mellanox M4001Q QDR InfiniBand Blade Switch
- Per-port bit rate: 40 Gb/s
- Per-port data throughput: 32 Gb/s

Mellanox M4001T FDR10 InfiniBand Blade Switch
- Per-port bit rate: 41.25 Gb/s
- Per-port data throughput: 40 Gb/s

Mellanox M4001F FDR InfiniBand Blade Switch
- Per-port bit rate: 56.25 Gb/s
- Per-port data throughput: 54.54 Gb/s
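The gap between bit rate and data throughput in the figures above follows from the link encoding each InfiniBand generation uses: QDR encodes 8 payload bits in every 10 transmitted bits, while FDR10 and FDR use the more efficient 64b/66b encoding. A minimal sketch of that arithmetic (the encoding schemes are standard InfiniBand behavior, inferred here from the listed figures rather than stated in this document):

```python
def data_throughput(signal_gbps: float, payload_bits: int, coded_bits: int) -> float:
    """Usable data rate = per-port signaling rate x encoding efficiency."""
    return signal_gbps * payload_bits / coded_bits

# QDR: 8b/10b encoding -> 40 Gb/s x 8/10 = 32 Gb/s
qdr = data_throughput(40.0, 8, 10)

# FDR10 and FDR: 64b/66b encoding
fdr10 = data_throughput(41.25, 64, 66)  # = 40 Gb/s
fdr = data_throughput(56.25, 64, 66)    # ~= 54.54 Gb/s

print(f"QDR:   {qdr:.2f} Gb/s")
print(f"FDR10: {fdr10:.2f} Gb/s")
print(f"FDR:   {fdr:.2f} Gb/s")
```

This is why FDR10 delivers the same 40 Gb/s of usable data as QDR's raw bit rate: its slightly higher signaling rate exactly offsets the 64b/66b overhead.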