
Mellanox Connect-IB FDR InfiniBand 56Gb/s Dual-Port Host Bus Adapter (MCB194A-FCAT)

Original price $23.74
Current price $18.99 (Save 20%)
SKU MCB194A-FCAT
Availability: 50+ in stock

In-store Pickup: Local pickup available in Dallas, TX

Specifications
Manufacturer Part Number MCB194A-FCAT
Manufacturer Mellanox (an NVIDIA company)
Condition Refurbished
Standard Warranty 90 Day Replacement Warranty
Item Category Host Bus Adapter (HBA)
Model Connect-IB
Form Factor Full Height
Interface InfiniBand
External Interfaces 2x QSFP Ports
Transfer Rate 56 Gb/s per port
TDP ~13 W
Compatible Port PCIe 3.0 x16
Number of Ports 2

Description

This is a Mellanox Connect-IB Dual-Port FDR InfiniBand Host Bus Adapter (P/N: MCB194A-FCAT). This high-performance adapter provides an efficient, low-latency interconnect for High-Performance Computing (HPC) clusters, data centers, and enterprise storage. It delivers 56 Gb/s of throughput per port, making it an ideal choice for connecting servers and storage systems in performance-driven environments.
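
Once the card is installed, the usual way to confirm that it enumerated and that both ports negotiated FDR is through the standard Linux RDMA stack (rdma-core / libibverbs). The following is a minimal C sketch, assuming a Linux host with rdma-core installed: it lists the visible HCAs and prints each port's raw libibverbs speed/width codes (an active_speed code of 16 corresponds to FDR's 14.0625 Gb/s per lane).

```c
/* Minimal sketch (not vendor-supplied code): enumerate InfiniBand devices
 * and query port state/rate via the standard libibverbs API.
 * Build with: gcc verify_hca.c -libverbs
 */
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) != 0) {
            ibv_close_device(ctx);
            continue;
        }
        printf("%s: %d port(s)\n",
               ibv_get_device_name(devs[i]), dev_attr.phys_port_cnt);

        /* Ports are numbered from 1. active_speed/active_width encode the
         * link rate; FDR is 14.0625 Gb/s per lane x 4 lanes = 56 Gb/s. */
        for (uint8_t p = 1; p <= dev_attr.phys_port_cnt; p++) {
            struct ibv_port_attr port_attr;
            if (ibv_query_port(ctx, p, &port_attr) == 0)
                printf("  port %u: state=%d speed_code=%d width_code=%d\n",
                       p, (int)port_attr.state,
                       port_attr.active_speed, port_attr.active_width);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}
```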

Key Features

  • FDR InfiniBand Performance: Features two QSFP ports, each delivering FDR (Fourteen Data Rate) 56Gb/s InfiniBand connectivity for a total of 112Gb/s of throughput.

  • Low Latency: The Connect-IB architecture is engineered for extremely low latency, which is critical for inter-server communication in HPC clusters and for high-speed access to storage.

  • Advanced Offloads: Supports advanced offload engines, including RDMA (Remote Direct Memory Access), to reduce CPU overhead and improve overall system efficiency (see the verbs sketch after this list).

  • PCIe 3.0 Interface: Uses a PCIe 3.0 x16 host interface, roughly 126 Gb/s of usable bandwidth per direction after 128b/130b encoding, so the slot is not a bottleneck even with both FDR ports saturated.
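
To make the RDMA offload above concrete, here is a minimal verbs setup sketch, again assuming Linux with rdma-core installed: it opens the first HCA, allocates a protection domain, and registers a buffer so the adapter can serve remote reads and writes without involving the host CPU on the data path. Queue-pair creation and peer connection exchange are intentionally omitted.

```c
/* Minimal sketch of the setup an RDMA application performs before the HCA
 * can move data without CPU copies: open the device, allocate a protection
 * domain, and register a memory region.
 */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]); /* first HCA */
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL; /* protection domain */
    if (!pd) {
        fprintf(stderr, "device open / PD allocation failed\n");
        return 1;
    }

    /* Register a 4 MiB buffer: the pages are pinned and the HCA hands back
     * lkey/rkey handles, letting a remote peer RDMA-read/write the memory
     * directly while the local CPU stays off the data path. */
    size_t len = 4 * 1024 * 1024;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        perror("ibv_reg_mr");
        return 1;
    }
    printf("registered %zu bytes: lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    /* A real application would now create queue pairs, exchange rkeys with
     * its peer, and post RDMA work requests; that wiring is omitted here. */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```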