Scan AI

Connectivity for AI

High-performance server and storage systems designed for AI and deep learning workloads need an equally high-performance network infrastructure to support them and to avoid bottlenecks that limit data throughput, GPU utilisation and time to results. Whether located within a datacentre or at the edge of the network, connectivity is key to keeping systems functioning optimally. The Scan AI connectivity solutions portfolio includes high-speed, low-latency Ethernet and InfiniBand solutions, 4G / 5G options to deliver edge connectivity, and professional services to support your infrastructure.
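To put these link speeds in context, a back-of-envelope calculation (illustrative figures only, ignoring protocol overhead) shows the ideal wire time to move a 1 TB training dataset at the link speeds covered by the portfolio:

```python
# Back-of-envelope: ideal time to move a dataset across common link speeds.
# Illustrative only -- real-world throughput is lower due to protocol overhead.

def transfer_time_seconds(dataset_gb: float, link_gbps: float) -> float:
    """Ideal wire time: dataset size in gigabytes over link speed in gigabits/s."""
    return dataset_gb * 8 / link_gbps

dataset_gb = 1000  # a hypothetical 1 TB training dataset
for speed in (10, 100, 400):  # GbE link speeds from the portfolio above
    print(f"{speed:>3} GbE: {transfer_time_seconds(dataset_gb, speed):7.1f} s")
```

At 10GbE the same dataset that takes roughly 13 minutes to move crosses a 400GbE link in about 20 seconds, which is why the network, not the GPU, can become the limiting factor.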

Network Interface Cards

Although only a small component in an AI system, the Ethernet network interface card (NIC) or InfiniBand host channel adapter (HCA) plays a key role in maximising data throughput and, in turn, GPU utilisation. Options range from basic 10GbE connectivity for a development workstation up to 400GbE for datacentre systems. In datacentre training systems, the NIC or HCA also provides vital offloading technology such as Remote Direct Memory Access (RDMA), iWARP and RDMA over Converged Ethernet (RoCE), which allow systems to communicate without involving the CPU, cache or operating system, delivering much lower latency across the network. Our system build teams are able to fit NICs or HCAs within a server build from a large range of Intel and NVIDIA Mellanox options.
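As an illustration of how RDMA-capable hardware surfaces on a Linux host, the sketch below (assuming a standard rdma-core environment, not any Scan-specific tooling) lists devices registered under /sys/class/infiniband, which covers both InfiniBand HCAs and RoCE-capable Ethernet NICs:

```python
# Minimal sketch: enumerate RDMA-capable devices on a Linux host by reading
# /sys/class/infiniband (populated by the rdma-core stack for both InfiniBand
# HCAs and RoCE-capable Ethernet NICs). Returns an empty list on hosts
# without RDMA hardware or drivers.
import os

def rdma_devices(sysfs_root: str = "/sys/class/infiniband") -> list:
    """Return the names of RDMA devices visible via sysfs, if any."""
    if not os.path.isdir(sysfs_root):
        return []
    return sorted(os.listdir(sysfs_root))

if __name__ == "__main__":
    devices = rdma_devices()
    print(devices if devices else "no RDMA devices found")
```

On a host with a ConnectX adapter this would typically show entries such as `mlx5_0`; an empty result means the kernel sees no RDMA-capable devices.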

Intel NICs for Development Systems

Intel’s ever-evolving NIC range offers consistent, reliable performance from 10GbE up to 100GbE with proven interoperability. The cards are optimised for use in wider Intel architecture environments, delivering improved application efficiency and network performance with complex AI workloads.

NIC Name | Protocol | Interfaces | Maximum Speed | Ports | Host Bus | RDMA (iWARP or RoCE)
800 Series | Ethernet | SFP28 / QSFP28 | 100GbE | 2 / 4 | PCIe 4.0 |
700 Series | Ethernet | RJ45 / SFP+ / QSFP+ / QSFP28 | 40GbE | 1 / 2 / 4 | PCIe 3.0 |
500 Series | Ethernet | RJ45 / LC Fibre / SFP28 | 10GbE | 1 / 2 | PCIe 3.0 |

NVIDIA Networking NICs for Training Systems

The industry-leading NVIDIA Mellanox ConnectX family of intelligent datacentre NICs offers the broadest and most advanced hardware offloads, enabling the high throughput and low latency that AI workloads demand. Available in both Ethernet and InfiniBand versions, they offer speeds of up to 400Gb/s.

NIC Name | Protocol | Interfaces | Maximum Speed | Ports | Host Bus | RDMA (iWARP or RoCE)
ConnectX-7 | Ethernet / InfiniBand | QSFP56 | 400Gb/s | 1 / 2 | PCIe 5.0 |
ConnectX-6 | Ethernet / InfiniBand | QSFP56 | 200Gb/s | 1 / 2 | PCIe 4.0 |
ConnectX-5 | Ethernet / InfiniBand | SFP28 / QSFP28 | 100Gb/s | 1 / 2 | PCIe 3.0 |
ConnectX-4 | Ethernet / InfiniBand | SFP28 / QSFP28 | 50Gb/s | 1 / 2 | PCIe 3.0 |

To learn more about Ethernet and InfiniBand NIC technologies, click here to read our Network Card Buyers Guide.


NVIDIA BlueField Data Processing Units for Training Systems

The NVIDIA BlueField-2 data processing unit (DPU) is a datacentre infrastructure on a chip optimised for traditional enterprise, high-performance computing (HPC), and AI workloads, delivering a broad set of accelerated software-defined networking, storage, security, and management services. By combining the industry-leading NVIDIA Mellanox ConnectX-6 Dx network adapter with an array of Arm cores, BlueField-2 offers purpose-built, hardware-acceleration engines with full software programmability.

DPU Name | Protocol | Interfaces | Maximum Speed | Host Bus | Memory | Crypto Enabled
BlueField-2 | Ethernet | 2x QSFP56 | 2x 100GbE / 1x 200GbE | PCIe 4.0 | 16GB |
BlueField-2 | InfiniBand | 2x QSFP56 | 2x 100Gb EDR / 1x 200Gb HDR | PCIe 4.0 | 16GB |

To learn more about NVIDIA BlueField Data Processing Units for Training Systems, click here.


Network Switches

NVIDIA Networking switches provide the highest-performing Ethernet and InfiniBand switch families, delivering speeds of up to 400Gb/s and designed to partner seamlessly with ConnectX NICs to ensure maximum throughput and eliminate bottlenecks across your network. This complete connectivity solution enables AI workloads to operate at full performance at any scale.

Switch Name | Protocol | Interfaces | Maximum Speed | Maximum Ports | Data Throughput
QM9700 | InfiniBand | QSFP56 | 400Gb/s | 64 | 51.2Tb/s
SN4000 | Ethernet | QSFP56 / QSFP-DD | 400GbE | 128 | 25.6Tb/s
SN3000 | Ethernet | SFP28 / QSFP28 / QSFP56 | 200GbE | 128 | 12.8Tb/s
SN2000 | Ethernet | SFP28 / QSFP28 | 100GbE | 64 | 6.4Tb/s
QM8700 | InfiniBand | QSFP56 | 200Gb/s | 40 | 16Tb/s
SB7800 | InfiniBand | QSFP28 | 100Gb/s | 32 | 7.2Tb/s

To learn more about Ethernet and InfiniBand switch technologies, click here to read our Network Switch Buyers Guide.


Airtime & Connectivity Services

Once you reach the inferencing stage of your AI journey, high-speed, low-latency connectivity is required not only within the datacentre and corporate network, but also out in the real world, where edge AI devices gather data from cameras, sensors, vehicles and other connected endpoints. To address this wider connectivity requirement, Scan has partnered with Scancom and EE to provide a wide range of flexible SIM card options covering both 4G and the latest 5G bandwidths, and with BTnet to deliver leased line services for the datacentre.


Professional Services

The Scan IT team has many years’ experience delivering professional services to support network infrastructure deployment. Our consultants use their expertise to help your company design your network in a secure way and ensure the highest levels of business continuity, from secure network architecture and design, through installation, configuration and remote monitoring, to disaster recovery planning.
