
Graphcore IPU-POD

Second-generation IPU systems for AI infrastructure at scale

IPU-MACHINE: M2000

A new core building block for AI infrastructure, the IPU-M2000 is powered by 4 x Colossus Mk2 GC200 processors, Graphcore's second-generation 7nm IPU. It packs 1 petaFLOPS of AI compute, up to 450GB of Exchange Memory and 2.8Tbps of IPU-Fabric bandwidth for ultra-low-latency communication into a slim 1U blade, to handle the most demanding machine intelligence workloads.
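The headline figures follow from the per-chip numbers. As a quick sanity check in Python, assuming Graphcore's published per-chip GC200 figures of 1,472 cores and roughly 250 teraFLOPS of AI compute each (figures this page does not itself state):

    # Per-chip GC200 figures (assumed from Graphcore's published specs,
    # not stated on this page).
    CORES_PER_GC200 = 1472
    TFLOPS_PER_GC200 = 250   # approximate FP16 "AI compute" per chip
    CHIPS_PER_M2000 = 4

    print(CHIPS_PER_M2000 * CORES_PER_GC200)    # 5888 processor cores per IPU-M2000
    print(CHIPS_PER_M2000 * TFLOPS_PER_GC200)   # 1000 TFLOPS = 1 petaFLOPS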

IPU-M2000

Key Features

Compute

• 4 x Colossus™ Mk2 GC200 IPU
• 1 petaFLOPS AI compute
• 5888 independent processor cores

Memory

• Up to 450GB Exchange Memory™
• 180TB/s Exchange Memory™ bandwidth

Communications

• 2.8Tbps ultra-low latency IPU-Fabric™
• Direct connect or via Ethernet switches
• Collectives and all-reduce operations support

Software

• Poplar SDK
• PopVision visualization and analysis tools

Converged Infrastructure Support

• Virtual-IPU comprehensive virtualization and workload manager support
• Support for SLURM workload manager
• Support for Kubernetes orchestration
• OpenBMC management built-in
• Grafana system monitoring tool interface

IPU Gateway SoC

• Arm Cortex quad-core A-series SoC

Form Factor

• Industry standard 1U

Download Product Sheet

Graphcore IPU Lets Innovators Make New Breakthroughs in Machine Intelligence

WORLD-CLASS PERFORMANCE

IPU-POD systems support industry-standard software tools. Developers can work with frameworks such as TensorFlow, PyTorch, PyTorch Lightning and Keras, as well as open standards like ONNX and model libraries like Hugging Face.
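As an illustrative sketch only (not taken from this page, though PopTorch is the PyTorch integration shipped with the Poplar SDK), running an existing PyTorch model on the IPU can look like the following; the model, sizes and data are placeholders:

    import torch
    import poptorch

    # A standard PyTorch module. PopTorch expects the training loss to be
    # returned from forward(), so it is computed inside the model.
    class Classifier(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = torch.nn.Linear(784, 10)
            self.loss = torch.nn.CrossEntropyLoss()

        def forward(self, x, target=None):
            out = self.fc(x)
            if target is not None:
                return out, self.loss(out, target)
            return out

    net = Classifier()
    opts = poptorch.Options()  # IPU session options (replication, batching, ...)
    training_model = poptorch.trainingModel(
        net, options=opts, optimizer=torch.optim.SGD(net.parameters(), lr=0.01))

    # The first call compiles the graph for the IPU; subsequent calls run on it.
    x = torch.randn(32, 784)
    y = torch.randint(0, 10, (32,))
    out, loss = training_model(x, y)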

Graphcore IPU-POD16

IPU-POD16 DA (Direct Attach) is the ideal platform for exploration, innovation and development, letting AI teams make new breakthroughs in machine intelligence. Four IPU-M2000s, supported by a host server, deliver a powerful 4 petaFLOPS of AI compute for both training and inference workloads in an affordable, compact 5U system.

For Innovators Making New AI Breakthroughs

IPU-POD16 opens up a new world of machine intelligence innovation.

Contact Sales

IPU-POD16 DA is designed to get you up and running in no time. This turnkey system features IPU-M2000s directly attached to an approved host server, ready for installation in your datacenter. Extensive documentation and support are provided by both the AI experts at Graphcore and our elite partner network.

IPU-POD16 DA is a powerful, standalone AI compute resource. However, it also offers the opportunity for growth, on your terms: your IPU-POD16 DA system investment can later be expanded into a larger IPU-POD system.

Designed specifically for the communication requirements of AI workloads at scale, IPU-Fabric is Graphcore's innovative low-latency, jitter-free interconnect built on industry-standard IT equipment. It supports highly efficient, deterministic, all-to-all IPU communication across your system, regardless of size.
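In framework terms (a hedged sketch, assuming the PopTorch API from the Poplar SDK; the replica count is an example), data-parallel replication is a session option, and the gradient all-reduce between replicas is then carried over IPU-Fabric by the stack rather than by user code:

    import poptorch

    opts = poptorch.Options()
    opts.replicationFactor(4)   # e.g. four data-parallel replicas, one per GC200
    opts.deviceIterations(16)   # device-side batches per host interaction

    # Pass `opts` to poptorch.trainingModel(...); gradients are all-reduced
    # across the replicas automatically, with the traffic riding on IPU-Fabric.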

Fast & efficient for dense matmul models

Excels at sparse & fine-grained computation

Expert support to get you up and running quickly

Graphcore IPU-POD64

The IPU-POD64 is a single-rack configuration featuring 16 IPU-M2000™ compute blades based on the innovative GC200 Intelligence Processing Unit (IPU). The IPU-POD64 can deliver up to 16 petaFLOPS of AI compute.

The Building Block for Machine Intelligence Scale-Out


The whole system, hardware and software, has been architected together. IPU-POD64 supports standard frameworks and protocols to enable smooth integration into existing data center environments. Innovators can focus on deploying their AI workloads at scale, using familiar tools while benefiting from cutting-edge performance.

Download Product Sheet

Built for AI developers

For deeper control and maximum performance, the Poplar framework enables direct IPU programming in Python and C++. Poplar allows effortless scaling of models across many IPUs without adding development complexity, so developers can focus on the accuracy and performance of their application.
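For instance (a minimal sketch, assuming the PopTorch layer that ships with the Poplar SDK; the stage sizes and IPU assignment are invented for the example), a model can be split into pipeline stages across IPUs with simple annotations:

    import torch
    import poptorch

    # poptorch.BeginBlock marks where a pipeline stage begins and which IPU
    # it runs on; everything else is ordinary PyTorch.
    class TwoStageNet(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.stage1 = poptorch.BeginBlock(torch.nn.Linear(512, 512), ipu_id=0)
            self.stage2 = poptorch.BeginBlock(torch.nn.Linear(512, 10), ipu_id=1)
            self.loss = torch.nn.CrossEntropyLoss()

        def forward(self, x, target=None):
            out = self.stage2(torch.relu(self.stage1(x)))
            if target is not None:
                return out, self.loss(out, target)
            return out

    net = TwoStageNet()
    opts = poptorch.Options()
    opts.Training.gradientAccumulation(8)  # keeps a multi-IPU pipeline fed

    training_model = poptorch.trainingModel(
        net, options=opts, optimizer=torch.optim.SGD(net.parameters(), lr=0.01))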

CONTACT SALES

Plug-and-Play with Direct Attach

AI infrastructure built to scale

Start small, scale big

EXTENSIVE ECOSYSTEM

Software tools and integrations support every step of the AI lifecycle, from development to deployment, improving productivity and AI infrastructure efficiency and making the platform easier to use.

Designed from the ground up for high-performance training and inference workloads, the IPU-M2000 unifies your AI infrastructure for maximum datacenter utilization. Get started with development and experimentation, then ramp up to full-scale production.

CONTACT SALES
