
SMP interconnect and accelerator interface

The Power10 processor provides a highly optimized, 32-Gbps differential signaling technology interface that is structured in 16 entities (eight ports, each providing two 1×9 xBuses). Each entity consists of eight data lanes and one spare lane; a rough bandwidth sketch follows the note below. This interface can facilitate the following functional purposes:

- First- or second-tier SMP link interface, enabling up to 16 Power10 processors to be combined into a large, robustly scalable single-system image

- Open Coherent Accelerator Processor Interface (OpenCAPI) to attach cache-coherent and I/O-coherent computational accelerators, load/store addressable host memory devices, low-latency network controllers, and intelligent storage controllers

- Host-to-host integrated memory clustering interconnect, enabling multiple Power10 systems to directly use memory throughout the cluster

Note: The OpenCAPI interface and the memory clustering interconnect are Power10 technology options for future use.
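To make these numbers concrete, the following minimal Python sketch works out peak raw link bandwidth. It assumes, beyond what the text states, that the 32 Gbps figure is the per-lane signaling rate and that only the eight data lanes of an entity carry payload; protocol and encoding overhead are ignored.

# Rough PowerAXON link arithmetic (assumptions noted above).
GBPS_PER_LANE = 32        # assumed raw signaling rate per lane
DATA_LANES_PER_X9 = 8     # a 1x9 xBus: 8 data lanes + 1 spare
ENTITIES_PER_CHIP = 16    # 8 ports, each providing two 1x9 xBuses

x9_gbps = GBPS_PER_LANE * DATA_LANES_PER_X9   # 256 Gbps per direction
x9_gbytes = x9_gbps / 8                       # 32 GBps per direction
x2x9_gbytes = 2 * x9_gbytes                   # a 2x9 bus pairs two 1x9 xBuses

print(f"1x9 xBus: {x9_gbps} Gbps = {x9_gbytes:.0f} GBps per direction")
print(f"2x9 bus (DCM-internal): {x2x9_gbytes:.0f} GBps per direction")
print(f"All {ENTITIES_PER_CHIP} entities: {ENTITIES_PER_CHIP * x9_gbytes:.0f} GBps per direction")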


Because of the versatile nature of this signaling technology, the 32-Gbps interface is also referred to as the PowerAXON interface. The IBM proprietary X-bus links connect two processors on a system board with a common reference clock. The IBM proprietary A-bus links connect two processors in different drawers on different reference clocks by using a cable.

OpenCAPI is an open interface architecture that allows any microprocessor to attach to the following items:

- Coherent user-level accelerators and I/O devices

- Advanced memories accessible through read/write or user-level DMA semantics

The OpenCAPI technology is developed, enabled, and standardized by the OpenCAPI Consortium. For more information about the consortium’s mission and the OpenCAPI protocol specification, see OpenCAPI Consortium.

The PowerAXON interface is implemented on dedicated areas that are at each corner of the Power10 processor die.

The Power10 processor-based E1050 server uses this interface to implement the following connections:

- DCM-internal chip-to-chip connections

- Chip-to-chip SMP interconnects between DCMs in a 1-hop topology (see the topology sketch at the end of this section)

- OpenCAPI accelerator interface connections

The DCM-internal chip-to-chip interconnects and the connections to the OpenCAPI ports are shown in Figure 2-6.

Figure 2-6 SMP xBus 1-hop interconnect and OpenCAPI port connections

Note: The left (front) DCM0 and DCM3 are rotated 180 degrees relative to the right (rear) DCM1 and DCM2 to optimize the PCIe slot and Non-volatile Memory Express (NVMe) bay wiring.


All ports provide two 1×9 xBuses, so some ports show two connections. All connections that leave the DCM are 1×9, which is also the case for the ports where only one connection is shown.

For the internal connection of the two chips in one DCM, two ports are available, but only one is used. This port connects the two chips inside the DCM with a 2×9 bus.
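As an illustration of the interconnect that Figure 2-6 depicts, the following Python sketch models the four DCMs (eight Power10 chips) as a fully connected graph, which is what the 1-hop topology implies: every chip pair has a direct link, a 2×9 bus inside a DCM and a 1×9 xBus between DCMs. The chip names are illustrative, not IBM's.

from itertools import combinations

# Eight chips: two per DCM across four DCMs (hypothetical naming).
chips = [f"DCM{d}-{c}" for d in range(4) for c in ("A", "B")]

# One direct link per chip pair; the width depends on whether the
# pair shares a DCM (2x9 internally, 1x9 between DCMs).
links = {}
for a, b in combinations(chips, 2):
    same_dcm = a.split("-")[0] == b.split("-")[0]
    links[(a, b)] = "2x9" if same_dcm else "1x9"

# Every pair is directly linked, so any chip is at most 1 hop away.
assert all(pair in links for pair in combinations(chips, 2))

per_chip = sum(chips[0] in pair for pair in links)
print(f"{len(links)} links total; each chip terminates {per_chip} links")
# -> 28 links total; each chip terminates 7 links (1 internal + 6 external)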

Note: The implemented OpenCAPI interfaces can be used in the future, but they are currently not used by the available technology products.
