The internal I/O subsystem of the Power E1050 server is connected to the PCI Express (PCIe) controllers on the Power10 chips in the system. A Power10 chip has two PCI Express controllers (PECs) of 16 lanes each, for a total of 32 Gen5/Gen4 lanes per chip and 64 Gen5/Gen4 lanes per DCM.
Each PEC supports up to three PCI host bridges (PHBs) that directly connect to PCIe slots or devices. Both PEC0 and PEC1 can be configured in one of the following ways (modeled in the code sketch after this list):
• One x16 Gen4 PHB or one x8 Gen5 PHB
• One x8 Gen5 PHB and one x8 Gen4 PHB
• One x8 Gen5 PHB and two x4 Gen4 PHBs
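These bifurcation options can be modeled programmatically. The following Python sketch is illustrative only (the type and configuration names are not an IBM API); it encodes the supported splits and checks them against the 16-lane, three-PHB limits of a PEC:

from dataclasses import dataclass

@dataclass(frozen=True)
class Phb:
    gen: int    # PCIe generation (4 or 5)
    lanes: int  # lane width (x4, x8, or x16)

# The supported splits of a single 16-lane PEC (PEC0 or PEC1).
PEC_CONFIGS = [
    [Phb(gen=4, lanes=16)],                      # one x16 Gen4 PHB
    [Phb(gen=5, lanes=8)],                       # one x8 Gen5 PHB
    [Phb(gen=5, lanes=8), Phb(gen=4, lanes=8)],  # x8 Gen5 + x8 Gen4
    [Phb(gen=5, lanes=8), Phb(gen=4, lanes=4), Phb(gen=4, lanes=4)],  # x8 Gen5 + two x4 Gen4
]

def is_valid_split(phbs: list[Phb]) -> bool:
    # A PEC provides 16 physical lanes and supports at most three PHBs.
    return len(phbs) <= 3 and sum(p.lanes for p in phbs) <= 16

assert all(is_valid_split(cfg) for cfg in PEC_CONFIGS)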
How a PEC is used or configured is reflected in the notation of its ports. Two notations, the E-Bus notation and the PHB notation, describe how a PEC is split. Table 2-9 gives an overview.
Table 2-9 PHB cross-reference to E-Bus notation
Figure 2-13 shows a diagram of the I/O subsystem of the Power E1050 server. The left (front) DCM0 and DCM3 are rotated 180 degrees relative to the right (rear) DCM1 and DCM2 to optimize the wiring to the PCIe slots and NVMe bays.
Figure 2-13 Power E1050 I/O subsystem diagram
On the left (front) side are 10 NVMe bays: six are connected to DCM0 and four are connected to DCM3. To make all 10 NVMe bays available, all four processor sockets must be populated. Most NVMe bays are connected through a x8 PHB, and some through a x4 PHB. Because an NVMe device uses at most four lanes (x4), this difference is not relevant from a performance point of view.
On the right (rear) side are 11 PCIe slots of varying capability. Six slots provide up to 16 lanes (x16) and can operate either in PCIe Gen4 x16 mode or in Gen5 x8 mode. The remaining five slots are connected with eight lanes: two are Gen5 x8, and three are Gen4 x8. In a 2-socket processor configuration, seven slots are available for use (P0-C1 and P0-C6 to P0-C11). To make all 11 slots available, at least three processor sockets must be populated.
The x16 slots can provide up to twice the bandwidth of x8 slots because they offer twice as many PCIe lanes. PCIe Gen5 slots can support up to twice the bandwidth of a PCIe Gen4 slot, and PCIe Gen4 slots can support up to twice the bandwidth of a PCIe Gen3 slot, assuming an equivalent number of PCIe lanes.
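As a rough illustration, per-lane throughput approximately doubles with each generation: about 0.985 GBps per lane per direction for Gen3 (8 GT/s), about 1.97 GBps for Gen4 (16 GT/s), and about 3.94 GBps for Gen5 (32 GT/s), all with 128b/130b encoding. The following minimal Python sketch computes the theoretical slot bandwidth from the generation and lane count; the figures are raw link rates, not achievable application throughput:

# Approximate per-lane, per-direction throughput in GBps for each
# PCIe generation (8, 16, and 32 GT/s with 128b/130b encoding).
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def slot_bandwidth(gen: int, lanes: int) -> float:
    # Theoretical one-direction bandwidth of a PCIe slot in GBps.
    return GBPS_PER_LANE[gen] * lanes

print(slot_bandwidth(4, 16))  # Gen4 x16 -> ~31.5 GBps
print(slot_bandwidth(5, 8))   # Gen5 x8  -> ~31.5 GBps (same slot, alternate mode)
print(slot_bandwidth(4, 8))   # Gen4 x8  -> ~15.8 GBps

This calculation also shows why the six x16 slots offer equivalent bandwidth in Gen4 x16 mode and Gen5 x8 mode.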
Note: Although some slots provide a x8 connection only, all slots have an x16 connector.
All PCIe slots support hot-plug adapter installation and maintenance, and enhanced error handling (EEH). PCIe EEH-enabled adapters respond to a special data packet that is generated from the affected PCIe slot hardware by calling system firmware, which examines the affected bus, allows the device driver to reset it, and continues without a system restart. For Linux, EEH support extends to most devices, although some third-party PCI devices might not provide native EEH support.
All PCIe adapter slots support hardware-backed network virtualization through single-root I/O virtualization (SR-IOV) technology. Configuring an SR-IOV adapter into SR-IOV shared mode might require more hypervisor memory. If sufficient hypervisor memory is not available, the request to move to SR-IOV shared mode fails, and the user is instructed to free extra memory and try the operation again.
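On PowerVM, SR-IOV shared mode is configured through management interfaces such as the HMC. For comparison, on a Linux system that owns the adapter directly, the number of virtual functions (VFs) is typically controlled through sysfs. A minimal sketch, assuming a hypothetical adapter at PCI address 0000:01:00.0:

from pathlib import Path

# Hypothetical PCI address; substitute the address of a real SR-IOV adapter.
DEV = Path("/sys/bus/pci/devices/0000:01:00.0")

def set_num_vfs(count: int) -> None:
    # Request `count` virtual functions from an SR-IOV-capable adapter.
    total = int((DEV / "sriov_totalvfs").read_text())
    if count > total:
        raise ValueError(f"adapter supports at most {total} VFs")
    # The VF count must be reset to 0 before a different nonzero value is set.
    (DEV / "sriov_numvfs").write_text("0")
    (DEV / "sriov_numvfs").write_text(str(count))

set_num_vfs(4)  # for example, create four VFs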
The server PCIe slots are allocated DMA space by using the following algorithm (see the sketch after this list):
• All slots are allocated a 2 GB default DMA window.
• All I/O adapter slots (except the embedded Universal Serial Bus (USB)) are allocated Dynamic DMA Window (DDW) capability based on installed platform memory. DDW capability is calculated assuming 4 K I/O mappings:
– The slots are allocated 64 GB of DDW capability.
– Slots can be enabled with Huge Dynamic DMA Window (HDDW) capability by using the I/O Adapter Enlarged Capacity setting in the ASMI.
– HDDW-enabled slots are allocated enough DDW capability to map all installed platform memory by using 64 K I/O mappings.
– Slots that are HDDW-enabled are allocated the larger of the calculated DDW capability or HDDW capability.
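These allocation rules can be expressed as a short calculation. The following Python sketch is derived only from the rules in this list; the function and constant names are illustrative:

GIB = 1024**3

def dma_capability(installed_memory_bytes: int, hddw_enabled: bool) -> int:
    # DMA window capability of an I/O adapter slot, in bytes.
    ddw = 64 * GIB  # DDW capability, calculated with 4 K I/O mappings
    if not hddw_enabled:
        return ddw
    # HDDW: enough capability to map all installed platform memory by
    # using 64 K I/O mappings; the slot gets the larger of the two values.
    return max(ddw, installed_memory_bytes)

# Example: a system with 1 TB of installed memory and HDDW enabled
# gets 1024 GiB of capability; without HDDW, it gets 64 GiB.
print(dma_capability(1024 * GIB, hddw_enabled=True) // GIB, "GiB")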
The Power E1050 server cools the PCIe adapter environment in an energy-efficient way. It senses which IBM PCIe adapters are installed in the PCIe slots, and if an adapter requires a higher level of cooling, the fan speed is automatically increased to boost the airflow across the PCIe adapters. Faster fans increase the sound level of the server. Higher-wattage PCIe adapters include the PCIe3 serial-attached SCSI (SAS) adapters and solid-state drive (SSD)/flash PCIe adapters (#EJ10, #EJ14, and #EJ0J).
USB ports
The first DCM (DCM0) also hosts the USB controller, which is connected through a four-lane PHB, although the USB controller uses only one lane. DCM0 provides four USB 3.0 ports: two in the front and two in the rear. The two front ports provide up to 1.5 A of USB current, mainly to support the external USB DVD drive (Feature Code EUA5). The two rear USB ports provide up to 0.9 A.
Note: The USB controller is placed on the trusted platform module (TPM) card for space reasons.
Some customers require that the USB ports be deactivated for security reasons. You can achieve this task by using the ASMI menu. For more information, see 2.5.1, “Managing the system by using the ASMI GUI” on page 73.