"Funny you should mention it." On April 5, 2022, IBM announced the IBM z16 with 
new IBM Telum Processors. Every main processor chip on the IBM z16 is equipped 
with a new AI accelerator section. Here's the list of operations supported on 
the AI accelerator:

LSTM Activation
GRU Activation
Fused Matrix Multiply, Bias op
Fused Matrix Multiply (w/ broadcast)
Batch Normalization
Fused Convolution, Bias Add, Relu
Max Pool 2D
Average Pool 2D
Softmax
Relu
Tanh
Sigmoid
Add
Subtract
Multiply
Divide
Min
Max
Log
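To make the fused operations above concrete, here is a plain-Python sketch of what "Fused Matrix Multiply, Bias op" followed by Relu computes. This is only an illustration of the math that gets fused into a single hardware operation; it does not use the actual z16 accelerator interface (which is reached through libraries such as IBM's zDNN, not shown here):

```python
def fused_matmul_bias_relu(x, w, b):
    """Compute relu(x @ w + b) in a single pass, mirroring the kind of
    fused Matrix Multiply / Bias / Relu operation listed above.
    x: rows x inner matrix, w: inner x cols matrix, b: length-cols bias."""
    rows, inner, cols = len(x), len(w), len(w[0])
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            # Dot product plus bias, with Relu applied in the same pass
            acc = sum(x[i][k] * w[k][j] for k in range(inner)) + b[j]
            row.append(max(acc, 0.0))
        out.append(row)
    return out

# Toy example: a 2x3 input against a 3x2 weight matrix.
x = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]
w = [[0.5, -1.0],
     [0.5,  1.0],
     [-0.5, 0.0]]
b = [0.1, -0.2]
print(fused_matmul_bias_relu(x, w, b))
```

Fusing these steps matters for inference latency: the accelerator performs the multiply, bias add, and activation without writing intermediate results back to memory.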

The AI accelerator is expressly designed for real-time, low-latency inferencing 
at massive scale (transactional and batch), for example payment card fraud 
prevention. More information is available here: 
https://ibm.github.io/ai-on-z-101/

You can already attach GPUs to IBM zSystems and LinuxONE servers via network 
connections; typically they'd be used for model training. If you have other 
use cases in mind, please let IBM know, preferably via an official channel.

— — — — —
Timothy Sipples
Senior Architect
Digital Assets, Industry Solutions, and Cybersecurity
IBM zSystems/LinuxONE, Asia-Pacific
[email protected]


----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN