Good point about NUMA, and it is still a differentiator and competitive advantage for IBM Z. IBM bought Sequent 20+ years ago to get their excellent NUMA technology, and has since built some very clever cache topology and management algorithms. AMD has historically been crippled in real-world performance by cache inefficiencies. Rather ironically, Sequent was started by ex-Intel engineers after Intel killed their "mainframe on a chip" project. Another irony: Oracle was partnering with Sequent, and after IBM bought Sequent and killed that hardware line, Oracle bought Sun Microsystems.

Despite all this mainframe-killing competition, there are still over 100 billion CICS transactions processed per day; by comparison, Google handles about 10 billion searches per day. That is perhaps the most telling statistic about the health and future of the mainframe. Ten years ago CICS was at 30 billion transactions per day, so volume has more than tripled in a decade, during the massive growth of cloud. Healthy indeed.
On Mon, May 22, 2023 at 2:56 PM David Crayford <[email protected]> wrote:

> Sent again in plain text. Apple Mail is too clever for its own good!
>
> > On 22 May 2023, at 12:46 pm, David Crayford <[email protected]> wrote:
> >
> >> On 21 May 2023, at 12:52 pm, Howard Rifkind <[email protected]> wrote:
> >>
> >> Hundreds of PC type servers still can't handle the huge amount of data like a mainframe can.
> >
> > Of course, that's an absurd statement! By "PC type," I assume you're referring to x86? We can easily break this down. First things first, let's forget about the "hundreds" requirement. Thirty-two single-socket systems are enough to match up.
> >
> > AMD EPYC is the poster child for single-socket servers, running its novel chiplet technology on a 5nm process node. AMD's Infinity interconnects are capable of massive I/O bandwidth. You can learn more about it here: https://www.amd.com/en/technologies/infinity-architecture. Each socket can have a maximum of 96 cores, but even if we use a conservative 64 cores per socket, that's a scale-out of 2048 cores across 32 systems. AMD also has accelerators for offloading encryption, compression, etc.
> >
> > Over in Intel land, the Ice Lake server platform is not quite as impressive, but the UPI (Ultra Path Interconnect) again handles huge bandwidth. Intel also has accelerators such as QAT, which can be either an on-die SoC or a PCIe card. It's not too dissimilar to zEDC, but with the advantage that it supports all popular compression formats, not just DEFLATE. You can find more information here: https://www.intel.com.au/content/www/au/en/architecture-and-technology/intel-quick-assist-technology-overview.html
> >
> > A more apples-to-apples comparison would be the HPE Superdome Flex, which is a large shared-memory system lashed together with NUMA interconnects, with a whopping 32 sockets and a maximum core count of 896 on a single vertically integrated system.
> > HP Enterprise has technology such as nPars, which is similar to PR/SM. They claim 99.999% availability on a single system, and even beyond when clustered.
> >
> > On the Arm side, it gets even more interesting, as the hyperscalers and cloud builders are building their own kit. This technology is almost certainly the growth area for non-x86 workloads; you can find more details about it here: https://www.nextplatform.com/2023/05/18/ampere-gets-out-in-front-of-x86-with-192-core-siryn-ampereone/
>
> ----------------------------------------------------------------------
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to [email protected] with the message: INFO IBM-MAIN
> ----------------------------------------------------------------------
