Angels on the head of a pin, guys. Pushing x86 to 100% utilization runs into various 
architectural bottlenecks to which IBM Z (generally) doesn't fall prey; we know 
that. Whether those are inherent or OS-caused doesn't matter: they are there.

"A z17 offers n* the performance of an Intel 12345" is also semi-meaningless, 
unless we're comparing the same workload. Until you RUN a Google search on IBM 
Z, you don't know whether it will do better, worse, or the same. A given search 
is a relatively small operation, so 100,000 searches MIGHT do better on n,nnn 
separate x86 servers Just Because; or might benefit from shared memory etc. on 
IBM Z. Again, without testing, we don't know.

This is Performance 101, eh?
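
For the archives, here's a back-of-the-envelope sketch (Python; every number is a 
made-up placeholder, not a measured x86 or Telum figure) of why the "consolidate 
and run at 100%" arithmetic below hinges entirely on a power curve nobody has 
published:

  # Toy linear power model -- idle_kw and peak_kw are hypothetical placeholders,
  # not measured figures for any real x86 server or for Telum.
  def server_power_kw(utilization, idle_kw=0.2, peak_kw=1.0):
      """Power draw at a given utilization: idle draw plus a load-proportional term."""
      return idle_kw + (peak_kw - idle_kw) * utilization

  def kw_per_unit_work(utilization, **kwargs):
      """Energy cost per unit of delivered work at a given utilization."""
      return server_power_kw(utilization, **kwargs) / utilization

  # Running hotter amortizes the idle draw over more work:
  print(kw_per_unit_work(0.75))   # ~1.07 kW per unit of work at 75%
  print(kw_per_unit_work(1.00))   # 1.00 kW per unit of work at 100%

  # Consolidation: the same total work on fewer, fully loaded boxes also
  # eliminates the idle draw of the servers that get switched off.
  fleet = 100                                # hypothetical server count
  work  = fleet * 0.75                       # work the fleet delivers at 75%
  before_kw = fleet * server_power_kw(0.75)
  after_kw  = work * kw_per_unit_work(1.00)  # fewer boxes, each at 100%
  print(f"fleet saving: {1 - after_kw / before_kw:.0%}")

With these toy numbers the saving comes out to a few percent; pick a different 
idle-to-peak ratio and it swings to a much bigger number. That's exactly the 
point: without measured curves and a real workload, the percentages are guesses.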

-----Original Message-----
From: IBM Mainframe Assembler List <ASSEMBLER-LIST@LISTSERV.UGA.EDU> On Behalf 
Of Jon Perryman
Sent: Sunday, August 24, 2025 3:14 PM
To: ASSEMBLER-LIST@LISTSERV.UGA.EDU
Subject: Re: Telum and Spyre WAS: Vector instruction performance

On Sun, 24 Aug 2025 16:38:51 +0100, Martin Ward <mar...@gkc.org.uk> wrote:

>So if Google switched to using z17's they would be running at 100% and 
>therefore using the full 35kW per machine. Got it, thanks.

OK, you don't understand power curves, nor the facts. I asked Gemini about power 
consumption, and it says running at 100% saves 37% of the kW compared with 
running at 75%. The Telum power curve is not available. Without changing 
hardware, Google would save around 10-15% in kW, plus the 16% idle consumption 
of the removed servers, but only mainframers understand how to run at 100%. Can 
you tell us, for the 100th time: was it 35 kW? Got it, thanks, Dr. Martin.
>
>> z17 is a Ferrari, not a moped
>For some applications, 10 mopeds beat one Ferrari :-)

Your point? Google uses servers, and the z17 significantly outperforms those 
servers, but again, you show no facts to prove it either way. Got it, thanks.
