more from the increase in context switching that occurs when virtualization is done. So yes, they can do it, but not nearly as well.
Joe Temple
[EMAIL PROTECTED]
845-435-6301 295/6301 cell 914-706-5211 home 845-338-8794
-- Forwarded by Joseph Temple/Poughkeepsie/IBM on 02/20 --
Robert Nix wrote: But, if one image starts doing compiles or compression
of large quantities of data, or any other CPU bound task, everyone will
suffer.
Actually, you have a choice. If the compiles, etc. are relegated to a compute server, you can make it suffer rather than everyone else. Also, if
Mark Drvodelsky wrote: But the question still does not appear to be
answered - why does the
mainframe have to run at such a low clock speed?
The answer to your question has to do with how chip real estate is used.
In a zSeries microprocessor the primary use of chip area is for large L1
caches and
John Summerfield wrote: That tells me you weren't current on your
maintenance...
Software currency is an issue, but there is very good reason why people
bring their systems down only once or twice a year for maintenance. They
lose money when they do it. The balance between having the right fixes
John Summerfield wrote: I presume, from what you say, that Java isn't all
that wonderful on zSeries? Improved CPU performance may make it so.
One cannot make such blanket statements. Java is a language, not a workload. Yes, it does have characteristics that cause it to have long path lengths.
Alan Cox wrote: Was it alpha or gamma emitters they got in their materials?
It was alpha particles, and it was not unique to IBM. Basically it started with dynamic RAM. A passing particle drains charge from the memory cell, causing soft errors. ECC became mandatory for dynamic RAM when the
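The soft-error/ECC point can be illustrated with the classic single-error-correcting Hamming(7,4) code. This is a teaching sketch of the general idea — real memory controllers use wider SECDED codes — and all names here are my own:

```python
# Minimal sketch of single-error-correcting Hamming(7,4), the kind of
# ECC idea that made DRAM soft errors survivable. Illustrative only;
# real memory ECC uses wider SECDED codes over whole words.

def encode(d):                      # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):                      # c: list of 7 received bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3 # 0 = clean, else error position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1        # flip the bad bit back
    return [c[2], c[4], c[5], c[6]] # recover the 4 data bits

word = [1, 0, 1, 1]
cw = encode(word)
cw[4] ^= 1                          # simulate an alpha-particle bit flip
assert decode(cw) == word           # the flipped bit is corrected
```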
John Summerfield wrote:
Why do people keep referring to the speed of light? In what I learned of
electronics, signals are carried by electrons travelling round in
conductors
(and semiconductors). AFAIK electrons are quite a bit slower than photons.
Well, according to a guy named James Clerk
Lucius wrote: Forgive me for being dumb here, but I'd like to ask how. If you're sharing a VM minidisk among several Linux guests, how can you update the contents without having all of the guests brought down?
You start with 2 LPARs and 2 VMs, each with a set of shared Linux disks at whatever level.
Tzafrir Cohen said: Yes, but if you bring clustering into the game, then suddenly cheaper hardware can become more reliable.
The author also forgets that the guests need patching as well. Having all of them as guests on a mainframe, or as separate machines in a farm, is not all that different in
Linux on 390 Port <[EMAIL PROTECTED]>
03/19/2003 06:34 AM
Please respond to Linux on 390 Port
On Tue, 18 Mar 2003, Joseph Temple wrote:
I would point out that clustering makes hardware more
It takes somewhere between 1 and 4 900 MHz SPARC III processors to do the work of a z900T (around 900 MHz) engine, DEPENDING on the workload. This is at equal utilization on both machines. Use 1.5 to 2 if you don't know what the workload is.
We use the following rules for utilization:
Benchmark 100%
Head
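The rule of thumb above can be sketched as a small back-of-envelope calculation. The function name, default ratio, and the 20%-busy example are my own illustrative assumptions, not an IBM sizing tool:

```python
# Rough sizing sketch of the rule of thumb above: one z900 engine does
# the work of 1 to 4 900 MHz SPARC III processors; use 1.5-2.0 when the
# workload is unknown. Utilization figures below are invented examples.

def z_engines_needed(sparc_cpus, sparc_util, ratio=1.75):
    """Estimate z900 engines for a SPARC III footprint.

    sparc_cpus -- number of 900 MHz SPARC III processors
    sparc_util -- their average utilization (0.0-1.0)
    ratio      -- SPARC CPUs per z engine (1.0-4.0, workload dependent)
    """
    busy_cpus = sparc_cpus * sparc_util   # work actually being done
    return busy_cpus / ratio

# 24 SPARC CPUs running 20% busy, workload unknown:
print(round(z_engines_needed(24, 0.20), 2))  # → 2.74
```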
While reliability is not the same as availability, availability depends on it. The less reliable the component pieces are, the more redundancy is required to stay available. It depends on what your target availability is.
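The redundancy trade-off being described can be made concrete with the standard independent-replica formula, where n replicas each available with probability a give overall availability 1 - (1 - a)^n. The numbers below are illustrative assumptions:

```python
# Sketch of the reliability -> redundancy trade-off: less reliable
# parts need more replicas to hit the same availability target.
# Assumes independent failures, which is optimistic in practice.

def replicas_needed(a, target):
    """Smallest n with 1 - (1 - a)**n >= target."""
    n = 1
    while 1 - (1 - a) ** n < target:
        n += 1
    return n

# Five-nines target with a reliable vs. a less reliable component:
print(replicas_needed(0.999, 0.99999))  # → 2
print(replicas_needed(0.95, 0.99999))   # → 4
```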
Joe Temple
[EMAIL PROTECTED]
845-435-6301 295/6301 cell 914-706-5211
Tzafrir Cohen wrote:
And if you had all of those Office machines as separate images on a
giant T-Rex, those IT folks would still have to manually patch each and
every image separately, and spend 15 minutes on that.
As for cloning, patch distribution etc.: those solutions
are exactly solutions (?) to
Also, if you move a bunch of Intel servers onto the faster Intel server, it probably still is being run at low or very low utilization, and how many customers would want to put that many eggs in a basket that has an Intel server's reliability?
Joe Temple
[EMAIL PROTECTED]
845-435-6301
The first card in the 1130 deck actually did the entire cold start. You could operate the machine after, but you had to know the control panel. Of course it's always easier with some software. I had a friend who could cold start an 1130 faster than you could take the card off the top of the reader.
Actually, while I/O is the classic example of why processor speed is not
everything, you don't have to move that far beyond the processor itself to
show this. Note that the various types of servers have different sizes and structures of L1, L2, and L3 caches and memory interfaces. Also note
The best way to understand is to take measurements of the running
production systems. There are many tools for doing this and you may
already be gathering at least the data that you would need. The way to
look at utilization is to plot the utilization on intervals for a peak
period, day or
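The interval approach described above might be sketched as follows; the sample data, window size, and function name are invented for illustration and stand in for whatever your measurement tooling collects:

```python
# Sketch of looking at utilization on intervals for a peak period:
# given utilization samples, find the busiest sustained stretch rather
# than trusting a single whole-day average. Data below is invented.

def peak_interval(samples, window):
    """Highest average utilization over any `window` consecutive samples."""
    best = 0.0
    for i in range(len(samples) - window + 1):
        avg = sum(samples[i:i + window]) / window
        best = max(best, avg)
    return best

hourly = [5, 5, 8, 20, 45, 70, 85, 90, 60, 30, 10, 5]  # % busy per hour
print(peak_interval(hourly, 3))  # busiest 3-hour stretch of the day
```

The whole-day average here is about 36% busy, but the peak 3-hour window is over 80% — which is why sizing from the average alone understates what the production system needs.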
I don't know of any plans to make RMF-PM available on other platforms. I
will look around, but it will be a week or so; others may be able to help
sooner.
Joe Temple
[EMAIL PROTECTED]
845-435-6301 295/6301 cell 914-706-5211 home 845-338-8794
David Boyes
Actually, Blue Gene is neither a new sort of machine nor a mainframe. It is essentially a very large Scalable Parallel (from around 1992) machine using Linux and Intel instead of AIX and Power. Thus it is a new sort of SP. I think using it as the example of big iron causes the article to
I think the answer is that zSeries Linux scales to more than 4 processors. This is obviously workload dependent. Not sure what you mean by negative: if you mean that 4 processors get less work done than 3, you probably won't see that on zSeries. If you mean that you get less than 4X the
Ken,
Richard is on the right track here. Short of a Size390 sizing, I can tell you that the range of relative capacity of a z900 is quite broad. The
actual result will depend on what kind of workload is being done (query
only, some updates, heavy transactional), the cache working set
Let me add to what Joe added.
When you combine the low utilization that many (not all) dense rack-mounted servers run at, it becomes even easier for z to win the throughput/KVA race. Even if we don't include non-production servers and look at clusters for a single application, the peak composite
Speaking of KVA, has anyone else heard about anyone hooking up the old machine floor plumbing to chillers in order to get cool enough air on the floor to cool dense racks or blade centers? Just wondering if what I heard is a rumour, a fact, or a mainframe geek joke :-)
Joe Temple
Executive
Kielek is correct, but consider this.
1. Given the availability of the application, there is a small difference
between Linux on z and Linux on Intel simply because the zSeries
reliability takes the hardware multiplier on availability closer to 1.
2. Yes we can configure the Intel with
Ralph,
How often do you change releases? If you go to Microsoft, do your programmers understand that you are going into a keep-up-or-die culture? Yes, the application programming is fashionable, and hardware is cheap, but Microsoft does not provide long-term support with regard to release levels. The
Larry,
The conversion rate for Intel platforms to zSeries ranges from 3 MHz per MIPS to 30 MHz per MIPS, depending on how much data you are pushing through the caches and how much serialization there is (context switches, locks, etc.). This is typically lower for the web servers and higher for the backend.
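The MHz-per-MIPS rule can be sketched numerically. The function and the server figures below are my own illustrative assumptions, not IBM sizing data:

```python
# Sketch of the 3-to-30 MHz-per-MIPS conversion rule above: the more
# data pushed through the caches and the more serialization, the more
# Intel MHz it takes per zSeries MIPS. Ratios chosen for illustration.

def intel_to_mips(mhz, n_cpus, mhz_per_mips):
    """Convert an Intel footprint to an equivalent-MIPS estimate."""
    return mhz * n_cpus / mhz_per_mips

# Front-end web servers tend toward the low end of the range,
# data-heavy back ends toward the high end:
print(intel_to_mips(700, 4, 3.0))   # web tier: few MHz per MIPS
print(intel_to_mips(700, 4, 30.0))  # back end: many MHz per MIPS
```

The order-of-magnitude spread between the two calls is the point: without knowing the workload, a single conversion number can be off by 10x either way.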
Does the Intel rack contain a switch like a blade center has? If so, this is a classic example of Infrastructure Simplification, and a lot of the savings would be in network infrastructure and admin costs.
Joe Temple
Executive Architect
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301
The values that Uriel uses are a reasonable starting point when you don't know either the utilization or details about the application. They are not precisely the same as what we use for sizing at IBM, but there is more to this than our guess about the z. The basis we use would make the Sun machine have
This comparison is also in the range of values that have been measured. However, it is near the low end of the range for commercial work. We usually see this type of comparison when there are long path lengths of work per byte processed, query-oriented workloads with little locking or few writes, or
Part of the difference is the comparison of the HP to the Intel Box.
If you take the z900 uniprocessor IFL to be 256 MIPS, my first guess for the HP 440 MHz 2-way would have been 290 MIPS, but for the PIII 700 MHz I would have come up with 83.5 MIPS. I do these comparisons by using extrapolation
This one is often either overlooked or overblown. zSeries zealots will talk about how the I/O processors do the I/O for z while the I/O is done in the CPUs for others. While neither machine really DOES I/O in either the CPU or the SAP (IOP), there is high-priority setup code which must be done to
Nope... Some of us are too old for that, but we did play Adventure on
VM... maze of twisty turny little passages all different. No graphics,
just keystrokes and imagination.
Joe Temple
Distinguished Engineer
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301 295/6301 cell
IBM probably could build them; whether we could sell them at a price Google could afford is another issue...
Does anyone know how many of what class of servers are being used? Also, my guess is that some sort of hybrid might be the answer. That is, some of the clusters may lend themselves to
I agree. To Google, the computer that they have custom designed and programmed is their factory. It produces their product. Thus for them the development of this machine is like GM building their auto assembly line. Thought of in that light, the extra development cost is a more reasonable
Actually, in Google's case there is some assembly required which goes
beyond unpacking the server and sticking it in a rack, unless they now have
a supplier for their nodes. I had the impression that they assembled the
nodes themselves. In any case the lead time may be longer, leading to bigger
Utilization of something as large as Google is an interesting issue.
Given the structure that Samuel explained: they use a distributed
processing model where a master server(s) sends jobs to any available node
that has enough available CPU cycles. The ability to utilize all those
processors will
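The master/worker dispatch model Samuel described might be sketched like this; the node names, job names, and CPU figures are invented for illustration:

```python
# Toy sketch of the distributed model above: a master hands each job
# to whichever node currently has the most free CPU. Greedy placement
# only; a real scheduler also tracks job completion and node failures.

def dispatch(jobs, free_cpu):
    """Assign each job's CPU cost to the node with the most headroom."""
    placement = {}
    for job, cost in jobs:
        node = max(free_cpu, key=free_cpu.get)  # most idle node wins
        free_cpu[node] -= cost                  # its headroom shrinks
        placement[job] = node
    return placement

nodes = {"n1": 0.9, "n2": 0.5, "n3": 0.7}       # fraction of CPU free
print(dispatch([("crawl", 0.3), ("index", 0.4), ("query", 0.2)], nodes))
```

With enough independent jobs in flight, this kind of scheme keeps the whole farm busy, which is exactly why utilization across so many small nodes can stay high.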
The shared L2 reduces the penalty for those situations when you can't avoid dispatching on a new engine, that is, when the system is very busy. This is one of the reasons for the difference in utilization. As the machine gets busier, other machines are forced into L2-to-L2 or remote L3-to-local L1
Yes, tagging works, but you will find that System z holds a lot more translations in a two-tiered TLB and has tagging as well. Thus System z does not have to retranslate as often.
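The tagging point can be illustrated with a toy model of the general technique (address-space-tagged TLB entries); this is a sketch of the concept, not of the actual System z TLB structure:

```python
# Toy model of a tagged TLB: entries carry an address-space tag, so a
# context switch does not flush them and translations survive across
# switches, meaning fewer retranslations (page-table walks).

class TaggedTLB:
    def __init__(self):
        self.entries = {}          # (asid, vpage) -> cached translation
        self.walks = 0             # page-table walks = TLB misses

    def translate(self, asid, vpage):
        key = (asid, vpage)
        if key not in self.entries:
            self.walks += 1        # miss: walk the tables, then cache
            self.entries[key] = ("frame", asid, vpage)
        return self.entries[key]

tlb = TaggedTLB()
for _ in range(3):                 # alternate between two address spaces
    tlb.translate(1, 0x40000)
    tlb.translate(2, 0x40000)
print(tlb.walks)                   # → 2 walks despite 6 lookups
```

An untagged TLB flushed on every switch would walk the tables on all six lookups here; holding more translations (a larger, two-tiered TLB) extends the same benefit across more address spaces.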
Joe Temple
Distinguished Engineer
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301 295/6301
On Iau, 2006-05-18 at 09:51 -0400, Joseph Temple wrote:
Yes tagging works, but you will find that the system z holds a lot
Can someone tell me how to leave the list while I am on vacation? I want to set an away message and don't want to flood the list with junk.
Joe Temple
IBM Distinguished Engineer
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301 295/6301 cell 914-706-5211
Home office 845-338-1448