Re: snmp process monitoring....

2006-08-10 Thread Joseph Temple
Can someone tell me how to leave the list while I am on vacation?  I want to
set an away message and don't want to flood the list with junk.


Joe Temple
IBM Distinguished Engineer
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Who's been reading our list...

2006-05-18 Thread Joseph Temple
The shared L2 reduces the penalty for those situations when you can't avoid
dispatching on a new engine, that is, when the system is very busy.  This
is one of the reasons for the difference in utilization.  As the machine
gets busier, other machines are forced into L2-L2 or remote L3-local L1
(victim cache) transfers, which have a high penalty.  In z the migration is
from shared L2 to L1.  The less affinity scheduling delays dispatching,
the more the system behaves like a multiple server single queue system,
which is the optimum case.  The more scheduling delays dispatching, the
more the system behaves like multiple single server single queue systems,
which will not perform well if the load has skew or high variability.
Thus if the affinities are hardened (often done in skewless benchmark runs),
skew will cause some cpus to overload while others are idle.  If there is
no affinity then there are more cache migrations.  In between, there is
a combination of the first and second cases, and it is a matter of what the
migration penalty is versus the queueing penalty for affinity scheduling.  Of
course this is yet another reason that relative capacity is workload
dependent.
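
A minimal sketch of the queueing point, using the textbook M/M/c formulas
and made-up rates: one shared queue feeding c engines (the multiple server
single queue case) versus the same load hard-partitioned into c
single-server queues.

    from math import factorial

    def mmc_wait(lam, mu, c):
        # mean queueing delay for one shared queue feeding c servers (M/M/c)
        a = lam / mu                      # offered load in server-equivalents
        rho = a / c                       # per-server utilization
        erlang_c = (a**c / factorial(c)) / (
            (1 - rho) * sum(a**k / factorial(k) for k in range(c))
            + a**c / factorial(c))        # probability a job must wait
        return erlang_c / (c * mu - lam)

    def mm1_wait(lam, mu):
        # mean queueing delay for a single-server queue (M/M/1)
        return (lam / mu) / (mu - lam)

    lam, mu, c = 3.2, 1.0, 4              # 80% busy on 4 engines
    print("shared queue :", round(mmc_wait(lam, mu, c), 2))   # ~0.75
    print("split queues :", round(mm1_wait(lam / c, mu), 2))  # ~4.0

At the same 80% utilization the hard-partitioned case waits several times
longer, and the gap widens as skew and variability grow.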

Another aspect of z's common L2 is that it always holds a copy of the data
in the L1s attached to it, and therefore snooping is avoided.  High end
System x systems (X460 class) do this by keeping a shadow directory that
covers the on-chip caches.


Joe Temple
Distinguished Engineer
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794



From: Alan Altmark/Endicott/[EMAIL PROTECTED]
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 05/18/2006 08:55 AM
Subject: Re: Who's been reading our list...
Please respond to: Linux on 390 Port

On Thursday, 05/18/2006 at 10:03 ZE2, Martin Schwidefsky
[EMAIL PROTECTED] wrote:
> The cache is a different story. Mainframes have the advantage of a
> shared level 2 cache compared to x86. If a process migrates from one
> processor to another, the cache lines of the process just have to be
> loaded from level 2 cache to level 1 cache again before they can be
> accessed. On x86 it goes over memory.

The cache designs on the mainframe change from generation to generation to
deal with more work, changes in the relationship of CPU speed to memory
speed, and more CPUs.  You want the benefits of cache, but you want to
minimize the serialization/synchronization effects on the processors. This
is why we do our best to dispatch a virtual machine on the same CPU as was
used in the previous time slice.  The relationship between the CPUs and a
particular cache is not always equal, but is always the best if you use
the same CPU again.

Alan Altmark
z/VM Development
IBM Endicott

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Fw: [LINUX-390] Who's been reading our list...

2006-05-18 Thread Joseph Temple
Yes tagging works, but you will find that System z holds a lot more
translations in a two-tiered TLB and has tagging as well.  Thus System z
does not have to retranslate as often.
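
A toy model of the tagging point, not the actual hardware: an
address-space-tagged TLB keeps translations across context switches, while
an untagged one must be flushed every time the address space changes.

    class TLB:
        def __init__(self, tagged):
            self.tagged = tagged
            self.entries, self.misses, self.asid = set(), 0, None

        def switch(self, asid):
            self.asid = asid
            if not self.tagged:
                self.entries.clear()          # untagged: flush on switch

        def access(self, vpage):
            key = (self.asid, vpage) if self.tagged else vpage
            if key not in self.entries:
                self.misses += 1              # walk the tables, then cache
                self.entries.add(key)

    for tagged in (False, True):
        tlb = TLB(tagged)
        for _ in range(100):                  # ping-pong between two processes
            for asid in (1, 2):
                tlb.switch(asid)
                for page in range(8):         # each touches the same 8 pages
                    tlb.access(page)
        print("tagged" if tagged else "untagged", "misses:", tlb.misses)

The untagged TLB retranslates the same 8 pages on every switch (1600
misses here); the tagged one translates each page once per address space
(16 misses).  A larger, two-tiered TLB pushes the tagged case further by
holding more translations before capacity evictions start.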


Joe Temple
Distinguished Engineer
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794



From: Alan Cox <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 05/18/2006 07:23 AM
Subject: Re: Fw: [LINUX-390] Who's been reading our list...
Please respond to: Linux on 390 Port

On Iau, 2006-05-18 at 10:03 +0200, Martin Schwidefsky wrote:
> On x86 it is the translation-lookaside-buffers (TLBs) which get flushed
> each time the control register 1 is loaded. Switching between threads is

[%cr3 not 1 but that's by the way]

> fine because they use the same translation table. Switching between
> processes has a performance penalty. On mainframes the TLBs are not
> flushed for any context switch.

Not on a current AMD x86 processor. The Opteron and AMD64 processor line
uses tags on the TLB entries so that it can avoid this without the
underlying OS being changed.

> The cache is a different story. Mainframes have the advantage of a
> shared level 2 cache compared to x86. If a process migrates from one
> processor to another, the cache lines of the process just have to be
> loaded from level 2 cache to level 1 cache again before they can be
> accessed. On x86 it goes over memory.

Long ago yes, but even with private L2 caches (which have advantages too)
you can send lines from cache to cache directly if your bus protocol is
sane.

Alan

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Fw: [LINUX-390] Who's been reading our list...

2006-05-18 Thread Joseph Temple

Joe Temple
Distinguished Engineer
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794


   
From: Alan Cox <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 05/18/2006 01:17 PM
Subject: Re: Fw: [LINUX-390] Who's been reading our list...
Please respond to: Linux on 390 Port

On Iau, 2006-05-18 at 09:51 -0400, Joseph Temple wrote:
> Yes tagging works, but you will find that the system z holds a lot more
> translations in a two tiered TLB and has tagging as well.  Thus the
> System z does not have to retranslate as often.

How many tags does the Z have in the TLBs?

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
attachment: pic05446.jpg
attachment: pic18015.jpg


Re: Google out of capacity?

2006-05-05 Thread Joseph Temple
I agree.  To Google, the computer that they have custom designed and
programmed is their factory.  It produces their product.  Thus for them
the development of this machine is like GM building their auto assembly
line.  Thought of in that light, the extra development cost is a more
reasonable expense than it would be for a business that uses computers as
adjuncts to the actual manufacture of product.  The point is that the
customization has a direct rather than an indirect lever on their ability
to produce.

They get to $1000 per server because they don't even buy blades, or rack
optimized Intel servers; they buy parts.  Of course they have to amortize
their development expense and assembly cost, which adds people dollars.
The development is amortized over a volume of thousands.  Thus they have a
superniche.  Once they have their infrastructure in place for search it is
much easier for them to build applications to work there than to customize
new infrastructure around other applications.  Finally, they save on
development because they use long standing research in parallel processing,
which has been popular because of its low entry cost for at least 20 years.

The questions become:
Does the infrastructure break at some level of scale?  It seems that some
cracks are appearing.
Are they too inefficient with environmentals?  This changes with oil at
~$75 per barrel.
Are they using too many people?  This one would be hard to extract.  The
people they use to run this thing are their factory workers, which
businesses tend to view differently than IT operations.  In another
solution at least some of the same people would likely have other tasks,
but would not necessarily go away.
Finally there is the question: can anything else do the job?  The answer
to this is not clear either.  My guess is that a successful replacement
would either be a standard yet still very distributed solution like blades,
or a niche technology solution with lots of cores per chip, or a heavily
virtualized system like z Linux, or some hybrid.  Simply replacing their
nodes with fewer larger nodes will probably not do it.


Joe Temple
Distinguished Engineer
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794



From: shogunx <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 05/05/2006 02:02 AM
Subject: Re: Google out of capacity?
Please respond to: Linux on 390 Port

On Fri, 5 May 2006, Vic Cross wrote:

> On 05/05/2006, at 5:53am, Fargusson.Alan wrote:
>
>> A long time ago I read that they did TCO studies, and found it less
>> costly to buy lots of low cost hardware over buying fewer high cost
>> systems.
>
> A long time ago is the point.  When I read similar, the server
> count was around 8000 -- it would seem that they've grown
> considerably beyond that now.  I doubt they've updated their TCO
> analysis accordingly...  :)

The real question is could a z9 outprocess existing clusters, outscale
them at the same rate, and do so in such a fashion as to make it
attractive to Google to abandon its own in-house OS?

> Cheers,
> Vic Cross
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390


sleekfreak pirate broadcast
http://sleekfreak.ath.cx:81/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Google out of capacity?

2006-05-05 Thread Joseph Temple
Actually, in Google's case there is some assembly required which goes
beyond unpacking the server and sticking it in a rack, unless they now have
a supplier for their nodes.  I had the impression that they assembled the
nodes themselves.  In any case the lead time may be longer, leading to
bigger inventory on hand, particularly if they throw away broken stuff
rather than repair it.


Joe Temple
Distinguished Engineer
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794



From: Tim Hare <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 05/05/2006 08:40 AM
Subject: Re: Google out of capacity?
Please respond to: Linux on 390 Port

> I am not sure I remember this right, but I think they said they pay less
> than $1,000 per server.  They can buy a lot of $1,000 systems for the
> cost of one z9.


I'd say that's true in one sense - without the TCO calculations. But in
another sense - how much does it cost for a virtual Linux server? Assuming
(I know, I know.. ) their z/VM box has some capacity left, it costs very
little to create one more Linux image, and you don't have to wait for
someone to bring one from the storage room / warehouse.  Note that I'm
assuming Google buys extra hardware in advance, installing it as needed,
rather than waiting until they need it to buy it. If they wait until the
last minute to buy it, then you have to figure in delivery / shipping
costs.


Tim Hare
Senior Systems Programmer
Florida Department of Transportation
(850) 414-4209

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Google out of capacity?

2006-05-05 Thread Joseph Temple
Utilization of something as large as Google is an interesting issue.
Given the structure that Samuel explained, they use a distributed
processing model where a master server(s) sends jobs to any available node
that has enough available CPU cycles.  The ability to utilize all those
processors will depend on how well the master can load balance the work on
the nodes.  This usually depends on the variability in the size and shape
of the work to be done, the length (delay) of the feedback path that
delivers the "I have cycles" info to the master, and the affinities of
various types of work to particular pools of capacity.  Generally speaking,
high variability, a long feedback path and affinity scheduling will lead to
reduced utilization.  Round robin routing will not have the feedback and
affinity drivers, but will have some servers clogged while others are idle,
given enough workload variability.  In general the more servers there are
in the cluster, the stronger these effects and the lower the utilization.
On the other hand, if the master is able to break the work into relatively
uniform packages and distribute them, the utilization can be quite high.
It depends on the load.
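
A toy dispatch simulation of that skew effect, with made-up job sizes:
round robin (no feedback) versus always sending to the least loaded node
(ideal, zero-delay feedback).

    import random

    def imbalance(least_loaded, jobs=5000, workers=16, seed=1):
        random.seed(seed)
        load = [0.0] * workers
        for j in range(jobs):
            size = random.paretovariate(1.5)      # heavy-tailed job sizes
            if least_loaded:
                i = load.index(min(load))         # perfect "I have cycles" info
            else:
                i = j % workers                   # round robin, no feedback
            load[i] += size
        return max(load) / (sum(load) / workers)  # busiest node vs average

    print("round robin  :", round(imbalance(False), 2))
    print("least loaded :", round(imbalance(True), 2))

With heavy-tailed sizes, round robin leaves its busiest node well above the
average while others sit idle; least-loaded routing stays near 1.0.
Delayed or stale feedback lands somewhere in between.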


Joe Temple
Distinguished Engineer
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794



From: Little, Chris <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 05/05/2006 01:46 PM
Subject: Re: Google out of capacity?
Please respond to: Linux on 390 Port

If the servers are running at a certain percentage of capacity, how would
virtualization help?  z or otherwise?

> -Original Message-
> From: Alan Altmark [mailto:[EMAIL PROTECTED]
> Sent: Friday, May 05, 2006 12:43 PM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: Google out of capacity?
>
> On Friday, 05/05/2006 at 10:09 EST, Rich Smrcina [EMAIL PROTECTED]
> wrote:
>> I think it's a safe bet that many 54-way z9's would be required, and
>> lots of fully loaded DS8000's, with the new 4G Ficon (insert tool man
>> growl here).
>>
>> It would be a sweet coup if there were any interest.
>
> Picture it:  The year is 2050.  The public's demand for information has
> continued to grow unabated, and there are 9+B people on the planet
> (source: US Census Bureau).  The landfills are full of broken servers
> and the deserts are covered with solar collectors to fuel the server
> farms.  These servers are located underground as there is no more space
> above ground.
>
> The heat from all the servers has altered the climate and raised the
> ocean levels.  All of our homes are on stilts.
>
> Or, they could choose some form of virtualization and save us all.  (He
> Who Must Not Be Annoyed says that z is not the answer for Google.)
>
> -- Chuckie
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions, send
> email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Google out of capacity?

2006-05-04 Thread Joseph Temple
IBM probably could build them; whether we could sell them at a price Google
could afford is another issue...

Does anyone know how many of what class of servers are being used?  Also,
my guess is that some sort of hybrid might be the answer.  That is, some of
the clusters may lend themselves to virtualization more than others,
yielding variable leverage for different platforms.


Joe Temple
Distinguished Engineer
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794



From: Alan Cox <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 05/04/2006 11:25 AM
Subject: Re: Google out of capacity?
Please respond to: Linux on 390 Port

On Iau, 2006-05-04 at 09:31 -0500, Dave Jones wrote:
> > But today is special - the CEO has admitted that the grand
> > distributed PC approach hasn't worked.
> >
> > http://www.webmasterworld.com/forum30/34147.htm

Funny but that doesn't seem to be what the original referenced material
is about.

> > Huge machine crisis?  Is there a zSeries salesman in the room?

I doubt IBM could build enough zSeries boxes in a decade to even match
the existing infrastructure 8)

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: New z9 models

2006-04-28 Thread Joseph Temple
Nope... Some of us are too old for that, but we did play Adventure on
VM... maze of twisty turny little passages all different.   No graphics,
just keystrokes and imagination.


Joe Temple
Distinguished Engineer
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794



From: Alan Altmark/Endicott/[EMAIL PROTECTED]
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 04/28/2006 12:51 PM
Subject: Re: New z9 models
Please respond to: Linux on 390 Port

On Friday, 04/28/2006 at 10:05 EST, Tom Duerbusch
[EMAIL PROTECTED] wrote:
> I know we are all virtual...really...
>
> But in what universe can you have a 20 sided dice?  The pursuit of a 20
> sided dice would make the pursuit of free beer look as easy as
> turning on the tap (for those of us that don't work at a brewery).

Dude!  You never played Dungeons & Dragons?  Two 20-sided dice
(icosahedra) give you a number between 00 and 99 with an even probability
distribution.  Picture at http://en.wikipedia.org/wiki/Icosahedron.
Octahedra (8-sided) and dodecahedra (12-sided) are used as well.  We even
occasionally used the more famous hexahedron.

I thought all computer geeks played D&D   :-)

Alan Altmark
z/VM Development
IBM Endicott

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Business Week Article

2005-06-30 Thread Joseph Temple
Does the Intel rack contain a switch like a blade center has?  If so, this
is a classic example of Infrastructure Simplification, and a lot of the
savings would be in network infrastructure and admin costs.


Joe Temple
Executive Architect
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794



From: Meanor, Tim <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 06/30/2005 12:12 PM
Subject: Re: Business Week Article
Please respond to: Linux on 390 Port

Well, in all fairness, they moved 60 web sites to zLinux, but also
consolidated 500 applications from 560 x86 servers to a rack of 70 x86
servers running VMware.  That entire migration is what is expected to
save $10 million, not just the move to zLinux.

-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
Neale Ferguson
Sent: Wednesday, June 29, 2005 9:42 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Business Week Article


BusinessWeek article tells the story of First National Bank of Nebraska
consolidating 30 (Sun) Unix servers onto one mainframe. The shift
boosted hardware-utilization rates to about 70% - and Kucera expects to
save $10 million over five years. 'It's revolutionary,' he says. 'It's
really good stuff. It paid for itself in a year.'

http://www.businessweek.com/magazine/content/05_25/b3938622.htm

--
For LINUX-390 subscribe / signoff / archive access instructions, send
email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Business Week Article

2005-06-30 Thread Joseph Temple
The values that Uriel uses are a reasonable starting point when you don't
know either the utilization or details about the application.  It is not
precisely the same as we use for sizing at IBM, but there is more to this
than our guess about the z.  The basis we use would make the Sun machine
have a different capacity than the AIX machine, so we are talking about an
extrapolation on an extrapolation.  In any case data can make the answer go
in either direction, and Uriel's estimate is not extreme.
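
Uriel's rule of thumb, restated as arithmetic so the extrapolation is
explicit.  The helper name and the 30-CPU input are illustrative (the
BusinessWeek story mentions 30 Sun servers, treated here as one CPU each):

    def ifls_needed(risc_cpus, cpus_per_ifl=4.0, under_vm=True):
        # 4 x CPU (Sun 1.2 GHz / AIX 1.0 GHz) ~ 1 z890 IFL,
        # minus 10% if the Linux images run under z/VM
        effective = cpus_per_ifl * (0.9 if under_vm else 1.0)
        return risc_cpus / effective

    print(ifls_needed(30))   # 30 RISC CPUs -> ~8.3 IFLs under z/VM

As the surrounding discussion says, data can move the answer in either
direction; this is a starting point, not a sizing.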


Joe Temple
Executive Architect
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794


   
From: Meanor, Tim <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 06/30/2005 12:10 PM
Subject: Re: Business Week Article
Please respond to: Linux on 390 Port

What sorts of tests did you run to arrive at this formula?  I'm just
curious because it seems like a bit much to compare a single IFL to a 4xCPU
AIX or Sun box.

-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Uriel
Carrasquilla
Sent: Thursday, June 30, 2005 11:15 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Business Week Article


John:
The formula we use is: 4 x CPU (Sun 1.2 GHz or AIX 1.0 GHz) is the same as
one IFL under z890 running zLinux under LPAR.  If zVM, we take away 10%
power for the zVM overhead.  This is a rough estimate and gets refined upon
the real circumstances.  Products licensed based on number of CPUs get
penalized, but we spend more for the H/W (IBM is happy).  There are other
benefits that are more important, such as savings in head counts that I
would rather not get into but you can figure it out.

Regards,

[EMAIL PROTECTED]
NCCI
Boca Raton, Florida
561.893.2415
greetings / avec mes meilleures salutations / Cordialmente
mit freundlichen Grüßen / Med vänlig hälsning



From: McKown, John <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 06/30/2005 09:35 AM
Subject: Re: Business Week Article
Please respond to: Linux on 390 Port


> -Original Message-
> From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
> Pieter Harder
> Sent: Thursday, June 30, 2005 8:18 AM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: Business Week Article
>
>> Run Oracle on the z/890 as a super server,
>> (We just eliminated z/VM and Linux on the zSeries because nobody
>> knew what to do with them.)
>
> John,
>
> Just curious, on what are they planning to run Oracle then? zOS? You
> guys must have gobs of money
>
> Best regards,
> Pieter Harder

We don't have gobs of money. That scenario is what is being touted by
certain managers. Personally, I don't buy it, but what do I know? They say
that Oracle is licensed by the number of processors in the box. And that a
license for z/OS on two processors (regardless of which processor it is -
z800, z890, z990, all the same cost) costs the same as a license for
Windows running on two processors. This seems silly to me, but I cannot
refute it.

Also, at one time, we did test Oracle on Linux under z/VM (back when we had
those products). The Oracle DBAs (who ran the test themselves) came to the
conclusion that a z800 single IFL and 1 Gb of memory did not perform as
well as a 10Gb Sun system with 10 processors. Well, duh!

--
John McKown
Senior Systems Programmer
UICI Insurance Center
Information Technology


Re: Business Week Article

2005-06-30 Thread Joseph Temple
This comparison is also in the range of values that have been measured.
However, it is near the low end of the range for commercial work.  We
usually see this type of comparison when there are long pathlengths of work
per byte processed, query oriented workloads with little locking or writes,
or database loads that exploit private caches and don't benefit from
zSeries shared cache.  A relatively small cache working set and high
sustained utilization of the Intel processor will drive this kind of
result.  It can also occur when comparing a fully utilized dedicated Intel
machine to an IFL in an LPAR with a lot of other work going on.  It is not
a center value to use when you don't understand the application behavior or
production utilization.


Joe Temple
Executive Architect
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794


   
From: Little, Chris <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 06/30/2005 01:16 PM
Subject: Re: Business Week Article
Please respond to: Linux on 390 Port

Were those Solaris/AIX servers underutilized?  By a lot?

Certainly workloads don't compare across platforms.  I agree with that, but
for database workloads I was told (informally) to assume a single z/900 IFL
at a Pentium III 700mhz.  Within reason, it seems correct.  We are probably
getting better than that, but certainly nowhere near what you are seeing.

Low CPU/high IO (webserving, maybe?) might translate better to linux on
zseries, I don't know.

-Original Message-
From: Uriel Carrasquilla [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 30, 2005 12:12 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Business Week Article

Chris:
I suspect not all workloads are the same.  What we are finding out is that
loads are taking place much faster.  That was a large percentage of our
usage before.  We don't have any users signed on and we only run the zLinux
image to service requests.  It works for us and no one is complaining.

Regards,

[EMAIL PROTECTED]
NCCI
Boca Raton, Florida
561.893.2415
greetings / avec mes meilleures salutations / Cordialmente mit freundlichen
Grüßen / Med vänlig hälsning




From: Little, Chris <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 06/30/2005 11:48 AM
Subject: Re: Business Week Article
Please respond to: Linux on 390 Port



Are you getting that much performance out of an IFL? Four times the
performance of a single RISC unix server?

I say that because the Oracle database we moved from an HP server with 2
PA-RISC 8600 (440mhz) cpus consumed a little more than one IFL(100%-120% on
a two IFL LPAR).

This was about in line with what IBM recommended.

-Original Message-
From: Uriel Carrasquilla [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 30, 2005 10:15 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Business Week Article

John:
The formula we use is: 4 x CPU (Sun 1.2 GHz or AIX 1.0 GHz) is the same as
one IFL under z890 running zLinux under LPAR.
If zVM, we take away 10% power for the zVM overhead.
This is a rough estimate and gets refined upon the real circumstances.
Products licensed based on number of CPUs get penalized, but we spend more
for the H/W (IBM is happy).
There are other benefits that are more important, such as savings in head
counts that I would rather not get into but you can figure it out.

Regards,

[EMAIL PROTECTED]
NCCI
Boca Raton, Florida
561.893.2415
greetings / avec mes meilleures salutations / Cordialmente mit freundlichen
Grüßen / 

Re: Business Week Article

2005-06-30 Thread Joseph Temple
Part of the difference is the comparison of the HP to the Intel box.
If you take the z900 uniprocessor IFL to be 256 MIPS, my first guess for the
HP 440 MHz 2-way would have been 290 MIPS, but for the PIII 700 MHz I would
have come up with 83.5 MIPS.  I do these comparisons by using extrapolation
data we purchase from Ideas International or using internal IBM
extrapolations (they yield similar results but cover different machines),
and then apply a middle of the road conversion factor and a 70% utilization
factor.  Anyway, the 290 MIPS would chew up an IFL and then some.  I
can't share the conversion factors with you because of agreements IBM has
with benchmark councils and with Ideas.  In any case I think your
estimator is over-valuing the PIII 700 MHz engine.
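
The same comparison restated as arithmetic, using only the MIPS figures
quoted above (the conversion factors behind them are not public):

    IFL_MIPS = 256.0                         # z900 uniprocessor IFL
    candidates = {
        "HP PA-RISC 440 MHz 2-way": 290.0,
        "Pentium III 700 MHz":       83.5,
    }
    for name, mips in candidates.items():
        print(f"{name}: {mips / IFL_MIPS:.2f} IFL")

The HP 2-way comes out at about 1.13 IFLs, which is why it would chew up
an IFL and then some, while the PIII 700 MHz is roughly a third of one.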


Joe Temple
Executive Architect
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794


   
From: Little, Chris <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 06/30/2005 01:23 PM
Subject: Re: Business Week Article
Please respond to: Linux on 390 Port

I would guess that the SQL is not very optimized.  Also, the app pushes a
lot of math that could be done client side to the database.

700 may be somewhat low, but the HPUX server had two 440mhz PA-RISC 8?00
cpu's.  When we moved it, it ate all of one IFL, plus a bit.  It's bloated
since then and has been chewing on a good portion of both recently.

-Original Message-
From: Rich Smrcina [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 30, 2005 12:15 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Business Week Article

700Mhz?  That seems pretty low.

Little, Chris wrote:
 Were those Solaris/AIX servers underutilized?  By a lot?

 Certainly workloads don't compare across platforms.  I agree with
 that, but for database workloads I was told (informally) to assume a
 single z/900 IFL at a Pentium III 700mhz.  Within reason, it seems
 correct.  We are probably getting better than that, but certainly nowhere
near what you are seeing.

 Low CPU/high IO (webserving, maybe?) might translate better to linux
 on zseries, I don't know.

 -Original Message-
 From: Uriel Carrasquilla [mailto:[EMAIL PROTECTED]
 Sent: Thursday, June 30, 2005 12:12 PM
 To: LINUX-390@VM.MARIST.EDU
 Subject: Re: Business Week Article

 Chris:
 I suspect not all workloads are the same.  What we are finding out is
 that loads are taking place much faster.  That was a large percentage
 of our usage before.  We don't have any users signed on and we only
 run the zLinux image to service requests.  It works for us and no one is
complaining.

 Regards,

 [EMAIL PROTECTED]
 NCCI
 Boca Raton, Florida
 561.893.2415
 greetings / avec mes meilleures salutations / Cordialmente mit
 freundlichen Grüßen / Med vänlig hälsning




   Little, Chris

   [EMAIL PROTECTED]To:
 LINUX-390@VM.MARIST.EDU
   hs.org  cc:

   Sent by: Linux onSubject:  Re: Business
Week
 Article
   390 Port

   [EMAIL PROTECTED]

   IST.EDU





   06/30/2005 11:48

   AM

   Please respond to

   Linux on 390 Port









 Are you getting that much performance out of an IFL? Four times the
 performance of a single RISC unix server?

 I say that because the Oracle database we moved from an HP server with
 2 PA-RISC 8600 (440mhz) cpus consumed a little more than one
 IFL(100%-120% on a two IFL LPAR).

 This was about in line with what IBM recommended.

 -Original Message-
 From: Uriel Carrasquilla [mailto:[EMAIL PROTECTED]
 Sent: Thursday, June 30, 2005 10:15 AM
 To: LINUX-390@VM.MARIST.EDU
 Subject: Re: Business Week 

Re: Business Week Article

2005-06-30 Thread Joseph Temple
This one is often either overlooked or overblown.  zSeries zealots will
talk about how the I/O processors do the I/O for z while I/O is done in the
CPUs for others.  While neither machine really DOES I/O in either the CPU
or SAP (IOP), there is high-priority setup code which must be done to
start and finish I/Os.  In the zSeries this code is run in the SAPs.  The
real benefit of doing this is that the code is always run on an engine
which is running at low utilization, reducing the wait time significantly.
Also, the I/O processing context does not muck up the L1 cache of the CPs.
To avoid excessive wait time and cache working set problems it is sometimes
(but not always) necessary to run UNIX and Intel machines at lower
utilization.  When this happens there is a clear advantage to the zSeries.

The other side of the coin is that the zSeries cannot use the SAP cycles
for CPU work, which for any given infrastructure reduces the number of CPUs
in the machine.  For this reason the SAPs do not count toward the machine's
capacity or pricing beyond their inclusion in the infrastructure of the
machine.
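
A rough sketch of why the lightly loaded engine matters, using a simple
M/M/1 view (our simplification, not how the machine is actually modeled):
queueing delay grows like u/(1-u), so the same setup code waits far longer
on a busy CP than on a lightly loaded SAP.

    def relative_wait(u):
        # M/M/1 queueing delay, in units of the service time
        return u / (1.0 - u)

    for u in (0.3, 0.6, 0.9):
        print(f"engine at {u:.0%}: wait ~ {relative_wait(u):.1f}x service time")

At 30% busy the wait is a fraction of the service time; at 90% busy it is
nine times the service time.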


Joe Temple
Executive Architect
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794


   
From: Uriel Carrasquilla <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 06/30/2005 01:33 PM
Subject: Re: Business Week Article
Please respond to: Linux on 390 Port

Chris:
The z890 uses a separate CPU for all of the I/O's.  Something we don't have
on the aix/sun platform.
The application does lots of I/O to get the data and the server had tons of
cache which was of little value for our purposes.
We trimmed memory to just what was needed, mostly PGA plus 128 MB for the
OS plus a bonus 128 MB to feel good.
Now, instead of CPU cycles on my main processor managing large cache, the
work is shifted to the I/O CPU.
We run aix/sun at about 60% to keep customers happy.  Now I can run busier
and we meet our requirements.
We are also running Apache, SMB and NFS.  We collect a lot of data on a
daily basis and load into our dbms.

Regards,

[EMAIL PROTECTED]
NCCI
Boca Raton, Florida
561.893.2415
greetings / avec mes meilleures salutations / Cordialmente
mit freundlichen Grüßen / Med vänlig hälsning



From: Little, Chris <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 06/30/2005 01:16 PM
Subject: Re: Business Week Article
Please respond to: Linux on 390 Port

Were those Solaris/AIX servers underutilized?  By a lot?

Certainly workloads don't compare across platforms.  I agree with that, but
for database workloads I was told (informally) to assume a single z/900 IFL
at a Pentium III 700mhz.  Within reason, it seems correct.  We are probably
getting better than that, but certainly nowhere near what you are seeing.

Low CPU/high IO (webserving, maybe?) might translate better to linux on
zseries, I don't know.

-Original Message-
From: Uriel Carrasquilla [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 30, 2005 12:12 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Business Week Article

Chris:
I suspect not all workloads are the same.  What we are finding out is that
loads are taking place much faster.  That was a large percentage of our
usage before.  We don't have any users signed on and we only run the zLinux
image to service requests.  It works for us and no one is complaining.

Regards,

[EMAIL PROTECTED]
NCCI
Boca Raton, 

Re: CPU Comparison

2005-03-18 Thread Joseph Temple
Larry,
the conversion rate for  Intel platforms to zSeries ranges from 3 MHz per
MIPS to 30 MHz per MIPS depending on how much data you are pushing through
the caches and out much serialization (context switches, locks, etc.).
This is typically lower for the web servers and higher for the backend.
Because zSeries capacity is almost always a mixed workload capacity the
utilization peak and utilization variation on an interval by interval basis
comes into play.  This is because you have to configure each blade for its
peak, but you configure zSeries for the composite peak which is almost
always less or can be managed to be much less than the sum of the peaks.
Because some workloads tend to saturate blades at lower utilization  than
zSeries the composite utilization peak is usually quite low.
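
The composite-peak point as arithmetic, with made-up 15-minute utilization
profiles (fractions of one blade) for three workloads:

    profiles = [
        [0.10, 0.60, 0.15, 0.20],   # web tier peaks in interval 2
        [0.30, 0.10, 0.55, 0.15],   # backend peaks in interval 3
        [0.05, 0.15, 0.10, 0.50],   # batch peaks in interval 4
    ]
    sum_of_peaks = sum(max(p) for p in profiles)            # 1.65
    peak_of_sum = max(sum(col) for col in zip(*profiles))   # 0.85
    print(sum_of_peaks, peak_of_sum)

Each blade must be configured for its own peak (1.65 engines' worth in
total here), while the consolidated machine is configured for the peak of
the sum (0.85), because the peaks do not line up.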


Joe Temple
Executive Architect
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794



From: Davis, Larry <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 03/18/2005 11:42 AM
Subject: CPU Comparison
Please respond to: Linux on 390 Port


I know that this has been asked in many different ways, but has anyone
actually put together a comparison chart or list of CPU speeds for
zSeries, IA32, IA64, and other processors?

I have a department that wants to have a group of dedicated blades
assigned to specific clients.  We want to show a cost-to-performance
ratio for purchasing blades versus scaling a zSeries system.

The application uses Java in the backend to run the queries and
retrieve the data, and web servers to interface to a client and present
the results.


Larry Davis,
Senior Systems Engineer
Nielsen Media Research
813-366-2380



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Why Zseries

2005-02-13 Thread Joseph Temple
Ralph,
How often do you change releases?  If you go to Microsoft, do your
programmers understand that you are going into a "keep up or die" culture?
Yes, the application programming is fashionable and hardware is cheap, but
Microsoft does not provide long term support with regard to release levels.
The shift can be costly, either in revamping how you do everything, or in
finding yourself unsupported because Microsoft moves faster than you are
able to keep up.


Joe Temple
Executive Architect
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Why Zseries

2005-02-10 Thread Joseph Temple
Let me add to what Joe added.
When you combine the low utilization that many (not all) dense rack
mounted servers run at, it becomes even easier for z to win the
throughput/kVA race.  Even if we don't include non-production servers and
look at clusters for a single application, the peak composite utilization
of distributed solutions is very often under 25%.  Adding non-production
servers, mixing applications and doing workload management with VM can
increase utilization leverage on relative capacity well above the 4 to 1
indicated by the production data.


Joe Temple
Executive Architect
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794



From: Joe Poole <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 02/10/2005 01:46 PM
Subject: Re: Why Zseries
Please respond to: Linux on 390 Port


On Thursday 10 February 2005 01:32 pm, Levy, Alan wrote:
> David - thanks. This is what I was looking for.

Let me add to what Dr. David said with a few metrics.  The densely
packed servers of today draw between .1 and .7 kVA of power.  If you
take .3 kVA as an average, it takes only 15 servers to match the 4.5
kVA of a z900.  Add more CPUs and internal disk to the servers, and
it's not uncommon to find a single server rack that exceeds the power
requirements of the zSeries with a Shark thrown in for good measure.

I've heard that it's become difficult to add the required power to some
of the buildings in NYC to support the server induced power load.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Why Zseries

2005-02-10 Thread Joseph Temple
Speaking of kVA, has anyone else heard about anyone hooking up the old
machine floor plumbing to chillers in order to get cool enough air on the
floor to cool dense racks or blade centers?  Just wondering if what I
heard is a rumour, a fact, or a mainframe geek joke :-)


Joe Temple
Executive Architect
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794



From: Joe Poole <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 02/10/2005 01:46 PM
Subject: Re: Why Zseries
Please respond to: Linux on 390 Port


On Thursday 10 February 2005 01:32 pm, Levy, Alan wrote:
> David - thanks. This is what I was looking for.

Let me add to what Dr. David said with a few metrics.  The densely
packed servers of today draw between .1 and .7 kVA of power.  If you
take .3 kVA as an average, it takes only 15 servers to match the 4.5
kVA of a z900.  Add more CPUs and internal disk to the servers, and
it's not uncommon to find a single server rack that exceeds the power
requirements of the zSeries with a Shark thrown in for good measure.

I've heard that it's become difficult to add the required power to some
of the buildings in NYC to support the server induced power load.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Why Zseries

2005-02-10 Thread Joseph Temple
Kielek is correct, but consider this.

1. Given the availability of the application, there is a small difference
between Linux on z and Linux on Intel simply because the zSeries
reliability takes the hardware multiplier on availability closer to 1.

2. Yes, we can configure the Intel with redundant servers to mitigate the
reliability difference.

  a. In the case of stateful applications such failovers are still small
outages, because they take measurable time (up to minutes or even hours in
thorny situations).  In this case the reduction of the number of outages by
better hardware availability still helps.
  b. In the case of stateless applications you need less redundant
hardware on z than on Intel because you can effectively run the z at higher
utilization than the Intel machine for many applications.  This is because
many workloads cause the Intel boxes to saturate at relatively low
utilization.  When this happens it takes more than n+1 boxes to deliver
"n+1" availability unless the Intel machine is run at very low
utilization.  Since the zSeries solution is more likely to be CPU bound,
the utilization at which n+1 can really be n+1 is higher.  (A small worked
sketch follows this note.)
  c. Since the failing component is most likely the application or the
Linux, there is the opportunity to set the Linux on z farm up in such a
way that the remaining Linux images get the capacity that the failed Linux
had.  In other words a redundant Linux/application instance is provided but
still less redundant capacity is required.  This depends on being able to
detect the failure and restart the failed Linux with reduced resources
until it is ready to accept the load.
That is, the failed Linux gets hard capped and the remaining soft capped
images grab the resulting whitespace.
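
The small worked sketch promised in 2b, with assumed numbers: how many
boxes deliver "n+1" capacity when a box saturates at utilization u_max.

    import math

    def boxes_needed(load_in_engines, u_max):
        n = math.ceil(load_in_engines / u_max)   # boxes to carry the load
        return n + 1                             # plus one spare

    print(boxes_needed(8, 0.85))   # saturates high, z-like: 11 boxes
    print(boxes_needed(8, 0.40))   # saturates low: 21 boxes

The load is the same; the platform that can be run closer to saturation
needs far less redundant capacity to keep one spare's worth of headroom.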


Joe Temple
Executive Architect
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794



From: Kielek, Samuel <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Date: 02/10/2005 01:46 PM
Subject: Re: Why Zseries
Please respond to: Linux on 390 Port


It is important to also understand that Linux is not capable (at least
today) of directly exploiting many of those hardware benefits,
especially in terms of the mainframe's RAS features.  That is to say,
there can be instances where the mainframe is up and chugging along, VM
is doing just fine, but Linux is dead in the water, for example.

Unfortunately the original question is simply too vague.  Comparing Linux
on zSeries to x86, pSeries, SPARC, etc. (if that was the question) is as
much a philosophical exercise as it is technical.  What really needs to
be focused on is the task that you wish to run, how it works and what it
needs to perform its job(s).  Then we can evaluate the pros and cons of
running that task on each of the OS/architecture combinations.

-Sam

-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
Adam Thornton
Sent: Thursday, February 10, 2005 12:39 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Why Zseries


On Feb 10, 2005, at 10:58 AM, Levy, Alan wrote:

> Actually, I probably phrased my question wrong.
>
> What I was looking for is why choose Zseries Linux over ANY OTHER
> operating system's Linux ?

We're getting closer, but:

zSeries is an architecture, not an operating system, and Linux doesn't
generally run *on* an operating system[0], it *is* an operating system.

That said, this looks like Question 2, if you substitute
"architecture's" for "operating system's".  In which case: hardware
reliability and fault tolerance, excellent sustained I/O capacity,
potential for in-the-box really-high-speed interconnect with (usually
z/OS) big databases in another LPAR, humongous consolidation
opportunities if you have lightly-loaded guests, and incredibly rapid
deployment of machines for testing and development.

Adam

[0] Unless run in a virtualization environment, like z/VM or VMware or
Virtual PC, or Hercules, which is an emulation environment running
under some OS.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: need to compare apples to oranges (HP Unix CPU to zVM IFL CPU with Oracle)

2004-07-13 Thread Joseph Temple




Ken,
Richard is on the right track here.  Short of a Size390 sizing, I can tell
you that the range of relative capacity against a z900 is quite broad.  The
actual result will depend on what kind of workload is being done (query
only, some updates, heavy transactional), the cache working set size (how
much stress the workload puts on the hardware caches), and the usage (what
is the peak utilization, and does the CPU utilization for the various DBs
peak simultaneously?).  Size390 will send you a questionnaire which will
allow Techline to assess the workload and utilization conversion factors
for your case.  Your IBM rep should be able to put you in touch with an
FTSS (field techie) who can either put you in touch with Techline or run a
quicksizer tool to give you a first guess.  If you don't have an IBM rep,
email [EMAIL PROTECTED] and let me know where you are located.


Joe Temple
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



From: richard truett <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: [EMAIL PROTECTED]
Date: 07/13/2004 11:21 AM
Subject: Re: need to compare apples to oranges (HP Unix CPU to zVM IFL CPU with Oracle)
Please respond to: Linux on 390 Port






Ken, IBM has a couple of modeling tools available to size the workload on
zLinux and IFLs/memory.  If you have a local IBM rep or business partner
rep, you may want to ask about the Size390 or New Workload sizing that IBM
Techline performs.  This service is no cost and can assist in getting an
estimate of the z900 requirements.



If you do not have these contacts, let me know and we can discuss off-list.




---Original Message---

From: Linux on 390 Port
Date: 07/13/04 10:14:48
To: [EMAIL PROTECTED]
Subject: need to compare apples to oranges (HP Unix CPU to zVM IFL CPU with
Oracle)

Hi,

We are looking at a pilot project to test an Oracle database running on
Linux/zVM.  Currently we have about five applications that run on various
HP Unix servers.  Each of these applications connects to its own Oracle
instance.  Each instance is about 300GB, so we have 1.5TB for the
databases.  Our test would be to move the five Oracle instances to a
Linux/zVM server running on an IFL on a z900.  We will have one 300GB copy
of the database, and each of the five instances would appear to have their
own copy since the updates for each instance will be intercepted and
written to a private area.

The Unix group says that the Oracle instances consume between two and
three CPUs on a HP Superdome 750Mhz box.  The Project Office wants to know
how that consumption would compare to a z900 IFL.  We said that we really
need to perform the pilot to get the numbers, but they said they really
need the numbers before we can do the pilot.  Does anyone know how to
compare the CPUs between the two platforms?

The Project people are also worried that the VM overhead will result in
slow response times.  We can try and perform a test via a standard script
on each box.  Does anyone have experience with the performance to be
gained/lost between the two platforms?

On the Oracle side, if I had a database of 300GB with an instance name of
ORA1, and I want to change the instance name to ORA2, how many records
need to be changed?  If the instance name is connected to every record,
then this project will have trouble since we are trying to share the 300GB
base with multiple instances.  We would use the I/O intercept software to
write the changes to a private area, and if it needs to update 300GB of
data, then it is not a feasible solution.  The DBA group is dubious of
this concept (read project), so we need to demonstrate that it will work.

We do not expect to save money in the hardware costs of this project.  The
saving should come from the flexibility to deploy new images, and the
quicker turnaround for the database restores with the I/O intercept
software.  However, if we are going to need six IFLs to run this DB, then
it is unlikely they will let us proceed no matter how much flexibility we
could gain.

Any information would be appreciated.

Thanks,

Ken Vance
Amadeus

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit

Re: Performance with Multiple CPUs

2004-06-22 Thread Joseph Temple
I think the answer is that zSeries Linux scales to more than 4
processors.  This is obviously workload dependent.  Not sure what you
mean by "negative": if you mean that 4 processors get less work done than
3, you probably won't see that on zSeries.  If you mean that you get less
than 4X the uniprocessor performance, you will see that on most workloads
on virtually all machines.  If you mean that you see less throughput on
Linux 4-ways than on Windows 4-ways, you may or may not see it on zLinux.
This will depend on whether the z hardware overcomes whatever software
advantage Windows has.  (The two effects will mitigate each other but are
not really related, so there is no single answer.)
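
Amdahl's law makes the "less than 4X" case concrete; the serial fraction
s is workload dependent, and 0.10 here is just an assumption:

    def speedup(n, s=0.10):
        # Amdahl's law: serial fraction s, n processors
        return 1.0 / (s + (1.0 - s) / n)

    for n in (1, 2, 3, 4):
        print(n, "CPUs ->", round(speedup(n), 2), "x")

With s = 0.10 a 4-way delivers about 3.1x a uniprocessor.  Note the curve
flattens but never turns down; actual negative scaling comes from effects
outside this model, such as lock contention and cache-line bouncing.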


Joe Temple
Sr. Consulting Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



From: Walter Wojcik <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: [EMAIL PROTECTED]
Date: 06/22/2004 11:58 AM
Subject: Performance with Multiple CPUs
Please respond to: Linux on 390 Port






Does anyone have experience running zLinux in an LPAR which has multiple
CPUs defined?  We have experienced negative performance characteristics
with Intel Linux versions (Red Hat, SUSE) when the Intel machine had 4
processors.  We were wondering if the same performance degradations appear
on the mainframe.

Walter E. Wojcik
[EMAIL PROTECTED]
phone: (781) 301-2000
fax:  (781) 301-2001

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Second Wind for Big Iron

2004-03-24 Thread Joseph Temple
Actually, Blue Gene is neither a new sort of machine nor a mainframe.  It
is essentially a very large Scalable Parallel (from around 1992) machine
using Linux and Intel instead of AIX and Power.  Thus it is a new sort of
SP.  I think using it as the example of big iron causes the article to
miss the point.  Linux on z, USS, Java on z, and z with tcp/ip networking
are the types of things that make the new mainframe.  System structure,
RAS and LPARs on p690 are much better non-z examples than Blue Gene.  They
did get the size right though; Blue Gene is Big Iron.


Joe Temple
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



From: Phil Payne <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: [EMAIL PROTECTED]
Date: 03/23/2004 04:02 AM
Subject: Re: Second Wind for Big Iron
Please respond to: Linux on 390 Port






http://www.businessweek.com/magazine/content/04_13/b3876068.htm

Absolute twaddle.

System/360 was the world's first open system.  Principles of Operation and
the Channel OEMI
manuals permitted plug compatibility - OS/360 was public domain and the
source code was freely
available from IBM for the cost of the tapes.

Open source was, if you like, the reason for System/360's initial success.
And, IMO, OCO is
one of the reasons for its decline.

And the article clearly implies that Blue Gene is the new sort of
mainframe, with the old
sort headed for extinction.  Hmm.  Not according to what I'm hearing about
IBM's plans.

--
  Phil Payne
  http://www.isham-research.com
  +44 7785 302 803

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: Perpetuating Myths about the zSeries

2003-10-31 Thread Joseph Temple
The best way to understand is to take measurements of the running
production systems.  There are many tools for doing this, and you may
already be gathering at least the data that you would need.  The way to
look at utilization is to plot the utilization on intervals for a peak
period, day or week depending on the data you get.  You should be looking
for intervals around 15 minutes in length.  (Over 30 minutes smooths things
too much, and under 5 minutes tends to visually (and mathematically) hide
the troughs.)  This is because we use the peak to do our sizings, and as
intervals get shorter the peak approaches 100% regardless of the
utilization.  (Utilization is a statistic; at the cycle level the machine
is either busy or it's not.  For short enough intervals the utilization data
will either be zero or 100%.)
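
To see the interval effect concretely, here is a minimal Python sketch (my
own illustration; the per-minute busy flags are synthetic stand-ins for real
monitor data): the shorter the interval, the closer the observed peak gets
to 100%.

  import numpy as np

  # One day of per-minute busy flags for a machine that is ~30% utilized.
  rng = np.random.default_rng(0)
  busy = (rng.random(24 * 60) < 0.30).astype(float)

  for mins in (1, 5, 15, 30):
      n = len(busy) // mins * mins
      peak = busy[:n].reshape(-1, mins).mean(axis=1).max()
      print(f"{mins:2d}-minute intervals: peak utilization {peak:.0%}")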

Anyway,  you then stack these graphs in a stacked bar or area chart and
then note the composite peak.  This will tell you what your aggregate
utilization is. If all the servers peak at the same time, then the peak
utilization on each one has to be low to get  favorable consolidation
effects.
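
A sketch of the stacking step with hypothetical numbers; note that the
composite peak is the max of the per-interval sums, not the sum of the
individual peaks:

  import numpy as np

  # Rows = servers, columns = 15-minute intervals (% busy); made-up data.
  utils = np.array([[20, 35, 60, 30],
                    [10, 15, 25, 20],
                    [40, 30, 20, 25]])
  composite = utils.sum(axis=0)    # stacked utilization per interval
  print("composite peak:", composite.max())                   # 105
  print("sum of individual peaks:", utils.max(axis=1).sum())  # 125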

The second thing to do is to look at the saturation curve for the servers
in question.  Gather throughput data on the same intervals as your
utilization data.  (This can be network data or packet rates if you don't
have anything else.)  Plot the throughput v utilization for each server.
You are looking to see if the curve bends over or saturates at higher
utilization.  I like to plot linear, power and logarithmic trends through
the data.  Usually the power curve has the best fit, but the linear and
logarithmic curves provide bounds with which to compare it.  The more
linear the data is, the more cpu intense the application is, and therefore
the lower the utilization has to be to get a good conversion ratio.  If the
curve is bent over, the average utilization is low, and the workload peaks
at an off time, you have an ideal candidate.
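
For anyone who wants to try the curve fitting without a spreadsheet, here is
a minimal Python/numpy sketch (the sample points are hypothetical; the power
and logarithmic fits are done in log space, and R-squared stands in for the
eyeball judgment of which trend fits best):

  import numpy as np

  # Hypothetical per-interval samples: utilization (%) and throughput.
  util = np.array([10, 20, 30, 40, 50, 60, 70, 80.0])
  tput = np.array([120, 230, 330, 420, 490, 545, 580, 600.0])

  def r2(y, fit):
      return 1 - np.sum((y - fit) ** 2) / np.sum((y - y.mean()) ** 2)

  a, b = np.polyfit(util, tput, 1)                     # linear: a*u + b
  k, logc = np.polyfit(np.log(util), np.log(tput), 1)  # power: c*u**k
  m, d = np.polyfit(np.log(util), tput, 1)             # log: m*ln(u) + d

  print("linear R2:", r2(tput, a * util + b), "intercept:", b)
  print("power  R2:", r2(tput, np.exp(logc) * util ** k), "exponent:", k)
  print("log    R2:", r2(tput, m * np.log(util) + d))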

You can also start gathering I/O and context switch rates.  High rates here
usually indicate non cpu intense or mixed applications.

IBM has people who can help you get the data and analyze it.




Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



Eric Sammons [EMAIL PROTECTED]
Sent by: Linux on 390 Port [EMAIL PROTECTED]
10/31/2003 07:55 AM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject: Re: Perpetuating Myths about the zSeries






What about memory intensive?  And how do you gauge the CPU intensive
applications?  For example, we are planning to migrate some of our Solaris
(SPARC) applications off of SPARC and into the z/VM Linux world.  If I am
looking at candidates for this migration I see systems (SPARC) with 10 -
30 percent utilization.  What happens when I decide these workloads are
good candidates with their low cpu usage on the SPARC platform but then
install them into the Z environment and find out that they now have a cpu
usage of 80 - 90 percent?  Is this possible?  Is there a good way to judge
what applications on a given platform might be best suited for migration?
Right now I am recommending that any candidate first do a QA of their
application in the Z environment prior to doing the full and final
migration.

thanks!
Eric Sammons
(804)697-3925
FRIT - Infrastructure Engineering





Post, Mark K [EMAIL PROTECTED]
Sent by: Linux on 390 Port [EMAIL PROTECTED]
10/30/2003 04:49 PM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject:Re: Perpetuating Myths about the zSeries

My answer was, and still is (and likely always will be) avoid any
application that is CPU intensive.  Yes, the zSeries has gotten faster,
but
so has Intel.  The price-performance curve for CPU intensive work still
favors Intel.  I've seen nothing in the IBM announcements that would lead
me
to change any of the recommendations I've been making for the last 3
years.
Unless and until the price-performance curve for zSeries matches that of
Intel (or comes a couple of orders of magnitude closer), I will continue
to
make the same recommendations.


Mark Post

-Original Message-
From: Jim Sibley [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 29, 2003 7:31 PM
To: [EMAIL PROTECTED]
Subject: Re: Perpetuating Myths about the zSeries


-snip-
Linux on all sorts of platforms was just a gleam in
someone's eye 5 years ago.  It started getting pushed
on the zSeries 3 years ago and the software and
hardware have made great strides in the last 3 years.

So CGI may not be appropriate today. So what is there
we said was not appropriate 2 or 3 years ago that may
be appropriate today on Linux zSeries?




Re: Perpetuating Myths about the zSeries

2003-10-31 Thread Joseph Temple
I don't know of any plans to make RMF-PM available on other platforms.   I
will look around, but it will be a week or so; others may be able to help
sooner.


Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



David Boyes [EMAIL PROTECTED]
Sent by: Linux on 390 Port [EMAIL PROTECTED]
10/31/2003 11:48 AM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject: Re: Perpetuating Myths about the zSeries






 What about memory intensive?  And how do you gauge the CPU intensive
 applications?  For example we are planning to migrate some of
 our Solaris
 (SPARC) applications off of SPARC and into the z/VM Linux
 world.

Something that occurred to me (and since Joe Temple is kindly answering
questions): are there any plans to make the RMF-PM data collection agent
available on platforms other than zLinux? While it's not the best tool
available, it'd be handy to be able to do before/after comparisons with the
same tools reporting to FCON/Perf Toolkit.

 Right now I am recommending that any candidate first do a QA of their
 application in the Z environment prior to doing the full and final
 migration.

Never a bad idea, particularly given your employer...8-)

-- db


Re: Perpetuating Myths about the zSeries

2003-10-29 Thread Joseph Temple
Actually, while I/O is the classic example of why processor speed is not
everything,  you don't have to move that far beyond the processor itself to
show this.   Note that the various types of servers have different size and
structures of L1, L2, L3,  caches and memory interfaces.   Also  note that
the memory latency and bandwidth varies from machine to machine.   Finally
note that the faster the processor the higher the latencies are in terms of
number of cycles.   Benchmarks are sensitive to this, but not uniformly.
For example TPC-C got a big boost from 8MB L2 caches,  but SPECint is
almost totally insensitive to L2 size.  Real work has a tendency to show
even more variability.  That is, there are more daemons, etc., chewing up
cache space causing more misses, and the code is not as extensively tuned.
This tide will float all boats (even if you run TPC-C for a living, don't
expect the tuned rates if you also are running security, monitoring,
accounting, etc.).  However, the differences in memory hierarchy will cause
some machines to be impacted more than others.   There is really no way
except experience to tell how an application treats the memory hierarchy in
this regard.  As a result defining relative capacity with a single metric
is not possible, and any benchmark or metric that is suggested for this
will not match the real world, which will exhibit more dynamic
variability both with time and workload.

One thing is for sure though:  If a cache is blown, during the miss time the
processor is 100% busy as measured by normal means, and the throughput is
zero.  You can see if this is happening to a significant extent by
plotting throughput v processor utilization.  (Throughput can come from the
application or be estimated by network data rate.)  Draw linear, power
and logarithmic trends through the data.  If the best trend is linear and
the line intercepts the vertical axis near or below the origin, then there
is little or no saturation and little pressure on the memory hierarchy.
If the best trend is logarithmic, then there is heavy saturation, and chances
are that the workload is blowing the caches as load is applied.  If the
power curve fits the best, the answer is somewhere in the middle.  As the
exponent of the power curve approaches 1 the workload is exhibiting less
saturation, because the trend becomes linear.  The more saturation exhibited,
the higher the utilization at which zLinux consolidation is viable.  This
is because the raw conversion factor is typically better for more saturated
workloads.
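
Read as a decision rule, the paragraph above boils down to roughly the
following sketch (the 0.9 threshold is my own hypothetical placeholder; the
fitted values would come from a trend-fitting step like the one earlier in
this thread):

  def classify(best_trend, linear_intercept=None, power_exponent=None):
      # Rules of thumb paraphrased from the discussion above.
      if best_trend == "linear" and linear_intercept is not None \
              and linear_intercept <= 0:
          return "little saturation; little memory-hierarchy pressure"
      if best_trend == "logarithmic":
          return "heavy saturation; likely blowing caches as load applies"
      if best_trend == "power":
          if power_exponent is not None and power_exponent > 0.9:
              return "mild saturation (trend is nearly linear)"
          return "moderate saturation; somewhere in the middle"
      return "inconclusive; gather more intervals"

  print(classify("power", power_exponent=0.6))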

My point is there is no such thing as a metric which defines the relative
capacity of various servers because of the differences in their memory
hierarchies.
Processor speed is an indicator, but it is insufficient to do any real
comparisons.   The processor speeds are what they are. You can measure them
with MHz, BOGOMIPS, SPECint, or hello world and a stop watch.  You still
will not understand the relative capacity for any particular piece of work
once you do.  This is because work always causes the CPU speed differences
to be mitigated by other bottlenecks such as waiting for memory or I/O and
the impact varies with workload, time, and machine architecture. The rules
developed early on were based on some intuitive understanding that heavy
computational work is less impacted by non processor bottlenecks, whereas
transactional workloads with lots of locking and data sharing are more
impacted.  Since most other machines come from a heritage which emphasizes
the former and zSeries heritage is firmly in the latter, the intuitive
choices are generally correct and don't change with normal evolutionary
changes in processor speed.


Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



Alan Altmark/Endicott/[EMAIL PROTECTED]
Sent by: Linux on 390 Port [EMAIL PROTECTED]
10/29/2003 02:00 PM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject: Re: Perpetuating Myths about the zSeries






On Wednesday, 10/29/2003 at 10:08 PST, Jim Sibley
[EMAIL PROTECTED] wrote:

 I disagree about your I/O wait is independent of
 processor speed because that is only one component of
 response time. The faster processor can initiate I/O's
 faster and can service interrupts faster, thus
 reducing internal queue wait times.

The faster processor can start more I/Os per second than a slower
processor.   A faster I/O processor can move data off the channel into
memory faster (and vice versa).  The speed of the channel itself and the
device does not change.  Yes, you CAN change it, but it is a function (and
price) that is independent of CPU selection.

 As far as transaction transit time, that's a
 

Re: ECKD-less system?

2003-09-24 Thread Joseph Temple
The first card in the 1130 deck actually did the entire cold start.  You
could operate the machine after, but had to know the control panel.  Of
course it's always easier with some software.  I had a friend who could
cold start an 1130 faster than you could take the card off the top of the
reader, put it in the hopper and press start.  He did this entirely by
manipulating the switches on the front panel.  Talk about arcane skills...


Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



Tom Duerbusch [EMAIL PROTECTED]
Sent by: Linux on 390 Port [EMAIL PROTECTED]
09/24/2003 12:38 PM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject: Re: ECKD-less system?






In LPAR, couldn't you still IPL from a tape drive?

Now, if there was only an Escon attached reader.

Just think of the card deck and resulting floor sort

I remember IPL'ing an IBM 1130 via cards.  Just one handful.  The SuSE
Linux 8.1 starter system is over 20,000 cards (2 cases, 5 boxes per
case).  If you actually had the entire system on cards, could you ever,
actually, IPL?

Tom Duerbusch
THD Consulting

Arnd Bergmann wrote:

On Wednesday 24 September 2003 16:50, Adam Thornton wrote:


I haven't tried this, so I might be totally smoking crack, but:

Is there any reason you couldn't build the FCP driver into the kernel,
and then IPL from an NSS.  You'd need an ECKD-based system to build the
inital NSS but then once it was done I don't see why you couldn't IPL
from NSS (no initrd, of course) and mount all your disks as SCSI.



Correct. You could also IPL from VM virtual reader devices or from a
shared dasd device when using the VM load parameter patch. Note that
your IPL disk needs to have the kernel on it but not necessarily
the root file system.

Of course, none of this works on LPAR installations and VM always
needs DASD devices to run on itself.

Arnd





Re: InfoWorld Article - Microsoft Benchmarks Step Up Linux Assault

2003-09-05 Thread Joseph Temple
Also, if you move a bunch of Intel servers onto the faster Intel server, it
probably still is being run at low or very low utilization, and how
many customers would want to put that many eggs in a basket that has an
Intel server's reliability?


Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



Jim Sibley [EMAIL PROTECTED]
Sent by: Linux on 390 Port [EMAIL PROTECTED]
09/05/2003 01:59 PM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject: Re: InfoWorld Article - Microsoft Benchmarks Step Up Linux Assault






I wonder who in IBM made this statement!

IBM has no plans to begin running Linux performance
benchmarks for the mainframe, the IBM spokesman said.
We typically don't run industry standard performance
benchmarks for any software on the mainframe, he
said. The value of the mainframe is based on its
ability to securely run multiple applications on a
single platform as opposed to purely seeking
outstanding performance of one application on a single
platform.

He obviously does not understand the power of the
mainframe. IBM has always talked about throughput
rather than performance - elapsed time of any given
transaction and throughput.

The real questions are:

Could the zSeries give a the same or higher
transaction rate?

Hang a terabyte on a zSeries 116 or 216, give it
64 GB memory, and see how much work you can put
through it. (The zSeries 116 or 216 is in the range of
the same clock rate).

How many Microsoft/Intel's would it take to do the
same work! And what would the cost/transaction be?

=
Jim Sibley
Implementor of Linux on zSeries in the beautiful Silicon Valley

Computers are useless. They can only give answers. - Pablo Picasso

__
Do you Yahoo!?
Yahoo! SiteBuilder - Free, easy-to-use web site design software
http://sitebuilder.yahoo.com


Re: Whither consolidation and what then?

2003-07-30 Thread Joseph Temple
Tzafrir Cohen wrote:
And if you had all of those Office machines as separate images on a
giant T-Rex, those IT folks would still have to manually patch each and
every image separately, and spend 15 minutes on that.

As for cloning, patch distribution etc.: those solutions
are exactly solutions (?) to the management problem. As you mentioned
in the beginning, just cramming many images on one mainframe won't make
it go away.

One of the interesting things about total cost is that
centralization/consolidation surfaces costs that are otherwise hidden.  I
certainly don't need to have admin functions for my laptop nickel and
diming my time, but it never shows up on the books.  The inexpensiveness of
PC's is one of the myths of IT that go right along with the inflated cpu
speed myths about the mainframe...

VM does provide an opportunity to do somewhat better with cloning solutions
by allowing the files which make an image to be shared.  These files need
only be updated once for a set of images.  My view is that there is a
trade-off between the cost to consolidate images and the cost to manage images.
This says that for each application/situation there is an optimum amount of
image consolidation, which is driven by the ability of zLinux to scale up, the
ability of the application to scale up, and the skill and mindset of the
application integration programmers, v the ability of the system
programmers/sysadmins to exploit VM and Linux cloning tools for
automation.  Basically, you are looking for the minimum number of images
that can be run at the availability you want, and won't cost you the farm
to get to.  I would also expect the number of images per unit of work to
shrink over time as this optimum level is sought.  This should be true
whether you are working with blades or VM/Linux or some combination of
both.


Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794


Re: Performance question

2003-06-24 Thread Joseph Temple
While reliability is not the same as availability, availability depends on
it.  The less reliable the component pieces are, the more redundancy is
required to be available.  How much redundancy you need depends on what
your target availability is.


Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



Alan Cox [EMAIL PROTECTED]
Sent by: Linux on 390 Port [EMAIL PROTECTED]
06/23/2003 03:25 PM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject: Re: Performance question






On Llu, 2003-06-23 at 20:22, Peter Flass wrote:
 Actually, if availability is critical, run on a mainframe.  Everything
 on a mainframe is duplicated (or more), and nearly everything is
 hot-swappable.  If one power supply fails, the other takes over.  If a
 DASD fails, RAID recovers without a hiccup.  You probably need two OSA
 adapters attached to two phone lines, preferably to two central offices,
 but that's about all.

Mainframe is a very expensive way to get availability, and a very poor
one at the extreme end. It's a very good way of getting reliability, and
the two are quite different.


Re: Processor Comparisons

2003-03-27 Thread Joseph Temple
It takes somewhere between 1 and 4 900 MHz SPARC IIIs to do the work of a
z900T (around 900 MHz) engine, DEPENDING on the workload.  This is at equal
utilization on both machines.  Use 1.5 to 2 if you don't know what the
workload is.
We use the following rules for utilization:
  Benchmark: 100%
  Head to Head Production (Single Boxes): 60%
  Other machine duplexed for availability: 40%
  Hot Backup for other machine: 30%
  Production Server Consolidation: 25%
  Server Consolidation with Test, QA, etc.: 15%
  ISV: 10%

So a server consolidation case would be 6-8 SPARCs per Turbo.  Lots of
caveats, lots of complicating factors, but this will get you a start.  Get
your IBM rep for a sizing.  They should be able to get local FTSS
assistance for these kinds of estimates and help to get the utilization
measured, etc.
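
My reading of how the 6-8 figure falls out, as a back-of-envelope sketch in
Python (the 1.5-2 ratio and the 25% consolidation utilization are just the
rules of thumb above; nothing here is a real sizing):

  # Equal-utilization ratio divided by the utilization assumed for
  # production server consolidation gives servers per z engine.
  for ratio in (1.5, 2.0):
      print(f"{ratio / 0.25:.0f} SPARC servers per engine at ratio {ratio}")
  # -> 6 and 8, matching the 6-8 SPARCs per Turbo estimate.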



Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



Gerard Graham [EMAIL PROTECTED]
Sent by: Linux on 390 Port [EMAIL PROTECTED]
03/27/2003 10:23 AM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject: Processor Comparisons






Has anyone gotten anywhere with processor comparisons?  I am now being
asked to give estimates such as what is the IBM equivalent in MIPS to a Sun
E4500/Solaris 8 CPU box or a Sun 4200 Solaris.  I myself look at these
machines and have no idea how big or small they are.  I am not talking
utilization or what apps are running on it; I am just trying to get some
kind of grid for comparisons.  I am constantly being asked these questions,
and people are tired of the canned "it depends" answer.  It is difficult to
move forward with Linux on z/Series without some kind of ballpark
numbers.  Again, ballpark: I can't give a potential customer the "how a CPU
works" answer - they will just look away and go and buy another server.  Any
help is appreciated.


-
This message and its attachments may contain  privileged and confidential
information.  If you are not the intended recipient(s), you are prohibited
from printing, forwarding, saving or copying this email.  If you have
received this e-mail in error, please immediately notify the sender and
delete this e-mail and its attachments from your computer.


Re: Interesting perspective

2003-03-19 Thread Joseph Temple
John Summerfield writes: If the application stays up, it's more reliable...
... I'm sure that's actually true in IBM mainframes too.

I read recently a new disk drive from IBM, I guess in many respects a
successor to RAID.

A disk failed? Leave it there, swap in a spare.

Zero maintenance because failed components are swapped out of service,
spares swapped in.

Are the individual drives especially reliable? No. Is the storage device
especially reliable? Yes. Does anyone care about the fine distinction?
No.

If the application stays up, it's more available.  If it were more reliable
you would not have to take any action (ie spend money) to repair the failed
part.  This is where the distinction is, and to the extent that the repairs
cost time and money, people care.  This is what drives the idea of
autonomic computing; the more the systems can repair themselves, the more
availability turns into reliability.  Clusters are a long way from
autonomic today.  I guess my main points were that clusters in and of
themselves do not  make reliability irrelevant,  many clusters have less
availability than we might think at first blush, and they all lose
availability as the load on them grows.  And yes this is true for all
systems, including zSeries.  But zSeries does have advantages that come
from Virtualization, the balanced machine structure, the autonomic
features built in and the high reliability of the hardware.

Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



John Summerfield [EMAIL PROTECTED]
Sent by: Linux on 390 Port [EMAIL PROTECTED]
03/19/2003 06:34 AM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject: Re: Interesting perspective






On Tue, 18 Mar 2003, Joseph Temple wrote:

 I would point out that clustering makes hardware more available, not more
 reliable.

If the application stays up, it's more reliable.

 The things actually fail more often because there is  more to
 fail,

I'm sure that's actually true in IBM mainframes too.

I read recently a new disk drive from IBM, I guess in many respects a
successor to RAID.

A disk failed? Leave it there, swap in a spare.

Zero maintenance because failed components are swapped out of service,
spares swapped in.

Are the individual drives especially reliable? No. Is the storage device
especially reliable? Yes. Does anyone care about the fine distinction?
No.


Re: Interesting perspective

2003-03-18 Thread Joseph Temple
Tzafrir Cohen said: Yes, but if you bring clustering into the game, then
suddenly cheaper hardware can become more reliable.


The author also forgets that the guests need patching as well. Having
all of them as guests on a mainframe, or as separate machines in a farm
is not all that different in that respect, because remote-management
tools are good enough for the basic tasks.

And you can still load the new software to one unused computer in the
farm, start it, and then swap-out the bad computer you want to retire.
Requires some more hardware, but the hardware is much cheaper, anyway.


A bigger problem is that there are simply more machines to patch. This
is the basic issue: machines are not patched because their admins (or
admin-replacements) don't bother. Admining a system is not a task that
requires a special admin (that should be aware of patching).



I would point out that clustering makes hardware more available, not more
reliable.  The things actually fail more often because there is more to
fail, but the user sees the cluster as available during the failures.
There are some problems with availability clustering as it is usually
done, causing the cluster to have lower (often significantly lower)
availability than it is designed to have.  The first problem is that as
the utilization rises on a cluster, the redundancy in the cluster drops;
unfortunately, so does the reliability of the components.  Systems whose
load is growing start losing availability from day one as the workload
grows.  Most folks add hardware when they need it for capacity, not when
they need it for availability.  Next, when running with one or more
redundant servers down, the probability of failure of the remaining servers
increases due to stress brought on by higher utilization.  Because of this,
n+1 availability is not usually a good enough design point.  The second
problem is that utilization must be kept quite low to maintain
redundancy unless the utilization grows linearly with load.  If the
throughput tails off or saturates as the utilization goes up, the utilization
required to maintain redundancy is lower than intuitively expected.  Most
people don't have a clue about how their workload saturates on a cluster,
let alone at what utilization they lose the redundancy required to get the
availability they desire.  Furthermore, n+2 availability is met at lower
utilizations than n+1 availability.  The third problem is that failover
time is often long enough to count as a measurable outage, particularly
when a database or shared state is involved.  As far as I know, the IBM
Parallel Sysplex with data sharing and redundant coupling facilities is
the only system that can avoid a measurable outage on a failover.  In
today's multiple-tiered systems the availability of the tiers is
multiplied, so that the availability of the whole solution is somewhat less
than the availability of the weakest tier.
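
To put rough numbers on the n+1 v n+2 point, a small Python sketch (the
per-server availability is made up, and it assumes independent failures,
which, as noted above, stressed clusters violate):

  from math import comb

  def availability(n_needed, n_total, p_up):
      # Probability that at least n_needed of n_total servers are up.
      return sum(comb(n_total, k) * p_up**k * (1 - p_up)**(n_total - k)
                 for k in range(n_needed, n_total + 1))

  # Capacity requires 4 servers; each is up 99% of the time:
  print("n+1:", availability(4, 5, 0.99))   # ~0.99902
  print("n+2:", availability(4, 6, 0.99))   # ~0.99998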

Finally, the Linux on z solution has an advantage on patching in that
multiple virtual machines can share the unpatched and patched versions.
You only have to update the shared image once and then roll the boot/IPL of
the VMs to point to the new version.  In addition, the virtual machines'
redundant capacity can be handled by letting the remaining machines have
the resources of the VM that was rolled out, which are then reclaimed on
restart.  The hardware utilization stays relatively constant because
workloads saturate zSeries machines less than other machines.  This is
because saturation comes from non-processor bottlenecks, and the zSeries
machines are more robust in supply of other resources per CPU configured.
As a result a virtual cluster will see higher redundancy at any
utilization and therefore will be more available than the equivalent
cluster, EVEN IF THE CLUSTER HARDWARE were AS RELIABLE AS zSERIES, which it
is not.


Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794


Re: URGENT! really low performance. A related question...

2003-02-21 Thread Joseph Temple
Lucius wrote: Forgive me for being dumb here, but I'd like to ask How?
If you're sharing a VM minidisk among several Linux guests, how can you
update the contents without having all of the guests brought down?

You start with 2 LPARs and 2 VMs; each has a set of shared Linux disks at
whatever level.  Each can use the full capacity of the machine (sharing
engines, channels, OSAs, etc.).  When you take one side down to upgrade it,
the other takes on the load.  You can use one VM and 2 sets of Linux disks
and share memory if constrained in that manner, but then VM outages take you
down.

The outage time is essentially a failover time, which depends on what you
do with the data.  Being an IBM z guy, I would say that to maximize uptime
the data belongs in a z/OS data sharing sysplex, where the state and
lock structures are kept in redundant coupling facilities, essentially
eliminating the failover time there, but any failover mechanism can be
used.
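
A minimal z/VM directory sketch of that arrangement (the userids, volsers
and extents are hypothetical; the point is that each guest takes a read-only
LINK to a shared gold system disk, so you patch the gold disk once and
reboot the guests one at a time while the survivors absorb the load):

  USER LNXGOLD NOLOG 64M 64M G
  * Gold image: shared kernel/system disk at the current patch level
     MDISK 0200 3390 0001 3338 LNX001 MR

  USER LINUX01 SECRET 512M 1G G
     IPL 0200
  * Read-only link to the shared gold disk; private read-write disk below
     LINK LNXGOLD 0200 0200 RR
     MDISK 0201 3390 0001 3338 LNX002 MR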

Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



Lucius, Leland Leland.Lucius@ecolab.com
Sent by: Linux on 390 Port [EMAIL PROTECTED]
02/19/2003 11:39 AM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject: Re: URGENT! really low performance. A related question...






 With disk sharing and VM, the apparent outage for maintenance
 of Linux can be virtually eliminated.

Forgive me for being dumb here, but I'd like to ask How?  If you're
sharing a VM minidisk among several Linux guests, how can you update the
contents without having all of the guests brought down?

Thanks,

Leland



Re: URGENT! really low performance. A related question...

2003-02-20 Thread Joseph Temple
Alan Cox wrote: Was it alpha or gamma emitters they got in their materials?

It was alpha particles, and it was not unique to IBM.  Basically it started
with dynamic RAM.  A passing particle drains charge from the memory cell,
causing soft errors.  ECC became mandatory for dynamic RAM when the
64K-bit chips were introduced, because the charge held was small enough
that the drainage changed the cell's state.  The IBM 8130 was the first
machine to ship with 64K-bit chips (yes, that's K) and we had to scramble to
retrofit ECC into the design.  The 4381 shipped around the same time and
had similar problems.  As things got smaller, static memory also started to
be affected, and we started to see ECC on caches as well as main storage.  I
don't know this for a fact, but I suspect Sun's L2 cache problems were
related to soft errors.

One other wrinkle: IBM's use of flip chip did make the problem more
pronounced, because the active chip area was on the side closest to the
substrate, which is an emitter.  The wire bond technique used by other
vendors mitigated but did not eliminate the problem, because the emitted
particles had to get through the whole chip to hit the memory cells.
However, the soft error rate still indicated the use of ECC, particularly as
memory got denser.


Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



Alan Cox [EMAIL PROTECTED]
Sent by: Linux on 390 Port [EMAIL PROTECTED]
02/20/2003 10:23 AM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject: Re: URGENT! really low performance. A related question...






On Thu, 2003-02-20 at 01:00, John Alvord wrote:
 And Lord protect you if the packaging accidentally contained materials
 which generated gamma rays. Another tale of woe from the IBM 1980s

Gamma seems odd, it doesn't interact much most times, now alpha emitters
I could believe. Was it alpha or gamma emitters they got in their materials
?



Re: URGENT! really low performance. A related question...

2003-02-20 Thread Joseph Temple
John Summerfield wrote:

 Why do people keep referring to the speed of light? In what I learned of
electronics, signals are carried by electrons travelling round in conductors
(and semiconductors). AFAIK electrons are quite a bit slower than photons.

Well, according to a guy named James Clerk Maxwell, light is electromagnetic
radiation.  Electrical current (the flow of charge) in a conductor is
induced by an electromagnetic wave.  At low frequency and short distance the
idea of voltage and current works fine and the wave is ignored.  As things
get faster or longer (i.e., power lines) the conductors need to be treated as
wave guides more than as conductors.  As far as the math goes, I believe
that you have to start using transmission line characteristics as soon as
the delay on the line matters.  The conductors in a modern chip are not
treated as simple wires with no impedance or delay but are modeled with
inductance, capacitance, and resistance, in much the same way that a
transmission line is modeled.  An upper bound for electromagnetic wave
speed is c, unless you get into some really hairy quantum physics
paradoxes.  (Read Schroedinger's Kittens; I forget the author's name.)  On the
other hand, practical physical limitations slow waves down.  How close to c
you get depends on the medium and practical things like the need to dampen
reflections on the line.  That is, the fastest line is useless if the signal
on it rings enough to prevent further use of the line.
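
The arithmetic makes the point: even at a healthy fraction of c, a signal
does not get far in one cycle of a fast clock.  A rough Python illustration,
assuming 0.5-0.7c effective propagation (the fractions are hypothetical):

  # Distance a signal covers in one clock cycle at a fraction of c.
  c_mm_per_ns = 299.8
  for ghz in (1.0, 3.0):
      for frac in (0.5, 0.7):
          print(f"{ghz} GHz at {frac:.0%} of c:"
                f" {c_mm_per_ns * frac / ghz:.0f} mm per cycle")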

Wow I thought I had forgotten all that stuff 30 years ago when I burned my
fields and waves book...



Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



Re: URGENT! really low performance. A related question...

2003-02-19 Thread Joseph Temple
John Summerfield wrote: That tells me you weren't current on your
maintenance...

Software currency is an issue, but there is very good reason why people
bring their systems down only once or twice a year for maintenance.  They
lose money when they do it.  The balance between having the right fixes on
and keeping the system up is an art.

Of course it is also possible in a sysplex to put maintenance on without
taking a system-wide outage.  I know of a bank that has a weekly
maintenance cycle but has kept their sysplex up for more than 5 years.
While sysplex is a z/OS thing, similar things can be done with LPAR, VM and
some relatively simple failover scripts.

There remain a few hardware and microcode updates which require that a box
be taken down, but such maintenance is relatively rare and usually is not
urgent.  Security alerts for VM and z/OS are practically nonexistent, and
it is not necessary to take down VM to do maintenance on the Linux systems.
With disk sharing and VM, the apparent outage for maintenance of Linux can
be virtually eliminated.

This is one of the key elements of TCO.


Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



John Summerfield summer@computerdatasafe.com.au
Sent by: Linux on 390 Port [EMAIL PROTECTED]
02/18/2003 07:59 PM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject: Re: URGENT! really low performance. A related question...






 I just IPL'ed the S/390 Sunday 2/9/03 it was up since we installed our
new
 MP3000 1/9/02 that's January 9, 2002. I IPLed to install
 Z/VM 4.3.0 (Scheduled Change)


That tells me you weren't current with your maintenance;-)


If you looked at the security advisories and decided they were not needed,
that's fine. However, I suspect that many people who report how long
*their*
systems have been up have neglected their maintenance.


--
Cheers
John Summerfield

Microsoft's most solid OS: http://www.geocities.com/rcwoolley/

Note: mail delivered to me is deemed to be intended for me, for my
disposition.

==
If you don't like being told you're wrong,
be right!



Re: URGENT! really low performance. A related question...

2003-02-19 Thread Joseph Temple
John Summerfield wrote: I presume, from what you say, that Java isn't all
that wonderful on zSeries? Improved CPU performance may make it so.

One cannot make such blanket statements.  Java is a language, not a
workload.  Yes, it does have characteristics that cause it to have long path
lengths.  However, it also has characteristics that trash caches,
particularly if the programmer takes OO programming seriously.  Small
caches get trashed faster than large caches, particularly when they are in
fast engines.  The balance of path length and cache misses is entirely
dependent upon the application, in any language.  Java just happens to be
less efficient on all fronts than earlier languages, but then Fortran is
less efficient than assembler.  I would argue that the slide in code
efficiency is balanced by the increase in processor speed over time for all
machines.  Relative capacity is more related to how the programmer writes
the application and how much compute v data is involved.  Long ago we used
to call the ratio of Execution to Bandwidth the E/B ratio.  This ratio
still applies: when E/B is large, the other machines will look better than
z.  When it is small, the z shines.  This is true regardless of language.


Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



Re: URGENT! really low performance. A related question...

2003-02-18 Thread Joseph Temple
Mark Darvodelsky wrote: But the question still does not appear to be
answered - why does the mainframe have to run at such a low clock speed?

The answer to your question has to do with how chip real estate is used.
In a zSeries microprocessor the primary usage of area is for large L1
caches and error detection/recovery hardware.  Basically, increases in
cache size result in decreases in clock rate.  This is because there is
more load on the critical signals.  Secondly, to date the zSeries
microprocessor pipeline does not do superscalar processing.  That is,
it finishes 1 instruction per cycle at best.  This is because it takes
considerably more work and hardware to do mainframe-style error recovery
functions when more than 1 instruction can complete in a cycle.  While
superscalar execution does not help with clock speed, it does help with
cpu-intense measurements like SPECint.

However, since the cache is larger, the zSeries will wait for memory less
often than other machines.  Metrics like SPECint and MHz ignore cache
misses, so the question becomes: how much are the caches missing?  The more
they miss, the better the zSeries looks.  This is very workload dependent.
One driver of cache misses is context switches; another is I/O.  If you
attempt to make an Intel server very busy, the cache miss rate will climb,
causing throughput to saturate, unless the work is very CPU intense and the
cache working set per transaction or per user is very small.
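
A toy model of why MHz and SPECint mislead once misses enter the picture (my
own illustration with hypothetical miss rates and penalties, not an IBM
formula): the effective instruction rate is the clock divided by (base CPI
plus misses per instruction times miss penalty in cycles).

  def effective_mips(clock_mhz, base_cpi, misses_per_instr, penalty_cycles):
      # Effective MIPS = clock / effective cycles per instruction.
      return clock_mhz / (base_cpi + misses_per_instr * penalty_cycles)

  # Fast chip with a small cache on a cache-hostile workload, v a slower
  # chip whose large cache misses less and whose miss penalty is shorter:
  print(effective_mips(2000, 1.0, 0.020, 300))  # ~286 effective MIPS
  print(effective_mips( 900, 1.0, 0.005, 150))  # ~514 effective MIPS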

The reason Robert Nix's print server debacle occurred is that IBM made
the mistake of treating Samba file/print as a single type of workload.  We
didn't understand at the time that a print server can behave like a network-
to-network protocol server.  These servers actually move very little
data through the cpu.  Such a machine has very little context switching, and
the I/O is network to network, which will actually drive very little data
through the caches.  The combination makes the workload cpu intense and, if
busy, a bad candidate for Linux/z.  By contrast, a Samba file server can be
doing enough disk-to-network I/O, which pushes more data through the caches
changing blocks to packets.  This can cause distributed servers to get I/O-
and cache-bound.  Samba can be either CPU or I/O intense, and the single
context makes the cpu-intense workloads unattractive for z, particularly if
the machines are busy.

So the answer to your question is that we could build a zSeries
microprocessor which is as fast as any other processor, but to do so
would cause us to lose the fundamental strengths in context switching, data
caching and I/O.  There is always a trade-off between speed and capacity.
zSeries favors capacity; Intel favors speed.  How much L1 cache should be
given up to increase the clock rate?  How much RAS and recovery function
should be given up to improve SPECint?  We have seen this situation
improve over time, and IBM will continue to improve its microprocessor
design, but zSeries cannot simply abandon strength in large-working-set
workloads to crank up the clock speed and/or instruction rate for workloads
with small working sets.  This is particularly true when the virtualization
and workload management which drive consolidation and mixed workloads are
dependent on the very hardware capabilities that would have to be given up.


Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794



Mark Darvodelsky Mark_Darvodelsky@royalsun.com.au
Sent by: Linux on 390 Port [EMAIL PROTECTED]
02/16/2003 08:32 PM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject: Re: URGENT! really low performance. A related question...






But the question still does not appear to be answered - why does the
mainframe have to run at such a low clock speed?

Perhaps someone with some hardware knowledge could explain it? Why can't
the clock be cranked up to be the same speed as the latest Pentium?

Most of us mainframe guys understand its inherent advantages, but as
someone has already commented, it often just doesn't wash with management
if a cheap Pentium outperforms a million-dollar mainframe.

Regards.
Mark Darvodelsky
Data Centre - Mainframe  Facilities
Royal SunAlliance Australia
Phone: +61-2-99789081
Email: [EMAIL PROTECTED]






Re: URGENT! really low performance.

2003-02-14 Thread Joseph Temple
Robert Nix wrote: But, if one image starts doing compiles or compression
of large quantities of data, or any other CPU bound task, everyone will
suffer.

Actually you have a choice.  If the compiles, etc. are relegated to a
compute server, you can make it suffer rather than everyone else; also, if
you cap the cpu given the guests, you can minimize the intensity of the
suffering when cpu-heavy tasks occur, but it will go on for a longer period
of time.  It's a matter of priorities and how you distribute work among
virtual machines.  The beauty of Linux is that the compute-intense
server can be a virtual or real machine, but it is still Linux.  In
the past such a scheme using real machines would split the work between
z/OS and Windows, which is a lot more complex.  We need to start thinking
about things like Grids of virtual and real servers.

Joe Temple
[EMAIL PROTECTED]
845-435-6301



Nix, Robert P. Nix.Robert@mayo.edu
Sent by: Linux on 390 Port [EMAIL PROTECTED]
02/13/2003 04:01 PM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject: Re: URGENT! really low performance.





Mainframes do I/O exceptionally well, but when it comes to compute bound
tasks, they do very poorly. If you think about a tar operation, the
compression is a fairly compute-intensive operation.

We're running a 9672-R56 w/ one IFL. During our initial trial, we found the
IFL to be about the same as a 300 or 400 MHz PC for compute-bound tasks. The
strength of the mainframe comes in for burst-type execution and I/O
throughput. Things like multiple web servers running in individual Linux
images. File serving. Anything where: A) The CPU isn't expected to be taxed
a great deal. and B) the CPU isn't going to be utilized for long periods of
time. This allows the CPU to be shared among a larger quantity of images,
giving all of them the impression of a dedicated box.

But, if one image starts doing compiles or compression of large quantities
of data, or any other CPU bound task, everyone will suffer.


Robert P. Nix     internet: [EMAIL PROTECTED]
Mayo Clinic       phone: 507-284-0844
RO-CE-8-857       page: 507-270-1182
200 First St. SW
Rochester, MN 55905
   Codito, Ergo Sum
In theory, theory and practice are the same,
 but in practice, theory and practice are different.


 -Original Message-
 From: Alex Leyva [SMTP:[EMAIL PROTECTED]]
 Sent: Thursday, February 13, 2003 3:10 PM
 To:   [EMAIL PROTECTED]
 Subject:  URGENT! really low performance.

 Hi all, i have a problem, we have a z800, the configuration is:
 1 cp 80 MIPS
 1 IFL
 8 Gb storage
 3 partitions:
 -os/390 2.6
 -os/390 2.6
 -z/vm 4.3
 840 gb (shark)

 the cp is dedicated to both os/390, and the ifl to z/vm, 2gb to
 both os/390, and 6 gb to z/vm.

 Redhat 7.2 as a z/vm guest:

 [root@linux1 root]# uname -a
 Linux linux1.xxx.xxx.xxx 2.4.9-38lvm #1 SMP mii feb 12 12:25:01 CST
 2003 s390 unknown
 [root@linux1 s390]# cat /proc/cpuinfo
 vendor_id   : IBM/S390
 # processors: 1
 bogomips per cpu: 630.78
 processor 0: version = FF,  identification = 02900A,  machine = 2066
 [root@linux1 s390]# cat /proc/meminfo
 total:used:free:  shared: buffers:  cached:
 Mem:  1045737472 364187648 6815498240 15532032 317743104
 Swap: 4098211840 409821184


 Default installation, the z/vm has one week installed:

 q cplevel
 z/VM Version 4 Release 3.0, service level 0201 (64-bit)
 Generated at 05/09/02 17:30:26 EST
 IPL at 02/07/03 12:13:53 EST

 when we make a tar -gzipping it- from a directory with 100Mb, we have
 that:
 -the hmc indicates that the ifl is at 99% utilization.
 -real time monitor indicates that the processor is at 99% utilization:
 | USERID %CPU %CP %EM ISEC PAG  WSS  RES   UR PGES SHARE VMSIZE
TYP,CHR,STAT |
 | LINUX1 99 .15  99  4.4 .00 100K 100K   .01  50%A 1G
VUS,QDS,DISP |
 | SYSTEM.08 .08 .00  .00 .000 5060   .0  536 . 2G SYS,
|
 | VMRTM .02 .01 .01  .63 .00  462  483   .00   3%A32M
VUS,IAB,SIMW |
 -top at the linux shows:
 30 processes: 27 sleeping, 3 running, 0 zombie, 0 stopped
 CPU states: 97.6% user,  2.3% system,  0.0% nice,  0.0% idle
 Mem:  1021228K av,  279636K used,  741592K free,   0K shrd,   14120K
buff
 Swap:  400216K av,   0K used,  400216K free  234992K
cached

 we apply some performance related commands like:
 set quickdsp linux1 on real
 set share linux1 relative 300 real
 set share linux1 absolute 50% real

 and the time went from 1m3.6s to 1m2.039s in the better case, the people
 from ibm (they are here yet) can give me an answer 

Re: VM for Intel?

2002-02-20 Thread Joseph Temple

I apologize for not following the whole thread here, but in case it has not
been mentioned, the following should be pointed out to further
differentiate z from x as far as virtualization goes:  The zSeries
architecture and hardware design contains facilities not found in the Intel
machine.  These facilities (mainly SIE and EMIF, and the related virtual
memory architectures) go a long way in reducing the overhead of
virtualization, which must be done entirely in software on Intel.
Furthermore, the small caches and relatively high aggregate memory latency
of the Intel machines mean that they suffer more from the increase in
context switching that occurs when virtualization is done.  So yes, they
can do it, but not nearly as well.

Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794
-- Forwarded by Joseph Temple/Poughkeepsie/IBM on
02/20/2002 07:57 AM ---

Jim Elliott [EMAIL PROTECTED]@VM.MARIST.EDU on 02/19/2002
08:50:15 PM

Please respond to Linux on 390 Port [EMAIL PROTECTED]

Sent by:Linux on 390 Port [EMAIL PROTECTED]


To:[EMAIL PROTECTED]
cc:
Subject:Re: VM for Intel?



 But that was my question. Since IBM and VMWare are partnering on
 this effort, would IBM have contributed any sort of functionality
 lifted from z/VM? If not, why the partnership? ...

Mark: Just as IBM supports Linux across all four of our server lines,
we also wanted to support partitioning across all four of our server
lines. With this announcement (which we previewed at LinuxWorldExpo in
NYC), we now have that function available. There is no z/VM code in
VMware's ESX server (what IBM will be shipping on selected xSeries
servers).

Regards, Jim Elliott - Linux Advocate, IBM Canada