John S. Giltner, Jr. wrote:
R.S. wrote:
< Snipped >
Disclaimer: I'm neither a mainframe nor a z/Linux enemy. I'm just trying to
discuss the real costs or savings of using IFLs. Sometimes hardware RAS
can justify it, sometimes not, especially when the Linux images are used
for non-mission-critical applications. Nowadays PC servers can be
really densely packed, can use the same (or much cheaper) SAN storage,
and can be well administered en masse. 100 Linux images mean 100 operating
systems and 100 root users; whether that's on 100 PCs or a single VM doesn't matter.
It can matter: 100 PCs need at least 100 network connections and
100 connections to the SAN. Normally you would have two network
connections and two SAN connections for each server, for redundancy
and performance. However, with z/VM you could share 2, 4, or even 6 LAN
and SAN connections. The SAN and networking requirements are much
smaller on the mainframe.
Even with blades, where you get a maximum of 14 servers in a BladeCenter, you
will still need more network connections and SAN connections.
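The port arithmetic behind that point can be sketched quickly. The per-server
counts and the number of shared z/VM connections below are the figures quoted
in this thread; treat them as illustrative, not as a sizing guide:

```python
# Illustrative port-count comparison using the numbers from this thread.
servers = 100
lan_per_server = 2   # redundant network connections per PC server
san_per_server = 2   # redundant SAN connections per PC server

# Dedicated connections for a farm of physical PCs:
pc_ports = servers * (lan_per_server + san_per_server)

# Under z/VM, all 100 Linux guests can share a handful of
# LAN (OSA) and SAN (FICON/FCP) connections; assume 6 of each.
zvm_ports = 6 + 6

print(pc_ports)   # 400 dedicated connections for the PC farm
print(zvm_ports)  # 12 shared connections under z/VM
```

The cabling gap (400 vs. roughly a dozen) is the mainframe-side argument; the dollar side is discussed below.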
Again, you compare port counts, not dollars. A single LAN card costs
about $5 (retail price, 32-bit PCI, 100BaseT Ethernet). A veeery good
3Com LAN card costs approx. $100. A two-port OSA-Express costs approx.
$20,000. That means you can buy 200 of those PC cards instead.
An FC card is more expensive: approx. $1,000. That's still much less than
a FICON card (a FICON card's price is AFAIK comparable to an OSA's).
I don't know VM or Linux under VM well: do they really share FICON ports?
I've heard recently about NPIV; it's something like the EMIF facility for
FCP CHPIDs, available on the z9.
If you don't have FCP, then you are constrained to CKD DASD and
"mainframe" tape, and that kind of storage is the most expensive. Also the
best, but you don't always need the best.
BTW: we're talking about VM, and it's not free; AFAIK it costs approx. $25,000 per
engine. Two IFLs were assumed, so the VM price is $50,000.
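Putting the dollar side of the comparison together (all figures are the rough
approximations quoted in this post, not actual list prices):

```python
# Rough cost comparison using the approximate prices quoted above.
good_lan_card = 100     # decent 3Com PCI NIC, retail
osa_express = 20_000    # two-port OSA-Express
fc_hba = 1_000          # open-systems Fibre Channel HBA, per card
vm_per_engine = 25_000  # z/VM license, per engine (approx.)
ifl_count = 2           # number of IFLs assumed in this thread

# One OSA-Express buys this many good PC LAN cards:
print(osa_express // good_lan_card)   # 200

# z/VM licensing for the assumed two-IFL configuration:
print(vm_per_engine * ifl_count)      # 50000
```

So the shared-port savings have to be weighed against a significant fixed cost on the mainframe side before the dollars favor consolidation.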
Not only that, but you would need a few more people to manage 100
physical PCs than you would to manage 100 virtual machines. You still
need the same number of people to manage the actual OS environment.
I work with open-systems folks who have over 100 servers installed:
blades, 1U's, bigger machines. The only extra activity is installing and
mounting new hardware; the rest of the job is systems administration. As
was said, whether the computers are physical or virtual doesn't matter.
<biting mode on>
Hint: One can mention footprint and power consumption as very important
factors (and traditionally forget about dollars). <g>
<biting mode off>
--
Radoslaw Skorupka
Lodz, Poland
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html