> Space is cheap, power & cooling is not.
Space may be cheap in North and South Dakota, Wyoming, parts of Nebraska, etc., 
but not so much in the sites where mainframes thrive... Manhattan, the rest of 
NYC and its suburbs, adjoining NJ, San Francisco, Chicago, Atlanta, Dallas, 
etc.  Distributed server 'rack sprawl' takes up far more expensive floorspace 
in those costly locations than a mainframe delivering comparable computing 
power, once TCO (Total Cost of Ownership) is fully and accurately calculated.
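
As a rough illustration of how that TCO calculation has to be assembled (every 
figure and the cooling multiplier below are hypothetical placeholders, not real 
prices), a quick Python sketch:

    # Hypothetical back-of-envelope 3-year TCO sketch.  All figures below are
    # invented for illustration only; plug in real lease rates, utility rates,
    # staffing costs, and hardware quotes for your own sites.
    YEARS = 3

    def tco(floorspace_sqft, rate_per_sqft_yr, power_kw, kwh_rate,
            staff_fte, fte_cost_yr, hardware):
        """Sum floorspace, power + cooling, staff, and hardware over YEARS."""
        space = floorspace_sqft * rate_per_sqft_yr * YEARS
        # Crude assumption: cooling roughly doubles the raw power bill.
        power = power_kw * 24 * 365 * kwh_rate * 2 * YEARS
        staff = staff_fte * fte_cost_yr * YEARS
        return space + power + staff + hardware

    mainframe = tco(floorspace_sqft=60, rate_per_sqft_yr=300, power_kw=25,
                    kwh_rate=0.15, staff_fte=2, fte_cost_yr=150_000,
                    hardware=2_000_000)
    server_farm = tco(floorspace_sqft=800, rate_per_sqft_yr=300, power_kw=150,
                      kwh_rate=0.15, staff_fte=6, fte_cost_yr=150_000,
                      hardware=500 * 25_000)

    print(f"Mainframe   3-year TCO: ${mainframe:,.0f}")
    print(f"Server farm 3-year TCO: ${server_farm:,.0f}")

Swap in real numbers and the comparison may move either way; the point is that 
floorspace, power and cooling, staff, and hardware all belong in the same total.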

Also, consider the distributed server admins and other staff (cablers, SAN 
provisioners, security admins, etc.) involved every 3 years when it's time 
to upgrade to new hardware.  Or the cost of maintaining older equipment, and 
the cost of its inevitable outages, when failing to upgrade every 3 years.  
Compare that distributed server admin time to the mainframe staff requirements 
when IBM does a mainframe push-pull upgrade.  The mainframe staff is involved 
in the planning and post-upgrade validation testing, but there is usually very 
little mainframe, or Linux on z Systems, staff time required compared to 
distributed server upgrades.  The mainframe upgrade is usually accomplished in 
a 12-18 hour outage window, while typical distributed server upgrades of the 
same computing power are spread over multiple months (during which the staff 
is side-tracked from most other work).

>(contracts, personnel, politics, legacy) will play a larger role in the 
>decision of Mainframe vs COTS,
Agreed.  IMHO, corporate American senior management does not plan for success 
in timeframes much beyond the next quarterly report to Wall Street, or at most 
3-4 years from when they are employed, at which point they expect to leave on 
a golden parachute after having trimmed costs to the point that the company 
barely functions.  That mindset is even more prominent in I.T., where it has 
been going on since before I started in I.T. back in 1972, when the conflict 
was between mainframe centralization and de-centralization.  What I kept 
seeing was that when a new CIO was hired (most lasted less than 4 years), the 
new CIO simply chose the opposite of whichever centralization/de-centralization 
approach the previous CIO had put in place... thus showing the 'enlightened 
wisdom' the new CIO brought to the organization.  Three or four years after 
the new CIO was hired, typical mismanagement on multiple fronts (I.T., 
business, accounting, etc.) resulted in a new CIO being brought aboard, with 
the opposite plan put into place, ad nauseam.

Now, the typical choice is distributed vs. mainframe computing.  But a typical 
CIO (having an MBA, but usually little expertise in what we used to call 'Data 
Processing') doesn't have any deep I.T. experience upon which to make an 
independent, educated decision.  So, the new CIO listens to the existing I.T. 
staff.  Guess which side of the mainframe/distributed divide has more people 
with whom to conduct that 'R&D'?  How often does the CIO talk with mainframe 
management (given that in a well-run I.T. team, the mainframe rarely has major 
hardware or software failures)?  Answer: typically, only when big budget 
changes come up, particularly every 3 years when the mainframe (a big-ticket 
item) should be upgraded.  How often does the CIO talk with distributed server 
management?  Typically every couple of weeks, when a distributed server 
manager comes in to say something like: the business is growing, we need to 
add a new $25,000 server to support the business.  Talk to distributed server 
management every few weeks, and pretty soon it's: Hey, CIO, I have a tee time 
this weekend, or 50-yard-line tickets for a big game, want to join me?  Then, 
the next time the mainframe manager asks for a mainframe upgrade, those 
repeated 'small' $25,000 server purchases are simply forgotten in the light of 
that big mainframe upgrade expense.

Granted, both mainframe and distributed server upgrades should be budgeted, 
but one big number stands out in a red-tinted cell on a spreadsheet far more 
than many 'small' $25,000 purchases over 3 years - and the spreadsheet almost 
certainly won't include any prior-year $25,000 server expenses as part of 
Total Cost of Ownership.  Without a truly 'enlightened, wise' and properly 
presented Total Cost of OWNERSHIP (not just Acquisition), the CIO will usually 
make the easy choice: keep costs low before floating away on a golden 
parachute in a few years... hang the real effect on the long-term success of 
the company; that's not the CIO's problem.
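
To put rough numbers on that spreadsheet effect (every figure below is 
invented purely for illustration, not taken from any real budget):

    # Hypothetical sketch of the 'many small purchases vs. one big line item'
    # effect.  All figures are invented for illustration.
    server_price = 25_000
    servers_per_year = 20      # roughly one new server every couple of weeks
    years = 3

    drip_spend = server_price * servers_per_year * years   # quietly approved
    mainframe_upgrade = 1_500_000                          # one red budget cell

    print(f"Drip spend over {years} years:  ${drip_spend:,}")
    print(f"One-time mainframe upgrade: ${mainframe_upgrade:,}")
    # Same order of magnitude either way, but only the second number lands in
    # a single line item that gets questioned.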

You think that's sour grapes?  If you work for a public company driven by 
"shareholder value", think back on your own employer's long-term investments, 
how long your CIOs work(ed) there, what they did with the budgets during their 
time, what effect those budgets had on the company aside from the stock price 
(which drives their own bonus, stock grants, and stock option benefits), and 
how well you can serve the company's customers/clients after repeated budget 
cuts.

Mike Walter

-----Original Message-----
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Willemina 
Konynenberg
Sent: Thursday, May 25, 2017 11:51 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: The Mainframe vs. the Server Farm: A Comparison

Of course.  You should budget things properly or you're not doing your job.  
Network hardware, power control, redundancy, etc., should all be taken into 
consideration.  And to some extent, the IBM maintenance contract needs to be 
replaced with manpower to repair/replace broken parts as & when needed.  
However, I suspect that if you do, you will find that it still easily fits 
within a comparable overall budget.

I don't have numbers to do a sensible comparison of the power consumption.  
The footprint for COTS is likely somewhat bigger, but that is generally not a 
major concern these days.  Space is cheap, power & cooling is not.

Generally, I would expect that, in most organizations, other considerations 
(contracts, personnel, politics, legacy) will play a larger role in the 
decision of Mainframe vs COTS, before things like functional differences, plain 
!/$, or environment.

WFK


On 05/25/17 18:19, Marcy Cortes wrote:
> And cabling and network ports, etc.  And while you can maintain high 
> availability with those 500 things, you will still have failures and the 
> people costs of repairing and putting that thing back into rotation.  Can be 
> done, but it's a cost people don't account for, in my experience.
> 
> -----Original Message-----
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of 
> Tom Huegel
> Sent: Thursday, May 25, 2017 9:09 AM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: [LINUX-390] The Mainframe vs. the Server Farm: A 
> Comparison
> 
> Don't forget to consider that the mainframe has a much smaller environmental 
> footprint than, say, 500 COTS servers.
> The cost savings in power consumption, air conditioning, and floor space can 
> be huge.
> 
> On Thu, May 25, 2017 at 10:21 AM, Willemina Konynenberg 
> <w...@konynenberg.org> wrote:
> 
>> But according to the datasheets, upgrading, say, an H06 to an H13 
>> "requires planned down time", so if you started small and then want 
>> to grow, the only feasible (non-down-time) upgrade path is to buy a 
>> 2nd mainframe, which, as you point out "won't scale painlessly".
>>
>>
>> With a COTS based system, you work with a cluster configuration from 
>> the start (without requiring additional licenses), and have a rather 
>> more granular and disruption-free upgrade path.
>> And because it is designed from the ground up as a cluster, it is 
>> designed to be maintainable WHILE WORKING.  Replacing any hardware 
>> component of the cluster (ECC memory, CPU, I/O board, main board, 
>> network component, rack, ...) can be done while the system is running.
>>
>> So there isn't really any *functional* advantage to using a mainframe.
>> The question is whether you want to be running a cluster of, say, 2 -
>> 5 mainframes, or, say, 10 - 500 COTS boxen.  I.e. "what do you want 
>> to spend your money on".
>>
>> And no, you should not then have a bunch of sysadmins running around 
>> manually managing those 500 COTS boxen.  That's supposed to be automated...
>>
>>
>> WFK
>>
>> On 05/25/17 16:22, John Campbell wrote:
>>> As I recall from Appendix A of the "Linux for S/390" redbook, the
>>> S/390 (and, likely, zSeries) is designed to be maintainable WHILE WORKING.
>>>
>>> The multi-dimensional ECC memory allows a memory card to be replaced WHILE
>>> the system is running.  Likewise for the power supplies and the CPs.
>>>
>>> I have to agree that the "second" zSeries box won't scale painlessly; the
>>> work to load balance would NOT be fun (and the second box has its own
>>> issues w/r/t the management team, too).
>>>
>>> I recall dealing with the idea of putting an S/390 into a Universal
>>> Server Farm in Secaucus, NJ (I had some fun helping define the various
>>> networks, as this predated the "hyperchannel" within the BFI ("Big Iron"),
>>> as part of this USF integration) when it was killed for non-technical
>>> reasons.
>>>
>>> -soup
>>>
>>> On Thu, May 25, 2017 at 8:41 AM, Philipp Kern <pk...@debian.org> wrote:
>>>
>>>> On 24.05.2017 00:03, John Campbell wrote:
>>>>> Cool...
>>>>>
>>>>> Though the real key is that the mainframe is designed for something at or
>>>>> beyond five 9s (99.999%) uptime.
>>>>>
>>>>> [HUMOR]
>>>>> Heard from a Tandem guy:  "Your application, as critical as it is, is on
>>>>> a nine 5s (55.5555555%) platform."
>>>>> [/HUMOR]
>>>>
>>>> Mostly you trade complexity in hardware for complexity in software.
>>>> Mainframes do not scale limitlessly either, so you trade being able
>>>> to grow your service by adding hardware for doing it within the
>>>> boundaries of a sysplex.
>>>>
>>>> Your first statement is also imprecise. It's designed for five 9s
>>>> excluding scheduled downtime. If you use the fact that hardware is
>>>> unreliable (after subtracting your grossly overstated
>>>> unreliability) to your advantage, you end up with a system where
>>>> any component can fail and it doesn't matter. You win.
>>>>
>>>> Again, it then comes down to the trade-off question of whether you're
>>>> willing to pay for the smart software and the smart brains to
>>>> maintain it rather than paying IBM to provide service for the mainframe.
>>>>
>>>> Kind regards
>>>> Philipp Kern
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> John R. Campbell         Speaker to Machines          souperb at gmail dot com
>>> MacOS X proved it was easier to make Unix user-friendly than to fix Windows
>>> "It doesn't matter how well-crafted a system is to eliminate errors;
>>> Regardless of any and all checks and balances in place, all systems will
>>> fail because, somewhere, there is meat in the loop." - me
>>>
>>>
>>
>>
> 
> 

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions, send email to 
lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
----------------------------------------------------------------------
For more information on Linux on System z, visit http://wiki.linuxvm.org/
