I worked for a major computer manufacturer where they tested new devices.  We 
had 21 z/VM systems with many users on each system running tests that produced 
huge amounts of data, and all of it was stored in one SFS file system.
I was told it was the largest BFS (SFS) in the world.
The only problem we ever had was when we let an opie on the system and he 
formatted one of the DASD volumes that belonged to the SFS...
With good backups, everything is recoverable.

-----Original Message-----
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]] On Behalf Of 
Scott Rohling
Sent: Wednesday, October 29, 2008 1:23 PM
To: [email protected]
Subject: Re: Reliability of SFS?


I certainly wasn't trying to say SFS isn't reliable.  It's just a 'point of 
failure'.  And I call it a point of failure as opposed to a LINK of a minidisk, 
which doesn't require that a properly defined filepool be available (think 
DR).  Of course, minidisks also have to have the proper security definitions 
(in some cases) to be linked... but the same is true of SFS.
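
To make the comparison concrete, here's roughly the difference I mean -- the 
userid, pool name and device addresses below are made up for illustration:

   /* Minidisk route: only CP and the directory entry have to be there */
   'CP LINK LINUX01 191 291 RR'
   'ACCESS 291 B'

   /* SFS route: the filepool server (VMSYS here) also has to be up and */
   /* the directory properly defined before the ACCESS will work        */
   'ACCESS VMSYS:LINUX01.TOOLS B'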

As far as functionality goes -- no question SFS is more flexible, etc.  But 
for a super-important disk like the Linux guest's startup disk -- I'd use a 
minidisk.

Maybe in the end, the best thing to do is IPL the 200 (or 100, or wherever your 
boot disk is) in the directory entry, for reliability's sake.  But then you 
miss the flexibility of a well-crafted PROFILE EXEC that does things like call 
SWAPGEN, etc.
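
The kind of PROFILE EXEC I'm talking about is only a few lines -- something 
like this rough sketch (the device numbers and SWAPGEN arguments are just for 
illustration, not anybody's production copy):

   /* PROFILE EXEC - rough sketch */
   'CP SET RUN ON'
   'SWAPGEN 300 524288'    /* build a VDISK swap device - arguments illustrative */
   'CP IPL 200'            /* then boot Linux from the boot minidisk */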

Anyway - while I have found SFS extremely reliable when it's running, I have 
run into many situations where it was not up or not running properly and we 
were stuck until the SFS pool was fixed, restored, or whatever.

Scott Rohling


On Wed, Oct 29, 2008 at 11:32 AM, Tom Duerbusch <[EMAIL PROTECTED]> wrote:


I'm surprised by another discussion that seems to say that SFS is not reliable 
or dependable.

Is that true in your shop?
How heavy is the use?

Here, I'm the major "human" user.  The other 6 users may or may not use it on 
any given day.
However, I count 34 CMS-type servers that I have running that make use of SFS 
as part of their normal functions.  That includes PROP, which logs to an SFS 
directory 24x7.  And FAQS/PCS serves system-related jobs from SFS directories 
to the VSE machines.

I have 6 storage pools.  Historically they were sized so that the backup would 
fit on a single 3480 cart (compressed).  Now, that isn't a requirement.

All my VSE machines (14 currently) have their A-disk on SFS.  That directory is 
also where all the "systems" related code is stored (IPL procs, CICS stuff, Top 
Secret security stuff, DB2 stuff, and all vendor-related stuff).  No 
application-related stuff to speak of.  In the 15 years here, I've never had a 
problem of not being able to bring up VSE due to an SFS problem.

And in the last 5 years, I've never had a problem bringing up Linux images due 
to SFS being unavailable.

I have had problems with the loss of the CMS saved segment due to a bad VM 
IPL.  This was usually due to a duplicate CP-OWNED pack being brought up 
instead of the original.  Ahhh, for the days of being able to go to the IBM 
3990 or IBM 3880 and disable the address of the wrong volume...

I've had SFS problems where the SFS backup was cancelled due to a tape I/O 
error and the backup wasn't restarted (which would have unlocked the storage 
pool), so users that wanted to access that pool were denied.
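
(When that happens now, the first thing I check is the pool itself -- something 
along these lines, with the pool name made up:)

   /* Quick look at the filepool before chasing anything else */
   'QUERY FILEPOOL STATUS VMSYSU:'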

But I was surprised at the people claiming that SFS wasn't reliable, when all 
you need it for is to serve the PROFILE EXEC to bring up the Linux image.  I 
guess it's "once burnt, twice shy", and I guess I haven't been "burnt" yet.

In my world, I don't do CMS minidisks if I have an SFS option available.

I think SFS is reliable.  Or am I just kidding myself?

Tom Duerbusch
THD Consulting


