Catching up on some older posts; sorry for the late response, Tom.  :)

At one site I support we have multiple very large SFS servers, the
largest of which spans 116,830 3390 CKD cylinders (21,007,245 blocks
of space).  It has never crashed, is up 24/7, and continues to grow
every month.  I'd say it's completely stable.  :)
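
For anyone checking the math (standard 3390 geometry, which I'm
assuming here): a 3390 track formatted in 4K blocks holds 12 blocks,
and there are 15 tracks per cylinder, so 180 blocks per cylinder:

   116,830 cylinders x 180 blocks/cylinder = 21,029,400 blocks

The usable figure of 21,007,245 is a little lower than that, which
I'd put down to file pool control data and mapping overhead.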

The biggest problem we have with an SFS server this large is backups.
CA's VM:Backup will only use a single backup stream, since nearly 100%
of this space is owned by a single root user.
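
If anyone else hits the same wall, the usual workaround is to split
the data across more file spaces and/or storage groups so the backup
product has something to parallelize on.  A sketch only (the names
are made up, and the exact syntax should be checked against the CMS
reference):

   QUERY FILEPOOL STORGRP BIGPOOL:
   ENROLL USER ROOT2 BIGPOOL: (BLOCKS 5000000 STORGROUP 3

In our case the blocker is the single owning file space, and whether
VM:Backup would really run parallel streams against separate file
spaces is a question for CA.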

-Mike

-----Original Message-----
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Tom Duerbusch
Sent: Wednesday, October 29, 2008 1:33 PM
To: [email protected]
Subject: Reliability of SFS?


I'm surprised by another discussion that seems to say that SFS is not
reliable or dependable.

Is that true in your shop?
How heavy is the use?

Here, I'm the major "human" user.  The other 6 users may or may not use
it on any given day.  However, I count 34 CMS-type servers that I have
running that make use of SFS as part of their normal functions.  That
includes PROP, which logs to an SFS directory 24x7, and FAQS/PCS, which
serves system-related jobs from SFS directories to the VSE machines.
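
Nothing special is needed for the PROP piece, by the way.  PROP just
logs to its A-disk, and when the A-disk is an SFS file space (picked
up automatically at CMS IPL when there's no 191, or accessed
explicitly; the pool and user names here are made up), the log lands
in SFS:

   ACCESS VMSYSU:PROP. A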

I have 6 storage pools.  Historically they were sized so that the
backup would fit on a single 3480 cart (compressed).  Now that isn't a
requirement.
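
If you want to see what each pool looks like these days, QUERY
FILEPOOL has operands for that (operand names from memory; check the
CMS Commands reference):

   QUERY FILEPOOL OVERVIEW POOL1:
   QUERY FILEPOOL STORGRP POOL1:

which report, among other things, blocks defined and blocks in use
per storage group.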

All my VSE machines (14 currently) have their A-disk on SFS.  That
directory is also where all the "systems"-related code is stored (IPL
procs, CICS stuff, Top Secret security stuff, DB2 stuff, and all the
vendor-related stuff).  No application-related stuff to speak of.  In
the 15 years here, I've never been unable to bring up VSE due to an
SFS problem.
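
The bring-up itself is nothing exotic.  A sketch of the kind of
PROFILE EXEC that lives in a VSE machine's SFS file space (the
directory name and device address below are made up):

   /* PROFILE EXEC on the VSE guest's SFS A-disk -- a sketch */
   'ACCESS VMSYS:MAINT.VSESYS B'   /* shared systems code directory */
   'CP IPL 240'                    /* IPL the VSE sysres at 240 */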

And in the last 5 years, I've never had a problem bringing up Linux
images due to an SFS availability problem.

I have had problems with the loss of the CMS saved segment due to a bad
VM IPL.  That was usually due to a duplicate CP-OWNED pack being
brought up instead of the original.  Ahhh, for the days of being able
to go to the IBM 3990 or IBM 3880 and disable the address of the wrong
volume......

I've had SFS problems where the SFS backup was cancelled due to a tape
I/O error and the backup wasn't restarted (restarting would have
unlocked the storage pool), so users who wanted to access that pool
were denied.
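
When that happens, something like QUERY FILEPOOL CONFLICT run against
the pool (operand name from memory; check the reference) shows who is
holding the locks and who is waiting on them.  The fix is still to
restart or back out the backup so the storage group gets unlocked.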

But I was surprised at the people claiming that SFS wasn't reliable,
when all you need it for is serving the PROFILE EXEC to bring up the
Linux image.  I guess it is "once burnt, twice shy", and I guess I
haven't been "burnt" yet.
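
And that dependency really is small.  For a Linux guest it amounts to
a few lines of PROFILE EXEC served out of SFS (the device address is
made up):

   /* PROFILE EXEC for a Linux guest, served from SFS -- a sketch */
   'CP TERM MORE 0 0'
   'CP IPL 200'   /* boot Linux from its boot disk at 200 */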

In my world, I don't do CMS minidisks if I have an SFS option
available.
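
Part of the appeal is how little administration a new chunk of space
takes.  Instead of an MDISK statement and a DASD map update, it's a
couple of commands (the names are made up; check the syntax):

   CREATE DIRECTORY VMSYSU:THD.TOOLS
   GRANT AUTHORITY VMSYSU:THD.TOOLS TO PROGRMR1 (READ

and the space comes out of the file pool.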

I think SFS is reliable.  Or am I just kidding myself?

Tom Duerbusch
THD Consulting
