One of the SFS scenarios discussed two LPARs sharing an SFS filepool
holding common files used to set up and IPL Linux guests.

 SFS would be "unreliable" in the sense that, if the LPAR running the
SFS filepool server were down for maintenance, the other LPAR couldn't
access those SFS files to start its Linux guests.
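One way to soften that exposure, sketched very loosely here in REXX (the filepool name LINPOOL, directory LINADM.IPL, and the IPLLINUX exec are all hypothetical, and a local 191 minidisk copy of the IPL files is assumed), is a PROFILE EXEC that tries the shared SFS directory first and falls back to the local copy:

```rexx
/* PROFILE EXEC sketch: try the shared SFS directory first; if the    */
/* filepool server's LPAR is down, fall back to a local 191 minidisk  */
/* copy of the IPL files.  LINPOOL, LINADM.IPL, and IPLLINUX are      */
/* hypothetical names, not from the original post.                    */
'ACCESS LINPOOL:LINADM.IPL B'
If rc <> 0 Then Do
   Say 'SFS filepool unavailable (rc='rc'); using local minidisk copy'
   'ACCESS 191 B'
End
'EXEC IPLLINUX'   /* hypothetical exec that starts the Linux guests */
```

This only helps if the files on the 191 minidisk are kept in sync with the SFS copies, which is itself a maintenance cost.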

-----Original Message-----

> From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
> Behalf Of Tom Duerbusch
> Sent: Wednesday, October 29, 2008 1:33 PM
> To: [email protected]
> Subject: Reliability of SFS?
> 
> I'm surprised by another discussion that seems to say that SFS is not
> reliable or dependable.
> 
> Is that true in your shop?
> How heavily is it used?
> 
> Here, I'm the major "human" user.  The other 6 users may or may not
> use it on any given day.
> However, I count 34 CMS-type servers that I have running that make
> use of SFS as part of their normal functions.  That includes PROP,
> which logs to an SFS directory 24x7.  And FAQS/PCS serves
> system-related jobs from SFS directories to the VSE machines.
> 
> I have 6 storage pools.  Historically, they were sized so that the
> backup would fit on a single 3480 cart (compressed).  Now, that isn't
> a requirement.
> 
> All my VSE machines (14 currently) have their A-disk on SFS.  That
> directory is also where all the "systems"-related code is stored (IPL
> procs, CICS stuff, Top Secret security stuff, DB2 stuff, and all
> vendor-related stuff).  No application-related stuff to speak of.  In
> the 15 years here, I've never had a problem of not being able to
> bring up VSE due to an SFS problem.
> 
> And in the last 5 years, I've never had a problem bringing up Linux
> images due to SFS availability.
> 
> I have had problems with the loss of the CMS saved segment due to a
> bad VM IPL.  This was usually due to a duplicate CP-OWNED pack being
> brought up instead of the original.  Ahhh, for the days of being able
> to go to the IBM 3990 or IBM 3880 and disabling the address of the
> wrong volume......
> 
> I've had SFS problems where the SFS backup cancelled due to a tape
> I/O error and the backup wasn't restarted (which would have unlocked
> the storage pool that was locked), which caused users who wanted to
> access that pool to be denied.
> 
> But I was surprised at the people claiming that SFS wasn't reliable,
> when all you need it for is to serve the PROFILE EXEC to bring up the
> Linux image.  I guess it is "once burnt, twice shy", and I guess I
> haven't been "burnt" yet.
> 
> In my world, I don't do CMS minidisks if I have an SFS option
> available.
> 
> I think SFS is reliable.  Or am I just kidding myself?
> 
> Tom Duerbusch
> THD Consulting
