My former customer (a major bank in Belgium) had run SFS since VM/SP
R6 as a prerequisite for their batch: no SFS ==> no batch run.  Later
on, no SFS even meant the VM application was unavailable.  At this
shop, the z/VM systems were even more stable than their z/OS systems.
Each of the 18 VM systems we used to have had at least 3 important SFS
servers.  Personally, I used SFS directories as storage for my own
files, and if user KRIS was down for 15 minutes, no manager would
start getting nervous.  But if SFS was down, banking transactions
couldn't get out to the world (a transaction could be several million
$, and if one was sent too late, a fee of just 1 per 1000 would not go
unnoticed).  And it all depended on VM, SFS and DB2/VM being
available.

So, I surely do **not** say SFS is unreliable; I don't think we had
any application outage caused by SFS any time after VM/ESA 1.1.
But still, minidisks are more reliable: "the minidisk is always
there".  An SFS server might be down, e.g. to reorganize its catalog,
or it might abend, e.g. when a long-running LUW stays active and fills
the whole log until the log disks are full.  Backups of SFS require
more attention too: a DDR backup taken while the SFS server is active
is worthless, for example.
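
To get a usable DDR backup, one must first quiesce the server.  A
minimal sketch in REXX (VMSERVS is IBM's default file pool server
name; the fixed wait and the restart are my assumptions, not a
production-ready procedure -- a real EXEC would poll until the server
has logged off):

  /* Quiesce an SFS server, take a DDR backup, restart it */
  'CP SEND VMSERVS STOP'   /* ask the file pool server to end cleanly */
  'CP SLEEP 60 SEC'        /* crude; better: poll CP QUERY VMSERVS    */
  /* ... run DDR against the server's minidisks here ... */
  'CP XAUTOLOG VMSERVS'    /* restart the file pool server            */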

Therefore: if SFS doesn't give enough advantages over minidisks, don't
use SFS.  Making the Linux guests independent of SFS also makes the
life of AUTOLOG1 easier: no need to postpone Linux startup until one
is sure that SFS is up and running, with a wait loop like the sketch
below.
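
Just to illustrate what one avoids: a minimal sketch of such a wait
loop in AUTOLOG1's PROFILE EXEC.  VMSYS: is IBM's default file pool;
LINUX01 is a hypothetical guest name; the retry count and sleep time
are arbitrary:

  /* Wait until the VMSYS file pool answers, then start Linux */
  do tries = 1 to 30
    'SET CMSTYPE HT'             /* suppress the query's output     */
    'QUERY FILEPOOL STATUS VMSYS:'
    sfsrc = rc
    'SET CMSTYPE RT'
    if sfsrc = 0 then leave      /* the file pool answered: it's up */
    'CP SLEEP 10 SEC'
  end
  if sfsrc = 0 then 'CP XAUTOLOG LINUX01'
  else say 'SFS still not up, LINUX01 not started'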

2008/10/29 Tom Duerbusch <[EMAIL PROTECTED]>:
> I'm surprised by another discussion that seems to say that SFS is not 
> reliable or dependable.
>
> Is that true in your shop?
> How heavy a use is it?
>
> Here, I'm the major "human" user.  The other 6 users may or may not use it on 
> any given day.
> However, I count 34 CMS-type servers that I have running that make use of 
> SFS as part of their normal functions.  That includes PROP, which logs to an 
> SFS directory 24X7.  And FAQS/PCS serves system-related jobs from SFS 
> directories to the VSE machines.
>
> I have 6 storage pools.  Historically they were of a size such that the 
> backup would fit on a single 3480 cart (compressed).  Now, that isn't a 
> requirement.
>
> All my VSE machines (14 currently) have their A-disk on SFS.  That directory 
> is also where all the "systems" related code is stored (IPL procs, CICS 
> stuff, Top Secret security stuff, DB2 stuff, and all vendor related stuff).  
> No application related stuff to speak of.  In the 15 years here, I've never 
> had a problem of not being able to bring up VSE due to an SFS problem.
>
> And in the last 5 years, I've never had a problem bringing up Linux images 
> due to SFS availability.
>
> I have had problems with the loss of the CMS saved segment due to a bad VM 
> IPL.  This was usually due to a duplicate CP-OWNED pack being brought up 
> instead of the original.  Ahhh, for the days of being able to go to the IBM 
> 3990 or IBM 3880 and disabling the address of the wrong volume......
>
> I've had SFS problems where the SFS backup was cancelled due to a tape I/O 
> error and wasn't restarted (restarting it would have unlocked the storage 
> pool), which caused users that wanted to access that pool to be denied.
>
> But I was surprised at the people claiming that SFS wasn't reliable, when all 
> you need it for is to serve the PROFILE EXEC to bring up the Linux image.  
> I guess it is "once burnt, twice shy", and I guess I haven't been "burnt" yet.
>
> In my world, I don't do CMS minidisks if I have an SFS option available.
>
> I think SFS is reliable.  Or am I just kidding myself?
>
> Tom Duerbusch
> THD Consulting
>



-- 
Kris Buelens,
IBM Belgium, VM customer support
