I have been a contented user of SFS since the HPO5 days. The only problems I had in the early days were head crashes on the 3380 A04/B04 drives that VM was saddled with (we were the redheaded stepchild of an airline). The most recent problem was a catalog corrupted by the software used to migrate the system when the datacenter was moved. I am confident that it was the move that corrupted the catalog, because similar corruption occurred elsewhere following the move.
Here, our use is very heavy. All of the source for TPF, both system and applications, is currently in SFS. Our general user filepool comprises 9 storage groups, each occupying two or three full-pack 3390-03s. Since we are an active TPF development shop, this filepool is heavily used, and much of our user support function also depends on it. We have approximately 100 service machines, and it is rare for one of them not to depend on SFS.

We have a second filepool that contains logs from production systems. Its user storage groups occupy 13 full-pack 3390-03s. The production support group uses it every day to research events and problems encountered by our production systems and/or the financial network, so it also gets a good deal of traffic.

Since our operation is global, with geographically dispersed systems and support groups, our SFS pools are busy 24x7. An SFS problem in the main filepool would be tantamount to a system outage.

Regards,
Richard Schuh

> -----Original Message-----
> From: The IBM z/VM Operating System
> [mailto:[EMAIL PROTECTED] On Behalf Of Tom Duerbusch
> Sent: Wednesday, October 29, 2008 10:33 AM
> To: [email protected]
> Subject: Reliability of SFS?
>
> I'm surprised by another discussion that seems to say that
> SFS is not reliable or dependable.
>
> Is that true in your shop?
> How heavy of a use is it?
>
> Here, I'm the major "human" user. The other 6 users may or
> may not use it on any given day.
> However, I count 34 CMS-type servers that I have running
> that make use of SFS as part of their normal functions. That
> includes PROP, which logs to an SFS directory 24x7. And
> FAQS/PCS serves system-related jobs from SFS directories to
> the VSE machines.
>
> I have 6 storage pools. Historically, they were sized so that
> the backup would fit on a single 3480 cart (compressed).
> Now, that isn't a requirement.
>
> All my VSE machines (14 currently) have their A-disk on SFS.
> That directory is also where all the "systems"-related code
> is stored (IPL procs, CICS stuff, Top Secret security stuff,
> DB2 stuff, and all vendor-related stuff). No application-related
> stuff to speak of. In my 15 years here, I've never been
> unable to bring up VSE due to an SFS problem.
>
> And in the last 5 years, I've never had a problem bringing up
> Linux images due to SFS availability.
>
> I have had problems with the loss of the CMS saved segment due
> to a bad VM IPL. This was usually due to a duplicate
> CP-OWNED pack being brought up instead of the original.
> Ahhh, for the days of being able to go to the IBM 3990 or IBM
> 3880 and disable the address of the wrong volume...
>
> I've had SFS problems where the SFS backup was cancelled due to
> a tape I/O error and wasn't restarted (restarting would have
> unlocked the storage pool it had locked), which caused users
> who wanted to access that pool to be denied.
>
> But I was surprised at the people claiming that SFS wasn't
> reliable, when all you need it for is serving the PROFILE
> EXEC to bring up the Linux image. I guess it is "once burnt,
> twice shy", and I guess I haven't been "burnt" yet.
>
> In my world, I don't do CMS minidisks if I have an SFS option
> available.
>
> I think SFS is reliable. Or am I just kidding myself?
>
> Tom Duerbusch
> THD Consulting
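The single point of SFS dependence Tom describes for his Linux guests usually amounts to little more than one ACCESS in the guest's PROFILE EXEC. A minimal REXX sketch of the idea follows; the filepool and directory name VMSYSU:LINADMIN.GUESTS and the IPL device 150 are invented for illustration, not taken from either shop:

    /* PROFILE EXEC - minimal sketch of a Linux guest's profile */
    /* served from SFS; names below are hypothetical.           */
    'ACCESS VMSYSU:LINADMIN.GUESTS B'   /* SFS dir, not a minidisk */
    If rc <> 0 Then Do                  /* pool down or locked     */
       Say 'SFS directory unavailable, rc =' rc
       Exit rc
    End
    /* ...run any setup EXECs kept in that directory, then boot... */
    'CP IPL 150'                        /* IPL Linux from its boot disk */

If the filepool is unavailable, the ACCESS fails with a nonzero return code and the guest simply doesn't come up, which is exactly the exposure being weighed here against keeping the same files on a minidisk.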
