Dear community,

this morning I started in a good mood, until I checked my mailbox. Again a 
reported bug in Spectrum Scale that could lead to data loss. During the last 
year I was looking for a stable Scale version, and each time I thought: 
"Yes, this one is stable and without serious data loss bugs" - a few days 
later, IBM announced a new APAR with possible data loss for this version.

I support many clients in central Europe. They store databases, backup 
data, life science data, video data, and results of technical computing, do HPC 
on the file systems, etc. Some of them had to change their Scale version nearly 
monthly during the last year to avoid running into one of the serious data loss 
bugs in Scale. From my perspective, it was and is a shame to have to inform 
clients about newly reported bugs right after the last update. From the 
clients' perspective, it is a lot of work and planning to schedule yet another 
downtime for updates. And their internal customers are not satisfied with the 
frequent downtimes of the clusters and applications.

To me, it seems that Scale development is working on features for specific 
projects or clients, to meet special requirements. But they seem to have 
forgotten the existing clients who use Scale to store important data or run 
important workloads on it.

To make us more visible, I've used the IBM-recommended way to request 
mandatory enhancements, the less favored RFE:

http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=109334

If you like, vote for more reliability in Scale.

I hope this is a good way to show development and the responsible persons that 
we are having trouble and are not satisfied with the quality of the releases.


Regards,

Jochen


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss