hi marc,
1. YES, Native RAID can recover from various failures: drawers, cabling,
controllers, power supplies, etc.
Of course it must be configured properly so that there is no
single point of failure.
hmmm, this is not really what i was asking about. but maybe it's easier
to do this properly in gss (e.g. for 8+3 data protection, you only need 11
drawers if you can make sure the data+parity blocks are sent to
different drawers (a sort of per-drawer failure group, but internal to the
vdisks), and the smallest setup is a gss24, which has 20 drawers).
but i can't remember any manual suggesting the admin can control this
(or is it the default?).
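to make the drawer arithmetic above concrete, here is a minimal sketch (this is not GNR's actual placement algorithm, just an illustration of the counting argument): spread the 11 strips of an 8+3 stripe round-robin across drawers and count how many strips land in any one drawer. if each drawer holds at most one strip, a single drawer failure consumes at most one of the three parity strips.

```python
def max_strips_per_drawer(data_strips, parity_strips, num_drawers):
    """Hypothetical round-robin placement: strip i goes to drawer i % num_drawers.

    Returns the largest number of strips of one stripe that share a drawer,
    i.e. how many strips a single drawer failure can take out at once.
    """
    total_strips = data_strips + parity_strips
    counts = [0] * num_drawers
    for i in range(total_strips):
        counts[i % num_drawers] += 1
    return max(counts)

# 8+3 across 11 drawers: one strip per drawer, so up to 3 whole-drawer
# failures stay within the parity budget.
print(max_strips_per_drawer(8, 3, 11))  # 1

# 8+3 across only 10 drawers: some drawer holds 2 strips, so a single
# drawer failure already burns 2 of the 3 parity strips.
print(max_strips_per_drawer(8, 3, 10))  # 2
```

with 20 drawers (as in a gss24) the same placement trivially gives one strip per drawer, which is why the question is whether the admin can (or needs to) enforce this, or whether it happens by default.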
anyway, i'm certainly interested in any config whitepapers or guides to
see what is required for such a setup. are these public somewhere? (i have
really searched for them.)
But yes, you should get your hands on a test rig and try out (simulate)
various failure scenarios and see how well it works.
is there a way besides presales to get access to such a setup?
stijn
2. I don't know the details of the packaged products, but I believe you
can license the software and configure huge installations,
comprising as many racks of disks and associated hardware as you desire
or need. The software was originally designed to be used
in the huge HPC computing laboratories of certain governmental and
quasi-governmental institutions.
3. If you'd like to know and/or explore more, read the pubs, do the
experiments, and/or contact the IBM sales and support people.
If by some chance you do not get satisfactory answers, come back here
and perhaps we can get your inquiries addressed by the
GPFS design team. Like other complex products, there are bound to be some
questions that the sales and marketing people
can't quite address.
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss