On 01/31/2013 02:57 PM, Stephan von Krawczynski wrote:
> You are asking in the wrong direction. The simple question is: is
> there any dynamic configuration as safe as a local config
> file?

Yes, because inconsistent configurations are a danger too.

> There is no probability, either you are a dead client or a working
> one.

That's a total non sequitur.  As soon as there's more than one state,
there are probabilities of being in each one.  As it turns out, there is
also a third possibility - that you are a malfunctioning or incorrectly
configured client.  Fail-stop errors are the easy ones.
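The distinction matters because the two failure modes are detected very differently. A minimal sketch (hypothetical states and helper names, not anything from GlusterFS itself): a fail-stop client simply goes silent and is caught by a timeout, while a malfunctioning client keeps answering with wrong data, which can only be caught by cross-checking against a reference such as a replica.

```python
# Illustrative only: three client states, not just "dead or working".
STATES = ("working", "dead", "malfunctioning")

def respond(state, correct_value):
    """Return the client's reply, or None if it has failed silently."""
    if state == "dead":
        return None                # fail-stop: no reply, detectable by timeout
    if state == "malfunctioning":
        return correct_value + 1   # wrong answer, but no error signalled
    return correct_value           # working client

def classify(reply, reference):
    """What a server can conclude from a single reply."""
    if reply is None:
        return "detected-failure"      # the easy, fail-stop case
    if reply != reference:
        return "detected-corruption"   # only visible with something to compare against
    return "apparently-ok"

print(classify(respond("dead", 42), 42))            # detected-failure
print(classify(respond("malfunctioning", 42), 42))  # detected-corruption
print(classify(respond("working", 42), 42))         # apparently-ok
```

Note that the "detected-corruption" case requires a second source of truth; without one, a misbehaving client is indistinguishable from a working one.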

> It wouldn't be useful to release a network filesystem that drops dead
> in case of network errors.

Every network filesystem, or other distributed system of any kind, will
fail if there are *sufficient* errors.  Some are better than others at
making that number larger, some are better than others with respect to
the consequence of exceeding that threshold, but all will fail
eventually.  GlusterFS does handle a great many kinds of failure that
you probably can't even imagine, but neither it nor any other system can
handle all possible failures.
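The "sufficient errors" threshold can be made concrete with a standard majority-quorum model (a generic sketch, not a description of GlusterFS's actual replication logic): with n replicas each failing independently with probability p, the system survives as long as fewer than half are down, which is a binomial tail sum.

```python
from math import comb

def quorum_availability(n, p):
    """P(at most floor((n-1)/2) of n independent replicas are down)."""
    max_failures = (n - 1) // 2
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(max_failures + 1))

# More replicas raise the number of failures the system can absorb,
# but for any n there is still some failure count that exceeds it.
for n in (1, 3, 5):
    print(n, round(quorum_availability(n, 0.05), 6))
```

Adding replicas pushes the tolerable failure count up and the failure probability down, but never to zero, which is exactly the point: every design has a threshold, and the interesting questions are where it sits and what happens when it is crossed.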

> Most common network errors are not a matter of design, but of dead
> iron.

It's usually both - a design that is insufficiently tolerant of
component failure, plus a combination of component failures that exceeds
that tolerance.  You seem to have a very high standard for filesystems
continuing to maintain 100% functionality - and I suppose 100%
performance as well - if there's any possibility whatsoever that they
could do so.  Why don't you apply that same standard to the part of the
system that you're responsible for designing?  Running any distributed
system on top of a deficient network infrastructure will lead only to
disappointment.

_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users
