On 13.07.2016 08:43, Pranith Kumar Karampuri wrote:


On Wed, Jul 13, 2016 at 9:41 AM, Dmitry Melekhov <[email protected]> wrote:

    On 13.07.2016 07:46, Pranith Kumar Karampuri wrote:


    On Tue, Jul 12, 2016 at 9:27 PM, Dmitry Melekhov <[email protected]> wrote:



        On 12.07.2016 17:39, Pranith Kumar Karampuri wrote:
        Wow, what are the steps to recreate the problem?

        Just set the file's length to zero directly on the brick; it is always reproducible.


    Changing things on the brick directly, i.e. not through the gluster
    volume mount, is not something you want to do. In the worst case
    (I have seen this only once in the last 5 years, though) it can
    even lead to data loss, so please be aware of that.

    Data replication with gluster is a way to avoid data loss, right?
    Or not? If not, why use gluster at all?
    I thought that gluster self-healing would heal, or at least report,
    missing files or files with wrong lengths, i.e. corruption visible
    just by reading the brick's directory,
    without comparing data the way bit-rot detection does...
    If this is not a bug, then gluster is not what I expected :-(


Yes, data replication with gluster is a way to avoid data loss. But changing files directly on the brick is similar to changing the internal data structures of a disk filesystem, or the internal files of a database: things may stop working as you expect. All the hard work done by the stack is nullified if you fiddle with the data on the brick directly. To put it succinctly, you enter the area of undefined behaviour the moment you modify data on the brick directly. Unless it is a documented behaviour, I suggest you don't do it.

Sorry, I'm not talking about direct data manipulation on the bricks as a way to use gluster; I'm talking about problem detection and recovery. As I already said: if for some reason (in a real case it could only happen by accident) a file is deleted directly on a brick, this will not be detected by the self-heal daemon, and will thus lead to a lower replication level, i.e. lower failure tolerance.
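
For what it's worth, damage of this kind can be surfaced by asking gluster for a full heal, which makes the self-heal daemon crawl the whole volume rather than only the entries recorded during normal I/O. A minimal sketch, assuming a replicated volume whose name (here `testvol`) is a placeholder:

```shell
# Show the entries gluster already knows need healing
# (those recorded via normal I/O / the index crawl):
gluster volume heal testvol info

# Force a full crawl of the volume, so that files removed or
# truncated directly on a brick can be detected and re-replicated:
gluster volume heal testvol full

# Re-check afterwards; a healthy volume reports zero entries per brick:
gluster volume heal testvol info
```

This does not make direct brick manipulation safe, but a periodic `heal ... full` plus `heal ... info` check is a reasonable way to notice a silently degraded replica.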



_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users
