Hi Milind,

Thanks again for your answer...
Damn... I found an old mail from Venky Shankar, and obviously I had a wrong view of ignore_deletes.
But:
[ 20:42:53 ] - root@gluster-ger-ber-07 ~ $gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 config ignore_deletes false
Reserved option
geo-replication command failed
[ 20:43:06 ] - root@gluster-ger-ber-07 ~ $gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 config ignore_deletes no
Reserved option
geo-replication command failed
[ 20:43:11 ] - root@gluster-ger-ber-07  ~
I stopped the geo-replication and tried it again... same result.
Possibly I should set up a new replication from scratch and set ignore_deletes to false before starting it; a sketch of what I mean follows.
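
Something like this, just as a sketch (same session names as above; whether ignore_deletes is accepted on a fresh session before the first start is exactly what I would have to test):

    gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 stop
    gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 delete
    gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 create push-pem force
    gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 config ignore_deletes false
    gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 start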

Meanwhile, the reason for the new 'Operation not permitted' messages was found: a directory on the master and a file on the slave at the same directory level, with the same name...

best regards
dietmar

On 20.01.2016 at 19:28, Milind Changire wrote:
Dietmar,
I just looked at your very first post describing the problem and I found

ignore_deletes: true

in the geo-replication config command output.

If you'd like the slave volume to replicate file deletions as well, then "ignore_deletes" should be set to "false".
That should help resolve the CREATE + DELETE + CREATE issue.
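
For example, with the session names from your output:

    gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 config ignore_deletes false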

If this doesn't help, then strace output for gsyncd could be the savior.

-----
To add further ...
Lately we've stumbled across another issue with the CREATE + RENAME + CREATE sequence on geo-replication restart. Two fixes have been posted and are available for review upstream.

geo-rep: avoid creating multiple entries with same gfid -- http://review.gluster.org/13186
geo-rep: hard-link rename issues on changelog replay -- http://review.gluster.org/13189

I'll post info about the fix propagation plan for the 3.6.x series later.

--
Milind



On Wed, Jan 20, 2016 at 11:23 PM, Dietmar Putz <[email protected]> wrote:

    Hi Milind,

    Thank you for your reply...
    Meanwhile I realized that the setfattr command doesn't work, so I
    decided to delete the affected files and directories, but without
    stopping the geo-replication... I did that before I read your mail.
    The affected folders are now replicated with the same gfid
    as on the master... so this is solved for the moment.
    Afterwards I did not see the 'errcode: 23' messages on the
    masters or the 'Operation not permitted' messages on the slaves
    for 2 1/2 days, but the geo-replication kept restarting, about
    every 2 - 4 hours on each active master / slave, with the
    "OSError: [Errno 16] Device or resource busy" message shown far
    below on master and slave.
    Each time the geo-replication restarted I saw the following line
    embedded in the restart event (as shown far below):

    I [dht-layout.c:663:dht_layout_normalize] 0-aut-wien-vol-01-dht: Found anomalies in /.gfid/cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2/.dstXXb70G3x (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0

    I have found 24 of these hidden .dst* folders. All of them are
    stored in the same two subfolders on master and slave, but the
    .dstXXb70G3x shown above is the only one that exists just on the
    slave volume. I checked that folder and deleted it on the slave
    since it was empty. I believe such folders somehow belong to the
    geo-replication process, but I have no details.
    However, about one day after the deletion this folder was recreated
    on the slave again, but since the deletion there have been no more
    'Found anomalies' messages.
    Currently the geo-replication still restarts frequently with the
    message shown far below, and unfortunately some 'Operation not
    permitted' messages appear again, but for different files than before.
    I have already checked all folders on master/slave for differing
    gfids, but there are no more mismatched gfids. I guess there is no
    way around comparing the gfids of all files on master and
    slave... since it is a dist-repl. volume that means several million
    lines. A per-brick dump could look like the sketch below.
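
    A minimal sketch for such a dump (run as root on each brick backend; the
    output filename is just an example):

        find /gluster-export -path '*/.glusterfs' -prune -o -print0 | \
            xargs -0 getfattr --absolute-names -n trusted.gfid -e hex 2>/dev/null \
            > gfid-dump-`hostname`.out

    The per-host dumps could then be sorted and diffed between master and slave.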
    If geo-replication still doesn't work after that, I will start the
    suggested strace of gsyncd. Regarding the strace I have two questions...

    Do I need to run the strace on all active masters / slaves, or is
    it sufficient to trace one active master and the corresponding
    active slave?
    Should I try to capture a geo-rep restart with the strace, or is it
    sufficient to let it run for one minute at a random time?


    Maybe this is of interest for solving the problem... on the active
    masters there are lines like:
    ...
    [2016-01-19 18:34:58.441606] W [master(/gluster-export):1015:process] _GMaster: incomplete sync, retrying changelogs: XSYNC-CHANGELOG.1453225971
    [2016-01-19 18:36:27.313515] W [master(/gluster-export):1015:process] _GMaster: incomplete sync, retrying changelogs: XSYNC-CHANGELOG.1453225971
    [2016-01-19 18:37:56.337660] W [master(/gluster-export):996:process] _GMaster: changelogs XSYNC-CHANGELOG.1453225971 could not be processed - moving on...
    [2016-01-19 18:37:56.339124] W [master(/gluster-export):1000:process] _GMaster: SKIPPED GFID = 5a47cc07-f32f-4685-ae8e-4969995f3f1c,<huge list with gfid's>

    They end up in a huge list of comma-separated gfids. Is there any
    hint for getting something useful out of these xsync changelogs, a
    way to find out what was incomplete? At least a skipped gfid can be
    resolved to a path on a brick, as in the sketch below.
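
    A minimal sketch to resolve one of the skipped gfids to a path (run on a
    brick backend; uses the first gfid from the list above and assumes a
    regular file, whose .glusterfs entry is a hardlink):

        gfid=5a47cc07-f32f-4685-ae8e-4969995f3f1c
        find /gluster-export -samefile "/gluster-export/.glusterfs/${gfid:0:2}/${gfid:2:2}/${gfid}" \
            -not -path '*/.glusterfs/*'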

    One more thing that concerns me... I'm trying to understand the
    distributed geo-replication.
    The master volume is a living object, accessed by clients who
    upload, delete and recreate files and folders. Not very
    frequently, but I observed the mentioned two folders with different
    gfids on master / slave, and now they have been deleted by some
    client on the master volume. The geo-replication is still in hybrid
    crawl and afaik cannot delete or rename files and folders on the
    slave volume until changelog mode is reached. When a client now
    recreates the same folders on the master, they get a new gfid
    assigned which differs from the gfids still existing on the
    slave, I believe... so geo-replication should get into conflict
    again because of folders existing on the slave with the same path
    but a different gfid than on the master, just like for any other
    files which are deleted and later recreated while geo-rep is in
    hybrid crawl. Is that right?
    If so, it will be difficult to reach changelog mode on large
    gluster volumes, because in our case the initial hybrid crawl took
    some days for about 45 TB... or does r/w access need to be stopped
    for that time?

    thanks in advance and
    best regards
    dietmar




    On 19.01.2016 at 10:53, Milind Changire wrote:

        Hi Dietmar,
        After discussion with Aravinda we realized that unfortunately
        the suggestion to:
               setfattr -n glusterfs.geo-rep.trigger-sync -v "1" <DIR>
               setfattr -n glusterfs.geo-rep.trigger-sync -v "1"
        <file-path>

        won't work with 3.6.7, since provision for that workaround was
        added after 3.6.7.

        There's an alternative way to get the geo-replication going (a
        rough sketch of the sequence follows the list):
        1. stop geo-replication
        2. delete files and directories with conflicting gfid on SLAVE
        3. use the "touch" command to touch files and directories with
        conflicting gfid
            on MASTER
        4. start geo-replication
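
        A rough sketch of that sequence, assuming the session names from this
        thread and that /mnt/master and /mnt/slave are fuse mounts of the two
        volumes (hypothetical paths; the directory is just an example from
        this thread):

            gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 stop
            rm -rf /mnt/slave/3912/uploads/BSZ-2015      # on SLAVE: entry with conflicting gfid
            touch /mnt/master/3912/uploads/BSZ-2015      # on MASTER: the corresponding entry
            gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 start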

        This *should* get things correctly replicated to SLAVE.
        Geo-replication should start with hybrid-crawl and trigger the
        replication to SLAVE.

        If not, then there's more to look at.
        You could then send us output of the strace command for the
        gsyncd process, while
        geo-replication is running:
        # strace -ff -p <gsyncd-pid> -o gsyncd-strace

        You could terminate strace after about one minute and send us
        all the gsyncd-strace.<pid> files, which will help us debug the
        issue if it's not resolved by the alternative mechanism
        mentioned above.

        Also, crawl status Hybrid Crawl is not an entirely bad thing.
        It could just mean that there are a lot of entries being
        processed. However, if things don't return to normal after
        trying the alternative suggestion, we can take a look at the
        strace output and get some clues.

        --
        Milind

        ----- Original Message -----
        From: "Dietmar Putz" <[email protected]>
        To: "Aravinda" <[email protected]>, [email protected], "Milind Changire" <[email protected]>
        Sent: Thursday, January 14, 2016 11:07:55 PM
        Subject: Re: [Gluster-users] geo-replication 3.6.7 - no transition from hybrid to changelog crawl

        Hello all,

        After some days of inactivity I started another attempt to
        solve this geo-replication issue... step by step.
        It looks like some of the directories on the slave volume do
        not have the same gfid as the corresponding directories on the
        master volume.

        For example:

        On a master node I can see a lot of 'errcode: 23' lines like:
        [2016-01-14 09:58:36.96585] W [master(/gluster-export):301:regjob] _GMaster: Rsync: .gfid/a8d0387d-c5ad-4eeb-9fc6-637fb8299a50 [errcode: 23]

        On the corresponding slave, the matching message:
        [2016-01-14 09:57:06.070452] W [fuse-bridge.c:1967:fuse_create_cbk] 0-glusterfs-fuse: 1185648: /.gfid/a8d0387d-c5ad-4eeb-9fc6-637fb8299a50 => -1 (Operation not permitted)

        This is the file on the master; it is still not replicated to
        the slave.

        120533444364 97332 -rw-r--r-- 2 2001 2001 99662854 Jan  8 13:40 /gluster-export/3912/uploads/BSZ-2015/Z_002895D0-C832-4698-84E6-89F34CDEC2AE_20144555_ST_1.mp4
        120533444364 97332 -rw-r--r-- 2 2001 2001 99662854 Jan  8 13:40 /gluster-export/.glusterfs/a8/d0/a8d0387d-c5ad-4eeb-9fc6-637fb8299a50

        The directory on the slave already contains some files; none of
        them are available on the master anymore, obviously deleted in
        the meantime on the master by a client.
        I have deleted and recreated this file on the master and
        watched the logs for occurrences of the newly created gfid of
        this file... same result as before.

        In
        http://comments.gmane.org/gmane.comp.file-systems.gluster.user/20703
        a user reports a geo-replication problem which is possibly
        caused by differing gfids of the underlying directories.
        And yes, for the file example above the gfid of the underlying
        directory on the slave differs from the gfid on the master,
        while most of the other directories have the same gfid.

        master :
        ...
        # file: gluster-export/3912/uploads/BSP-2012
        trusted.gfid=0x8f1d480351bb455b9adde190f2c2b350
        --------------
        # file: gluster-export/3912/uploads/BSZ-2003
        trusted.gfid=0xe80adc088e604234b778997d8e8c2018
        --------------
        # file: gluster-export/3912/uploads/BSZ-2004
        trusted.gfid=0xfe417dd16bbe4ae4a6a1936cfee7aced
        --------------
        # file: gluster-export/3912/uploads/BSZ-2010
        trusted.gfid=0x8044e436407d4ed3a67c81df8a7ad47f ###
        --------------
        # file: gluster-export/3912/uploads/BSZ-2015
        trusted.gfid=0x0c30f50480204e02b65d4716a048b029 ###

        slave :
        ...
        # file: gluster-export/3912/uploads/BSP-2012
        trusted.gfid=0x8f1d480351bb455b9adde190f2c2b350
        --------------
        # file: gluster-export/3912/uploads/BSZ-2003
        trusted.gfid=0xe80adc088e604234b778997d8e8c2018
        --------------
        # file: gluster-export/3912/uploads/BSZ-2004
        trusted.gfid=0xfe417dd16bbe4ae4a6a1936cfee7aced
        --------------
        # file: gluster-export/3912/uploads/BSZ-2010
        trusted.gfid=0xd83e8fb568c74e33a2091c547512a6ce ###
        --------------
        # file: gluster-export/3912/uploads/BSZ-2015
        trusted.gfid=0xa406e1bec7f3454d8f2ce9c5f9c70eb3 ###
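
        For reference, output like the above can be gathered with something
        like this sketch (run as root on a brick backend, where the trusted.*
        xattrs are visible):

            getfattr --absolute-names -n trusted.gfid -e hex /gluster-export/3912/uploads/*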


        Now the question... how to fix this?
        In the thread above Aravinda wrote:

        ...
        To fix the issue,
        -----------------
        Find the parent directory of "main.mdb",
        Get the GFID of that directory, using getfattr
        Check the GFID of the same directory in Slave (to confirm the
        GFIDs are different)
        To fix the issue, Delete that directory in Slave.
        Set virtual xattr for that directory and all the files inside
        that directory.
               setfattr -n glusterfs.geo-rep.trigger-sync -v "1" <DIR>
               setfattr -n glusterfs.geo-rep.trigger-sync -v "1"
        <file-path>

        Geo-rep will recreate the directory with Proper GFID and
        starts sync.

        Deleting the affected slave directory might be helpful...
        but do I have to execute the setfattr commands shown above on
        the master, or do they just speed up synchronization?
        Usually sync should start automatically, or could there be a
        problem because the crawl status is still 'hybrid crawl'...?

        thanks in advance...
        best regards
        dietmar




        On 04.01.2016 12:08, Dietmar Putz wrote:

            Hello Aravinda,

            thank you for your reply.
            i just made a 'find /gluster-export -type f -exec ls -lisa
            {} \; >
            ls-lisa-gluster-export-`hostname`.out' on each brick and
            checked the
            output for files with less than 2 link counts.
            i found nothing...all files on each brick have exact 2 links.

            the entire output for all bricks contain more than 7
            million lines
            including .glusterfs but without non relevant directories
            and files..
            tron@dp-server:~/geo_rep_3$ cat ls-lisa-gluster-wien-0* | egrep -v 'indices|landfill|changelogs|health_check' | wc -l
            7007316

            link count is on $4 :
            tron@dp-server:~/geo_rep_3$ cat ls-lisa-gluster-wien-0* | egrep -v 'indices|landfill|changelogs|health_check' | awk '{if($4=="2")print}' | tail -1
            62648153697 4 -rw-rw-rw- 2 root root 1713 Jan  4 01:44 /gluster-export/3500/files/16/01/387233/3500-6dqMmBcVby97PQtR.ism

            tron@dp-server:~/geo_rep_3$ cat ls-lisa-gluster-wien-0* | egrep -v 'indices|landfill|changelogs|health_check' | awk '{if($4=="1")print}'
            tron@dp-server:~/geo_rep_3$ cat ls-lisa-gluster-wien-0* | egrep -v 'indices|landfill|changelogs|health_check' | awk '{if($4!="2")print}'
            tron@dp-server:~/geo_rep_3$ cat ls-lisa-gluster-wien-0* | egrep -v 'indices|landfill|changelogs|health_check' | awk '{print $4}' | sort | uniq -c
            7007316 2
            tron@dp-server:~/geo_rep_3$

            If I understood you right, this cannot be the reason for
            the problem.
            Is there any other hint I can check on the master or slave
            to analyse the problem...?

            Any help would be much appreciated.
            best regards
            dietmar



            On 04.01.2016 at 07:14, Aravinda wrote:

                Hi,

                Looks like an issue with Geo-rep due to a race between
                Create and Rename.
                Geo-replication uses gfid-access (mounting the volume
                with aux-gfid-mount) to create and rename files. If
                Create and Rename are replayed more than once, then
                Geo-rep creates two files with the same GFID (not
                hardlinks).
                This leaves one file without a backend GFID link.

                Milind is working on a patch to disallow the creation
                of a second file with the same GFID.
                @Milind, please provide more updates about your patch.

                As a workaround, identify all the files in the Slave
                volume which do not have backend links and delete those
                files (only on the Slaves; keep a backup if required).

                In the brick backend, crawl and look for files with a
                link count of less than 2 (exclude the .glusterfs and
                .trashcan directories), e.g. as sketched below.
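
                A minimal sketch with GNU find (run on each brick backend):

                    find /gluster-export -path '*/.glusterfs' -prune -o \
                         -path '*/.trashcan' -prune -o \
                         -type f -links -2 -print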

                regards
                Aravinda

                On 01/02/2016 09:56 PM, Dietmar Putz wrote:

                    Hello all,

                    One more time I need some help with a
                    geo-replication problem.
                    Recently I started a new geo-replication. The
                    master volume contains about 45 TB of data, and the
                    slave volume was newly created before the
                    geo-replication setup was done.
                    Master and slave are each 6-node distributed
                    replicated volumes running glusterfs-server
                    3.6.7-ubuntu1~trusty1.
                    Geo-rep started without problems. For a few days
                    now the slave volume has contained about 200 GB
                    more data than the master volume, and I expected
                    the crawl status to change from 'hybrid crawl' to
                    'changelog crawl', but it remains in 'hybrid crawl'.
                    The 'status detail' output far below shows more
                    than 10 million synced files while the entire
                    master volume contains just about 2 million files.
                    Some tests show that files are not deleted on the
                    slave volume.
                    As far as I know, hybrid crawl has the limitation
                    of not replicating deletes and renames to the
                    slave, so geo-rep needs to reach 'changelog crawl'
                    status after the initial sync...
                    usually this should happen more or less
                    automatically, is that right?

                    The geo-rep frequently fails with the below shown
                    "OSError: [Errno 16] Device or resource busy"; this
                    error appears about every 3-4 hours on each active
                    master node.
                    I guess the frequent appearance of this error
                    prevents geo-rep from changing to 'changelog
                    crawl'. Has anybody experienced such a problem? Is
                    this the cause?

                    I found some similar reports on gluster.org for
                    gfs 3.5, 3.6 and 3.7, but none of them points me to
                    a solution...
                    Does anybody know a solution, or is there a
                    workaround to reach the changelog crawl status...?

                    Any help would be much appreciated.
                    best regards
                    dietmar




                    Master gluster-ger-ber-07:
                    -----------------------------

                    [2016-01-02 11:39:48.122546] I [master(/gluster-export):1343:crawl] _GMaster: processing xsync changelog /var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01/9d7139ecf10a6fc33a6e41d8d6e56984/xsync/XSYNC-CHANGELOG.1451724692
                    [2016-01-02 11:42:55.182342] I [master(/gluster-export):1343:crawl] _GMaster: processing xsync changelog /var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01/9d7139ecf10a6fc33a6e41d8d6e56984/xsync/XSYNC-CHANGELOG.1451724751
                    [2016-01-02 11:44:11.168962] I
                    [master(/gluster-export):1340:crawl]
                    _GMaster: finished hybrid crawl syncing, stime:
                    (-1, 0)
                    [2016-01-02 11:44:11.246845] I
                    [master(/gluster-export):490:crawlwrap] _GMaster:
                    primary master
                    with volume id
                    6a071cfa-b150-4f0b-b1ed-96ab5d4bd671 ...
                    [2016-01-02 11:44:11.265209] I
                    [master(/gluster-export):501:crawlwrap] _GMaster:
                    crawl interval: 3
                    seconds
                    [2016-01-02 11:44:11.896940] I
                    [master(/gluster-export):1192:crawl]
                    _GMaster: slave's time: (-1, 0)
                    [2016-01-02 11:44:12.171761] E [repce(/gluster-export):207:__call__] RepceClient: call 18897:139899553576768:1451735052.09 (entry_ops) failed on peer with OSError
                    [2016-01-02 11:44:12.172101] E [syncdutils(/gluster-export):270:log_raise_exception] <top>: FAIL:
                    Traceback (most recent call last):
                      File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 164, in main
                        main_i()
                      File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 643, in main_i
                        local.service_loop(*[r for r in [remote] if r])
                      File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1344, in service_loop
                        g2.crawlwrap()
                      File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 539, in crawlwrap
                        self.crawl(no_stime_update=no_stime_update)
                      File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1204, in crawl
                        self.process(changes)
                      File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 956, in process
                        self.process_change(change, done, retry)
                      File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 920, in process_change
                        self.slave.server.entry_ops(entries)
                      File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 226, in __call__
                        return self.ins(self.meth, *a)
                      File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 208, in __call__
                        raise res
                    OSError: [Errno 16] Device or resource busy
                    [2016-01-02 11:44:12.258982] I
                    [syncdutils(/gluster-export):214:finalize] <top>:
                    exiting.
                    [2016-01-02 11:44:12.321808] I
                    [repce(agent):92:service_loop]
                    RepceServer: terminating on reaching EOF.
                    [2016-01-02 11:44:12.349766] I
                    [syncdutils(agent):214:finalize]
                    <top>: exiting.
                    [2016-01-02 11:44:12.435992] I
                    [monitor(monitor):141:set_state]
                    Monitor: new state: faulty
                    [2016-01-02 11:44:23.164284] I
                    [monitor(monitor):215:monitor]
                    Monitor:
                    ------------------------------------------------------------
                    [2016-01-02 11:44:23.169981] I
                    [monitor(monitor):216:monitor]
                    Monitor: starting gsyncd worker
                    [2016-01-02 11:44:23.216662] I
                    [changelogagent(agent):72:__init__]
                    ChangelogAgent: Agent listining...
                    [2016-01-02 11:44:23.239778] I
                    [gsyncd(/gluster-export):633:main_i]
                    <top>: syncing: gluster://localhost:ger-ber-01 ->
                    
ssh://root@gluster-wien-07-int:gluster://localhost:aut-wien-vol-01
                    [2016-01-02 11:44:26.358613] I
                    [master(/gluster-export):75:gmaster_builder]
                    <top>: setting up xsync
                    change detection mode
                    [2016-01-02 11:44:26.358983] I
                    [master(/gluster-export):413:__init__] _GMaster:
                    using 'rsync' as
                    the sync engine
                    [2016-01-02 11:44:26.359985] I
                    [master(/gluster-export):75:gmaster_builder]
                    <top>: setting up
                    changelog change detection mode
                    [2016-01-02 11:44:26.360243] I
                    [master(/gluster-export):413:__init__] _GMaster:
                    using 'rsync' as
                    the sync engine
                    [2016-01-02 11:44:26.361159] I
                    [master(/gluster-export):75:gmaster_builder]
                    <top>: setting up
                    changeloghistory change detection mode
                    [2016-01-02 11:44:26.361377] I
                    [master(/gluster-export):413:__init__] _GMaster:
                    using 'rsync' as
                    the sync engine
                    [2016-01-02 11:44:26.402601] I [master(/gluster-export):1311:register] _GMaster: xsync temp directory: /var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01/9d7139ecf10a6fc33a6e41d8d6e56984/xsync
                    [2016-01-02 11:44:26.402848] I
                    [resource(/gluster-export):1318:service_loop]
                    GLUSTER: Register
                    time: 1451735066
                    [2016-01-02 11:44:27.26012] I
                    [master(/gluster-export):490:crawlwrap] _GMaster:
                    primary master
                    with volume id
                    6a071cfa-b150-4f0b-b1ed-96ab5d4bd671 ...
                    [2016-01-02 11:44:27.31605] I
                    [master(/gluster-export):501:crawlwrap] _GMaster:
                    crawl interval: 1
                    seconds
                    [2016-01-02 11:44:27.66868] I
                    [master(/gluster-export):1226:crawl]
                    _GMaster: starting history crawl... turns: 1,
                    stime: (-1, 0)
                    [2016-01-02 11:44:27.67043] I
                    [master(/gluster-export):1229:crawl]
                    _GMaster: stime not available, abandoning history
                    crawl
                    [2016-01-02 11:44:27.112426] I
                    [master(/gluster-export):490:crawlwrap] _GMaster:
                    primary master
                    with volume id
                    6a071cfa-b150-4f0b-b1ed-96ab5d4bd671 ...
                    [2016-01-02 11:44:27.117506] I
                    [master(/gluster-export):501:crawlwrap] _GMaster:
                    crawl interval: 60
                    seconds
                    [2016-01-02 11:44:27.140610] I
                    [master(/gluster-export):1333:crawl]
                    _GMaster: starting hybrid crawl..., stime: (-1, 0)
                    [2016-01-02 11:45:23.417233] I
                    [monitor(monitor):141:set_state]
                    Monitor: new state: Stable
                    [2016-01-02 11:45:48.225915] I [master(/gluster-export):1343:crawl] _GMaster: processing xsync changelog /var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01/9d7139ecf10a6fc33a6e41d8d6e56984/xsync/XSYNC-CHANGELOG.1451735067
                    [2016-01-02 11:47:08.65231] I [master(/gluster-export):1343:crawl] _GMaster: processing xsync changelog /var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01/9d7139ecf10a6fc33a6e41d8d6e56984/xsync/XSYNC-CHANGELOG.1451735148
                    ...


                    slave gluster-wien-07 :
                    ------------------------

                    [2016-01-02 11:44:12.007744] W
                    [fuse-bridge.c:1261:fuse_err_cbk]
                    0-glusterfs-fuse: 1959820: SETXATTR()
                    /.gfid/5e436e5b-086b-4720-9e70-0e49c8e09698 => -1
                    (File exists)
                    [2016-01-02 11:44:12.010970] W
                    [client-rpc-fops.c:240:client3_3_mknod_cbk]
                    0-aut-wien-vol-01-client-5: remote operation
                    failed: File exists.
                    Path:
                    
<gfid:666bceac-7c14-4efd-81fe-8185458fcf1f>/11-kxyrM3NgdtBWPFv4.webm
                    [2016-01-02 11:44:12.011327] W
                    [client-rpc-fops.c:240:client3_3_mknod_cbk]
                    0-aut-wien-vol-01-client-4: remote operation
                    failed: File exists.
                    Path:
                    
<gfid:666bceac-7c14-4efd-81fe-8185458fcf1f>/11-kxyrM3NgdtBWPFv4.webm
                    [2016-01-02 11:44:12.012054] W
                    [fuse-bridge.c:1261:fuse_err_cbk]
                    0-glusterfs-fuse: 1959822: SETXATTR()
                    /.gfid/666bceac-7c14-4efd-81fe-8185458fcf1f => -1
                    (File exists)
                    [2016-01-02 11:44:12.024743] W
                    [client-rpc-fops.c:240:client3_3_mknod_cbk]
                    0-aut-wien-vol-01-client-5: remote operation
                    failed: File exists.
                    Path:
                    
<gfid:5bfd6f99-07e8-4b2f-844b-aa0b6535c055>/Gf4FYbpDTC7yK2mv.png
                    [2016-01-02 11:44:12.024970] W
                    [client-rpc-fops.c:240:client3_3_mknod_cbk]
                    0-aut-wien-vol-01-client-4: remote operation
                    failed: File exists.
                    Path:
                    
<gfid:5bfd6f99-07e8-4b2f-844b-aa0b6535c055>/Gf4FYbpDTC7yK2mv.png
                    [2016-01-02 11:44:12.025601] W
                    [fuse-bridge.c:1261:fuse_err_cbk]
                    0-glusterfs-fuse: 1959823: SETXATTR()
                    /.gfid/5bfd6f99-07e8-4b2f-844b-aa0b6535c055 => -1
                    (File exists)
                    [2016-01-02 11:44:12.100688] I
                    [dht-selfheal.c:1065:dht_selfheal_layout_new_directory]
                    0-aut-wien-vol-01-dht: chunk size = 0xffffffff /
                    57217563 = 0x4b
                    [2016-01-02 11:44:12.100765] I
                    [dht-selfheal.c:1103:dht_selfheal_layout_new_directory]
                    0-aut-wien-vol-01-dht: assigning range size
                    0x5542c4a3 to
                    aut-wien-vol-01-replicate-0
                    [2016-01-02 11:44:12.100785] I
                    [dht-selfheal.c:1103:dht_selfheal_layout_new_directory]
                    0-aut-wien-vol-01-dht: assigning range size
                    0x5542c4a3 to
                    aut-wien-vol-01-replicate-1
                    [2016-01-02 11:44:12.100800] I
                    [dht-selfheal.c:1103:dht_selfheal_layout_new_directory]
                    0-aut-wien-vol-01-dht: assigning range size
                    0x5542c4a3 to
                    aut-wien-vol-01-replicate-2
                    [2016-01-02 11:44:12.100839] I [MSGID: 109036] [dht-common.c:6296:dht_log_new_layout_for_dir_selfheal] 0-aut-wien-vol-01-dht: Setting layout of <gfid:d4815ee4-3348-4105-9136-d0219d956ed8>/.dstXXX0HUpRD with [Subvol_name: aut-wien-vol-01-replicate-0, Err: -1 , Start: 0 , Stop: 1430439074 ], [Subvol_name: aut-wien-vol-01-replicate-1, Err: -1 , Start: 1430439075 , Stop: 2860878149 ], [Subvol_name: aut-wien-vol-01-replicate-2, Err: -1 , Start: 2860878150 , Stop: 4294967295 ],
                    [2016-01-02 11:44:12.114192] W
                    [client-rpc-fops.c:306:client3_3_mkdir_cbk]
                    0-aut-wien-vol-01-client-2: remote operation
                    failed: File exists.
                    Path:
                    <gfid:cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2>/.dstXXb70G3x
                    [2016-01-02 11:44:12.114275] W
                    [client-rpc-fops.c:306:client3_3_mkdir_cbk]
                    0-aut-wien-vol-01-client-3: remote operation
                    failed: File exists.
                    Path:
                    <gfid:cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2>/.dstXXb70G3x
                    [2016-01-02 11:44:12.114879] W
                    [fuse-bridge.c:1261:fuse_err_cbk]
                    0-glusterfs-fuse: 1959831: SETXATTR()
                    /.gfid/cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2 => -1
                    (File exists)
                    [2016-01-02 11:44:12.118473] I
                    [dht-layout.c:663:dht_layout_normalize]
                    0-aut-wien-vol-01-dht: Found
                    anomalies in
                    /.gfid/cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2/.dstXXb70G3x
                    (gfid =
                    00000000-0000-0000-0000-000000000000). Holes=1
                    overlaps=0
                    [2016-01-02 11:44:12.118537] I
                    [dht-selfheal.c:1065:dht_selfheal_layout_new_directory]
                    0-aut-wien-vol-01-dht: chunk size = 0xffffffff /
                    57217563 = 0x4b
                    [2016-01-02 11:44:12.118562] I
                    [dht-selfheal.c:1103:dht_selfheal_layout_new_directory]
                    0-aut-wien-vol-01-dht: assigning range size
                    0x5542c4a3 to
                    aut-wien-vol-01-replicate-2
                    [2016-01-02 11:44:12.118579] I
                    [dht-selfheal.c:1103:dht_selfheal_layout_new_directory]
                    0-aut-wien-vol-01-dht: assigning range size
                    0x5542c4a3 to
                    aut-wien-vol-01-replicate-0
                    [2016-01-02 11:44:12.118613] I
                    [dht-selfheal.c:1103:dht_selfheal_layout_new_directory]
                    0-aut-wien-vol-01-dht: assigning range size
                    0x5542c4a3 to
                    aut-wien-vol-01-replicate-1
                    [2016-01-02 11:44:12.120352] I [MSGID: 109036] [dht-common.c:6296:dht_log_new_layout_for_dir_selfheal] 0-aut-wien-vol-01-dht: Setting layout of /.gfid/cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2/.dstXXb70G3x with [Subvol_name: aut-wien-vol-01-replicate-0, Err: -1 , Start: 1430439075 , Stop: 2860878149 ], [Subvol_name: aut-wien-vol-01-replicate-1, Err: -1 , Start: 2860878150 , Stop: 4294967295 ], [Subvol_name: aut-wien-vol-01-replicate-2, Err: -1 , Start: 0 , Stop: 1430439074 ],
                    [2016-01-02 11:44:12.630949] I
                    [fuse-bridge.c:4927:fuse_thread_proc]
                    0-fuse: unmounting /tmp/gsyncd-aux-mount-tOUOsz
                    [2016-01-02 11:44:12.633952] W
                    [glusterfsd.c:1211:cleanup_and_exit]
                    (--> 0-: received signum (15), shutting down
                    [2016-01-02 11:44:12.633964] I
                    [fuse-bridge.c:5607:fini] 0-fuse:
                    Unmounting '/tmp/gsyncd-aux-mount-tOUOsz'.
                    [2016-01-02 11:44:23.946702] I [MSGID: 100030] [glusterfsd.c:2035:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.6.7 (args: /usr/sbin/glusterfs --aux-gfid-mount --log-file=/var/log/glusterfs/geo-replication-slaves/6a071cfa-b150-4f0b-b1ed-96ab5d4bd671:gluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.gluster.log --volfile-server=localhost --volfile-id=aut-wien-vol-01 --client-pid=-1 /tmp/gsyncd-aux-mount-otU3wS)
                    [2016-01-02 11:44:24.042128] I
                    [dht-shared.c:337:dht_init_regex]
                    0-aut-wien-vol-01-dht: using regex
                    rsync-hash-regex = ^\.(.+)\.[^.]+$
                    [2016-01-02 11:44:24.046315] I [client.c:2268:notify]
                    0-aut-wien-vol-01-client-0: parent translators are
                    ready, attempting
                    connect on transport
                    [2016-01-02 11:44:24.046532] I [client.c:2268:notify]
                    0-aut-wien-vol-01-client-1: parent translators are
                    ready, attempting
                    connect on transport
                    [2016-01-02 11:44:24.046664] I [client.c:2268:notify]
                    0-aut-wien-vol-01-client-2: parent translators are
                    ready, attempting
                    connect on transport
                    [2016-01-02 11:44:24.046806] I [client.c:2268:notify]
                    0-aut-wien-vol-01-client-3: parent translators are
                    ready, attempting
                    connect on transport
                    [2016-01-02 11:44:24.046940] I [client.c:2268:notify]
                    0-aut-wien-vol-01-client-4: parent translators are
                    ready, attempting
                    connect on transport
                    [2016-01-02 11:44:24.047070] I [client.c:2268:notify]
                    0-aut-wien-vol-01-client-5: parent translators are
                    ready, attempting
                    connect on transport
                    Final graph:
                    
+------------------------------------------------------------------------------+

                       1: volume aut-wien-vol-01-client-0
                       2:     type protocol/client
                       3:     option ping-timeout 10
                       4:     option remote-host gluster-wien-02-int
                       5:     option remote-subvolume /gluster-export
                       6:     option transport-type socket
                       7:     option username
                    6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929
                       8:     option password
                    8777e154-476c-449a-89b2-3199872e4a1f
                       9:     option send-gids true
                      10: end-volume
                      11:
                      12: volume aut-wien-vol-01-client-1
                      13:     type protocol/client
                      14:     option ping-timeout 10
                      15:     option remote-host gluster-wien-03-int
                      16:     option remote-subvolume /gluster-export
                      17:     option transport-type socket
                      18:     option username
                    6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929
                      19:     option password
                    8777e154-476c-449a-89b2-3199872e4a1f
                      20:     option send-gids true
                      21: end-volume
                      22:
                      23: volume aut-wien-vol-01-replicate-0
                      24:     type cluster/replicate
                      25:     subvolumes aut-wien-vol-01-client-0
                    aut-wien-vol-01-client-1
                      26: end-volume
                      27:
                      28: volume aut-wien-vol-01-client-2
                      29:     type protocol/client
                      30:     option ping-timeout 10
                      31:     option remote-host gluster-wien-04-int
                      32:     option remote-subvolume /gluster-export
                      33:     option transport-type socket
                      34:     option username
                    6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929
                      35:     option password
                    8777e154-476c-449a-89b2-3199872e4a1f
                      36:     option send-gids true
                      37: end-volume
                      38:
                      39: volume aut-wien-vol-01-client-3
                      40:     type protocol/client
                      41:     option ping-timeout 10
                      42:     option remote-host gluster-wien-05-int
                      43:     option remote-subvolume /gluster-export
                      44:     option transport-type socket
                      45:     option username
                    6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929
                      46:     option password
                    8777e154-476c-449a-89b2-3199872e4a1f
                      47:     option send-gids true
                      48: end-volume
                      49:
                      50: volume aut-wien-vol-01-replicate-1
                      51:     type cluster/replicate
                      52:     subvolumes aut-wien-vol-01-client-2
                    aut-wien-vol-01-client-3
                      53: end-volume
                      54:
                      55: volume aut-wien-vol-01-client-4
                      56:     type protocol/client
                      57:     option ping-timeout 10
                      58:     option remote-host gluster-wien-06-int
                      59:     option remote-subvolume /gluster-export
                      60:     option transport-type socket
                      61:     option username
                    6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929
                      62:     option password
                    8777e154-476c-449a-89b2-3199872e4a1f
                      63:     option send-gids true
                      64: end-volume
                      65:
                      66: volume aut-wien-vol-01-client-5
                      67:     type protocol/client
                      68:     option ping-timeout 10
                      69:     option remote-host gluster-wien-07-int
                      70:     option remote-subvolume /gluster-export
                      71:     option transport-type socket
                      72:     option username
                    6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929
                      73:     option password
                    8777e154-476c-449a-89b2-3199872e4a1f
                      74:     option send-gids true
                      75: end-volume
                      76:
                      77: volume aut-wien-vol-01-replicate-2
                      78:     type cluster/replicate
                      79:     subvolumes aut-wien-vol-01-client-4
                    aut-wien-vol-01-client-5
                      80: end-volume
                      81:
                      82: volume aut-wien-vol-01-dht
                      83:     type cluster/distribute
                      84:     subvolumes aut-wien-vol-01-replicate-0
                    aut-wien-vol-01-replicate-1
                    aut-wien-vol-01-replicate-2
                      85: end-volume
                      86:
                      87: volume aut-wien-vol-01-write-behind
                      88:     type performance/write-behind
                      89:     subvolumes aut-wien-vol-01-dht
                      90: end-volume
                      91:
                      92: volume aut-wien-vol-01-read-ahead
                      93:     type performance/read-ahead
                      94:     subvolumes aut-wien-vol-01-write-behind
                      95: end-volume
                      96:
                      97: volume aut-wien-vol-01-io-cache
                      98:     type performance/io-cache
                      99:     option min-file-size 0
                    100:     option cache-timeout 2
                    101:     option cache-size 1024MB
                    102:     subvolumes aut-wien-vol-01-read-ahead
                    103: end-volume
                    104:
                    105: volume aut-wien-vol-01-quick-read
                    106:     type performance/quick-read
                    107:     option cache-size 1024MB
                    108:     subvolumes aut-wien-vol-01-io-cache
                    109: end-volume
                    110:
                    111: volume aut-wien-vol-01-open-behind
                    112:     type performance/open-behind
                    113:     subvolumes aut-wien-vol-01-quick-read
                    114: end-volume
                    115:
                    116: volume aut-wien-vol-01-md-cache
                    117:     type performance/md-cache
                    118:     subvolumes aut-wien-vol-01-open-behind
                    119: end-volume
                    120:
                    121: volume aut-wien-vol-01
                    122:     type debug/io-stats
                    123:     option latency-measurement off
                    124:     option count-fop-hits off
                    125:     subvolumes aut-wien-vol-01-md-cache
                    126: end-volume
                    127:
                    128: volume gfid-access-autoload
                    129:     type features/gfid-access
                    130:     subvolumes aut-wien-vol-01
                    131: end-volume
                    132:
                    133: volume meta-autoload
                    134:     type meta
                    135:     subvolumes gfid-access-autoload
                    136: end-volume
                    137:
                    
+------------------------------------------------------------------------------+

                    [2016-01-02 11:44:24.047642] I
                    [rpc-clnt.c:1761:rpc_clnt_reconfig]
                    0-aut-wien-vol-01-client-5: changing port to 49153
                    (from 0)
                    [2016-01-02 11:44:24.047927] I
                    [client-handshake.c:1413:select_server_supported_programs]
                    0-aut-wien-vol-01-client-5: Using Program
                    GlusterFS 3.3, Num
                    (1298437), Version (330)
                    [2016-01-02 11:44:24.048044] I
                    [client-handshake.c:1200:client_setvolume_cbk]
                    0-aut-wien-vol-01-client-5: Connected to
                    aut-wien-vol-01-client-5,
                    attached to remote volume '/gluster-export'.
                    [2016-01-02 11:44:24.048050] I
                    [client-handshake.c:1210:client_setvolume_cbk]
                    0-aut-wien-vol-01-client-5: Server and Client
                    lk-version numbers are
                    not same, reopening the fds
                    [2016-01-02 11:44:24.048088] I [MSGID: 108005]
                    [afr-common.c:3684:afr_notify]
                    0-aut-wien-vol-01-replicate-2:
                    Subvolume 'aut-wien-vol-01-client-5' came back up;
                    going online.
                    [2016-01-02 11:44:24.048114] I
                    [client-handshake.c:188:client_set_lk_version_cbk]
                    0-aut-wien-vol-01-client-5: Server lk version = 1
                    [2016-01-02 11:44:24.048124] I
                    [rpc-clnt.c:1761:rpc_clnt_reconfig]
                    0-aut-wien-vol-01-client-0: changing port to 49153
                    (from 0)
                    [2016-01-02 11:44:24.048132] I
                    [rpc-clnt.c:1761:rpc_clnt_reconfig]
                    0-aut-wien-vol-01-client-1: changing port to 49153
                    (from 0)
                    [2016-01-02 11:44:24.048138] I
                    [rpc-clnt.c:1761:rpc_clnt_reconfig]
                    0-aut-wien-vol-01-client-2: changing port to 49153
                    (from 0)
                    [2016-01-02 11:44:24.048146] I
                    [rpc-clnt.c:1761:rpc_clnt_reconfig]
                    0-aut-wien-vol-01-client-3: changing port to 49153
                    (from 0)
                    [2016-01-02 11:44:24.048153] I
                    [rpc-clnt.c:1761:rpc_clnt_reconfig]
                    0-aut-wien-vol-01-client-4: changing port to 49153
                    (from 0)
                    [2016-01-02 11:44:24.049070] I
                    [client-handshake.c:1413:select_server_supported_programs]
                    0-aut-wien-vol-01-client-0: Using Program
                    GlusterFS 3.3, Num
                    (1298437), Version (330)
                    [2016-01-02 11:44:24.049094] I
                    [client-handshake.c:1413:select_server_supported_programs]
                    0-aut-wien-vol-01-client-3: Using Program
                    GlusterFS 3.3, Num
                    (1298437), Version (330)
                    [2016-01-02 11:44:24.049113] I
                    [client-handshake.c:1413:select_server_supported_programs]
                    0-aut-wien-vol-01-client-2: Using Program
                    GlusterFS 3.3, Num
                    (1298437), Version (330)
                    [2016-01-02 11:44:24.049131] I
                    [client-handshake.c:1413:select_server_supported_programs]
                    0-aut-wien-vol-01-client-1: Using Program
                    GlusterFS 3.3, Num
                    (1298437), Version (330)
                    [2016-01-02 11:44:24.049224] I
                    [client-handshake.c:1413:select_server_supported_programs]
                    0-aut-wien-vol-01-client-4: Using Program
                    GlusterFS 3.3, Num
                    (1298437), Version (330)
                    [2016-01-02 11:44:24.049307] I
                    [client-handshake.c:1200:client_setvolume_cbk]
                    0-aut-wien-vol-01-client-0: Connected to
                    aut-wien-vol-01-client-0,
                    attached to remote volume '/gluster-export'.
                    [2016-01-02 11:44:24.049312] I
                    [client-handshake.c:1210:client_setvolume_cbk]
                    0-aut-wien-vol-01-client-0: Server and Client
                    lk-version numbers are
                    not same, reopening the fds
                    [2016-01-02 11:44:24.049324] I [MSGID: 108005]
                    [afr-common.c:3684:afr_notify]
                    0-aut-wien-vol-01-replicate-0:
                    Subvolume 'aut-wien-vol-01-client-0' came back up;
                    going online.
                    [2016-01-02 11:44:24.049384] I
                    [client-handshake.c:1200:client_setvolume_cbk]
                    0-aut-wien-vol-01-client-3: Connected to
                    aut-wien-vol-01-client-3,
                    attached to remote volume '/gluster-export'.
                    [2016-01-02 11:44:24.049389] I
                    [client-handshake.c:1210:client_setvolume_cbk]
                    0-aut-wien-vol-01-client-3: Server and Client
                    lk-version numbers are
                    not same, reopening the fds
                    [2016-01-02 11:44:24.049400] I [MSGID: 108005]
                    [afr-common.c:3684:afr_notify]
                    0-aut-wien-vol-01-replicate-1:
                    Subvolume 'aut-wien-vol-01-client-3' came back up;
                    going online.
                    [2016-01-02 11:44:24.049418] I
                    [client-handshake.c:1200:client_setvolume_cbk]
                    0-aut-wien-vol-01-client-2: Connected to
                    aut-wien-vol-01-client-2,
                    attached to remote volume '/gluster-export'.
                    [2016-01-02 11:44:24.049422] I
                    [client-handshake.c:1210:client_setvolume_cbk]
                    0-aut-wien-vol-01-client-2: Server and Client
                    lk-version numbers are
                    not same, reopening the fds
                    [2016-01-02 11:44:24.049460] I
                    [client-handshake.c:1200:client_setvolume_cbk]
                    0-aut-wien-vol-01-client-1: Connected to
                    aut-wien-vol-01-client-1,
                    attached to remote volume '/gluster-export'.
                    [2016-01-02 11:44:24.049465] I
                    [client-handshake.c:1210:client_setvolume_cbk]
                    0-aut-wien-vol-01-client-1: Server and Client
                    lk-version numbers are
                    not same, reopening the fds
                    [2016-01-02 11:44:24.049493] I
                    [client-handshake.c:188:client_set_lk_version_cbk]
                    0-aut-wien-vol-01-client-0: Server lk version = 1
                    [2016-01-02 11:44:24.049567] I
                    [client-handshake.c:188:client_set_lk_version_cbk]
                    0-aut-wien-vol-01-client-3: Server lk version = 1
                    [2016-01-02 11:44:24.049632] I
                    [client-handshake.c:1200:client_setvolume_cbk]
                    0-aut-wien-vol-01-client-4: Connected to
                    aut-wien-vol-01-client-4,
                    attached to remote volume '/gluster-export'.
                    [2016-01-02 11:44:24.049638] I
                    [client-handshake.c:1210:client_setvolume_cbk]
                    0-aut-wien-vol-01-client-4: Server and Client
                    lk-version numbers are
                    not same, reopening the fds
                    [2016-01-02 11:44:24.052103] I
                    [fuse-bridge.c:5086:fuse_graph_setup]
                    0-fuse: switched to graph 0
                    [2016-01-02 11:44:24.052150] I
                    [client-handshake.c:188:client_set_lk_version_cbk]
                    0-aut-wien-vol-01-client-2: Server lk version = 1
                    [2016-01-02 11:44:24.052163] I
                    [client-handshake.c:188:client_set_lk_version_cbk]
                    0-aut-wien-vol-01-client-4: Server lk version = 1
                    [2016-01-02 11:44:24.052192] I
                    [client-handshake.c:188:client_set_lk_version_cbk]
                    0-aut-wien-vol-01-client-1: Server lk version = 1
                    [2016-01-02 11:44:24.052204] I
                    [fuse-bridge.c:4015:fuse_init]
                    0-glusterfs-fuse: FUSE inited with protocol
                    versions: glusterfs 7.22
                    kernel 7.20
                    [2016-01-02 11:44:24.053991] I
                    [afr-common.c:1491:afr_local_discovery_cbk]
                    0-aut-wien-vol-01-replicate-2: selecting local
                    read_child
                    aut-wien-vol-01-client-5
                    [2016-01-02 11:45:48.613563] W
                    [client-rpc-fops.c:306:client3_3_mkdir_cbk]
                    0-aut-wien-vol-01-client-5: remote operation
                    failed: File exists.
                    Path: /keys
                    [2016-01-02 11:45:48.614131] W
                    [client-rpc-fops.c:306:client3_3_mkdir_cbk]
                    0-aut-wien-vol-01-client-4: remote operation
                    failed: File exists.
                    Path: /keys
                    [2016-01-02 11:45:48.614436] W
                    [fuse-bridge.c:1261:fuse_err_cbk]
                    0-glusterfs-fuse: 12: SETXATTR()
                    /.gfid/00000000-0000-0000-0000-000000000001 => -1
                    (File exists)
                    ...


                    [ 13:41:40 ] - root@gluster-ger-ber-07 /var/log/glusterfs/geo-replication/ger-ber-01
                    $gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 status detail

                    MASTER NODE           MASTER VOL    MASTER BRICK       SLAVE                                   STATUS     CHECKPOINT STATUS    CRAWL STATUS    FILES SYNCD    FILES PENDING    BYTES PENDING    DELETES PENDING    FILES SKIPPED
                    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                    gluster-ger-ber-07    ger-ber-01    /gluster-export    gluster-wien-07-int::aut-wien-vol-01    Active     N/A                  Hybrid Crawl    10743644       8192             0                0                  0
                    gluster-ger-ber-11    ger-ber-01    /gluster-export    gluster-wien-03-int::aut-wien-vol-01    Active     N/A                  Hybrid Crawl    16037091       8192             0                0                  0
                    gluster-ger-ber-10    ger-ber-01    /gluster-export    gluster-wien-02-int::aut-wien-vol-01    Passive    N/A                  N/A             0              0                0                0                  0
                    gluster-ger-ber-12    ger-ber-01    /gluster-export    gluster-wien-06-int::aut-wien-vol-01    Passive    N/A                  N/A             0              0                0                0                  0
                    gluster-ger-ber-09    ger-ber-01    /gluster-export    gluster-wien-05-int::aut-wien-vol-01    Active     N/A                  Hybrid Crawl    16180514       8192             0                0                  0
                    gluster-ger-ber-08    ger-ber-01    /gluster-export    gluster-wien-04-int::aut-wien-vol-01    Passive    N/A                  N/A             0              0                0                0                  0


                    [ 13:41:55 ] - root@gluster-ger-ber-07 /var/log/glusterfs/geo-replication/ger-ber-01
                    $gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 config
                    special_sync_mode: partial
                    state_socket_unencoded: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-07_aut-wien-vol-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.socket
                    gluster_log_file: /var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.gluster.log
                    ssh_command: ssh -p 2503 -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
                    ignore_deletes: true
                    change_detector: changelog
                    ssh_command_tar: ssh -p 2503 -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
                    state_file: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-07_aut-wien-vol-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.status
                    remote_gsyncd: /nonexistent/gsyncd
                    log_file: /var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.log
                    changelog_log_file: /var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01-changes.log
                    socketdir: /var/run
                    working_dir: /var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01
                    state_detail_file: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-07_aut-wien-vol-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01-detail.status
                    session_owner: 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671
                    gluster_command_dir: /usr/sbin/
                    pid_file: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-07_aut-wien-vol-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.pid
                    georep_session_working_dir: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-07_aut-wien-vol-01/
                    gluster_params: aux-gfid-mount
                    volume_id: 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671
                    [ 13:42:11 ] - root@gluster-ger-ber-07 /var/log/glusterfs/geo-replication/ger-ber-01 $







