We're running 3.1.3. We briefly tested 3.2.0 and rolled back to 3.1.3 by
reinstalling the Debian package.
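
For reference, a version-pinned install along these lines is one way to do such
a downgrade; the package names and version string here are guesses and depend
on the repository in use:

  apt-get install glusterfs-server=3.1.3-1 glusterfs-client=3.1.3-1
  echo glusterfs-server hold | dpkg --set-selections   # keep it from being upgraded again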

0 root@pserver12:~ # gluster volume info all

Volume Name: storage0
Type: Distributed-Replicate
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: rdma
Bricks:
Brick1: de-dc1-c1-pserver3:/mnt/gluster/brick0/storage
Brick2: de-dc1-c1-pserver5:/mnt/gluster/brick0/storage
Brick3: de-dc1-c1-pserver3:/mnt/gluster/brick1/storage
Brick4: de-dc1-c1-pserver5:/mnt/gluster/brick1/storage
Brick5: de-dc1-c1-pserver12:/mnt/gluster/brick0/storage
Brick6: de-dc1-c1-pserver13:/mnt/gluster/brick0/storage
Brick7: de-dc1-c1-pserver12:/mnt/gluster/brick1/storage
Brick8: de-dc1-c1-pserver13:/mnt/gluster/brick1/storage
Options Reconfigured:
network.ping-timeout: 5
nfs.disable: on
performance.cache-size: 4096MB
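
The "Options Reconfigured" entries above are per-volume settings changed via
"gluster volume set"; for reference they correspond to:

  gluster volume set storage0 network.ping-timeout 5
  gluster volume set storage0 nfs.disable on
  gluster volume set storage0 performance.cache-size 4096MB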

Best, Martin

-----Original Message-----
From: Mohit Anchlia [mailto:[email protected]] 
Sent: Wednesday, May 18, 2011 9:43 PM
To: Martin Schenker
Cc: [email protected]
Subject: Re: [Gluster-users] Client and server file "view", different results?! Client can't see the right file.

Which version are you running? Can you also post output from volume info?

Meanwhile, anyone from dev want to answer??

On Wed, May 18, 2011 at 1:53 AM, Martin Schenker <[email protected]> wrote:
> Here is another occurrence:
>
> The file 20819 is shown twice through the client mount, with different
> timestamps and attributes. The copy on pserver3 has a filesize of 0, the one
> on pserver5 is outdated, and only pserver12 and pserver13 seem to be in sync.
> So what's going on?
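>
> To line the four copies up quickly, a loop like the one below also works
> (assuming root SSH access to the hosts from the brick list); the listings
> further down show the same information gathered host by host:
>
>   for h in de-dc1-c1-pserver3 de-dc1-c1-pserver5 de-dc1-c1-pserver12 de-dc1-c1-pserver13; do
>     echo "== $h =="
>     ssh $h 'stat -c "%s %y %n" /mnt/gluster/brick?/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819 2>/dev/null'
>   done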
>
>
> 0 root@de-dc1-c1-pserver13:~ # ls -al /opt/profitbricks/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/2081*
> -rwxrwx--- 1 libvirt-qemu kvm 53687091200 May 18 08:44 /opt/profitbricks/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> -rwxrwx--- 1 libvirt-qemu kvm 53687091200 May 18 08:44 /opt/profitbricks/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
>
> 0 root@de-dc1-c1-pserver3:~ # find /mnt/gluster/brick?/ -name 20819 | xargs -i ls -al {}
> -rwxrwx--- 1 libvirt-qemu vcb 0 May 14 17:00 /mnt/gluster/brick0/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> 0 root@de-dc1-c1-pserver3:~ # getfattr -dm - /mnt/gluster/brick0/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> getfattr: Removing leading '/' from absolute path names
> # file: mnt/gluster/brick0/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> trusted.gfid=0sa5/rvjUUQ3ibSf32O3izOw==
>
> 0 root@pserver5:~ # find /mnt/gluster/brick?/ -name 20819 | xargs -i ls -al {}
> -rwxrwx--- 1 libvirt-qemu vcb 53687091200 May 14 17:00 /mnt/gluster/brick0/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> 0 root@pserver5:~ # getfattr -dm - /mnt/gluster/brick0/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> getfattr: Removing leading '/' from absolute path names
> # file: mnt/gluster/brick0/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> trusted.afr.storage0-client-0=0sAAAAAgAAAAIAAAAA
> trusted.afr.storage0-client-1=0sAAAAAAAAAAAAAAAA
> trusted.gfid=0sa5/rvjUUQ3ibSf32O3izOw==
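>
> The "0s" prefix means getfattr is printing the value base64-encoded; decoding
> it shows the raw bytes, e.g.:
>
>   echo AAAAAgAAAAIAAAAA | base64 -d | od -An -tx1
>    00 00 00 02 00 00 00 02 00 00 00 00
>
> As far as I understand the AFR changelog format, those are three big-endian
> 32-bit counters (pending data, metadata and entry operations), so pserver5 is
> holding pending data and metadata changes against storage0-client-0, which
> would fit the empty copy on pserver3's brick0.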
>
> 0 root@pserver12:~ # find /mnt/gluster/brick?/ -name 20819 | xargs -i ls -al {}
> -rwxrwx--- 1 libvirt-qemu kvm 53687091200 May 18 08:41 /mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> 0 root@pserver12:~ # getfattr -dm - /mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> getfattr: Removing leading '/' from absolute path names
> # file: mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> trusted.afr.storage0-client-6=0sAAAAAAAAAAAAAAAA
> trusted.afr.storage0-client-7=0sAAAAAAAAAAAAAAAA
> trusted.gfid=0sa5/rvjUUQ3ibSf32O3izOw==
> trusted.glusterfs.dht.linkto="storage0-replicate-0
>
> 0 root@de-dc1-c1-pserver13:~ # find /mnt/gluster/brick?/ -name 20819 | xargs -i ls -al {}
> -rwxrwx--- 1 libvirt-qemu kvm 53687091200 May 18 08:39 /mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> 0 root@de-dc1-c1-pserver13:~ # getfattr -dm - /mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> getfattr: Removing leading '/' from absolute path names
> # file: mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
> trusted.afr.storage0-client-6=0sAAAAAAAAAAAAAAAA
> trusted.afr.storage0-client-7=0sAAAAAAAAAAAAAAAA
> trusted.gfid=0sa5/rvjUUQ3ibSf32O3izOw==
> trusted.glusterfs.dht.linkto="storage0-replicate-0
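>
> As far as I know, trusted.glusterfs.dht.linkto normally only shows up on the
> zero-byte link files DHT creates (mode ---------T, sticky bit set), not on a
> full 50G replica like this one, which looks odd in itself. If it helps, such
> link files can be listed per brick with something like:
>
>   find /mnt/gluster/brick?/storage -type f -perm -1000 -size 0c | xargs -i ls -al {}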
>
> The only log entry is on pserver5; there are no references in the logs of the
> other three servers:
>
> 0 root@pserver5:~ # grep 20819 /var/log/glusterfs/opt-profitbricks-storage.log
> [2011-05-17 20:37:30.52535] I [client-handshake.c:407:client3_1_reopen_cbk] 0-storage0-client-7: reopen on /images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819 succeeded (remote-fd = 6)
> [2011-05-17 20:37:34.824934] I [afr-open.c:435:afr_openfd_sh] 0-storage0-replicate-3: data self-heal triggered. path: /images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819, reason: Replicate up down flush, data lock is held
> [2011-05-17 20:37:34.825557] E [afr-self-heal-common.c:1214:sh_missing_entries_create] 0-storage0-replicate-3: no missing files - /images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819. proceeding to metadata check
> [2011-05-17 21:08:59.241203] I [afr-self-heal-algorithm.c:526:sh_diff_loop_driver_done] 0-storage0-replicate-3: diff self-heal on /images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819: 6 blocks of 409600 were different (0.00%)
> [2011-05-17 21:08:59.275873] I [afr-self-heal-common.c:1527:afr_self_heal_completion_cbk] 0-storage0-replicate-3: background data self-heal completed on /images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20819
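>
> For completeness: as far as I know, the usual way to trigger self-heal on
> 3.1.x is to stat the affected files through the client mount, e.g.
>
>   find /opt/profitbricks/storage/images/2078 -noleaf -print0 | xargs --null stat >/dev/null
>
> but that doesn't explain how the copies got out of step in the first place.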
>
>
>

_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
