Of course. Here's the full log. Please note that in FUSE mode
everything apparently works without problems. I've installed 4 VMs and
updated them without issues.
On 17/01/2018 11:00, Krutika Dhananjay wrote:
On Tue, Jan 16, 2018 at 10:47 PM, Ing. Luca Lazzeroni - Trend Servizi
Srl <l...@gvnet.it> wrote:
I've run the test with the raw image format (preallocated, too) and
the corruption problem is still there (but without errors in the
bricks' log file).
What does the "link" error in the bricks' log files mean?
I've looked through the source code for the lines where it happens,
and it seems to be a warning (it doesn't imply a failure).
Indeed, it only represents a transient state when the shards are
created for the first time and does not indicate a failure.
Could you also get the logs of the gluster fuse mount process? It
should be under /var/log/glusterfs of your client machine with the
filename as a hyphenated mount point path.
For example, if your volume was mounted at /mnt/glusterfs, then your
log file would be named mnt-glusterfs.log.
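A quick way to list the candidates and grab the tail, following that
example (adjust the name to your actual mount point):
ls /var/log/glusterfs/*.log
tail -n 200 /var/log/glusterfs/mnt-glusterfs.log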
-Krutika
On 16/01/2018 17:39, Ing. Luca Lazzeroni - Trend Servizi Srl wrote:
An update:
I've tried, for my tests, to create the vm volume as
qemu-img create -f qcow2 -o preallocation=full
gluster://gluster1/Test/Test-vda.img 20G
et voilà!
No errors at all, neither in the bricks' log file (the "link failed"
message disappeared) nor in the VM (no corruption, and it installed
successfully).
I'll do another test with a fully preallocated raw image.
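(Assuming the same gluster:// URL scheme as above, the raw test would
be something like:
qemu-img create -f raw -o preallocation=full \
gluster://gluster1/Test/Test-vda-raw.img 20G
where the file name is illustrative.)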
On 16/01/2018 16:31, Ing. Luca Lazzeroni - Trend Servizi Srl wrote:
I've just done all the steps to reproduce the problem.
The VM volume was created via "qemu-img create -f qcow2
Test-vda2.qcow2 20G" on the gluster volume mounted via FUSE.
I've also tried creating the volume with preallocated metadata, which
only delays the problem. The volume is a replica 3 arbiter 1 volume
hosted on XFS bricks.
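(The preallocated-metadata variant would be something like:
qemu-img create -f qcow2 -o preallocation=metadata Test-vda2.qcow2 20G
on the same FUSE-mounted path.)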
Here is the information:
[root@ovh-ov1 bricks]# gluster volume info gv2a2
Volume Name: gv2a2
Type: Replicate
Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/bricks/brick2/gv2a2
Brick2: gluster3:/bricks/brick3/gv2a2
Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)
Options Reconfigured:
storage.owner-gid: 107
storage.owner-uid: 107
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: off
performance.client-io-threads: off
/var/log/glusterfs/glusterd.log:
[2018-01-15 14:17:50.196228] I [MSGID: 106488]
[glusterd-handler.c:1548:__glusterd_handle_cli_get_volume]
0-management: Received get vol req
[2018-01-15 14:25:09.555214] I [MSGID: 106488]
[glusterd-handler.c:1548:__glusterd_handle_cli_get_volume]
0-management: Received get vol req
(no entries for today, 2018-01-16)
/var/log/glusterfs/glustershd.log:
[2018-01-14 02:23:02.731245] I
[glusterfsd-mgmt.c:1821:mgmt_getspec_cbk] 0-glusterfs: No change
in volfile,continuing
(empty too)
/var/log/glusterfs/bricks/brick-brick2-gv2a2.log (the volume in
question):
[2018-01-16 15:14:37.809965] I [MSGID: 115029]
[server-handshake.c:793:server_setvolume] 0-gv2a2-server:
accepted client from
ovh-ov1-10302-2018/01/16-15:14:37:790306-gv2a2-client-0-0-0
(version: 3.12.4)
[2018-01-16 15:16:41.471751] E [MSGID: 113020]
[posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.4
failed
[2018-01-16 15:16:41.471745] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.4
->
/bricks/brick2/gv2a2/.glusterfs/a0/14/a0144df3-8d89-4aed-872e-5fef141e9e1e failed
[File exists]
[2018-01-16 15:16:42.593392] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.5
->
/bricks/brick2/gv2a2/.glusterfs/eb/04/eb044e6e-3a23-40a4-9ce1-f13af148eb67 failed
[File exists]
[2018-01-16 15:16:42.593426] E [MSGID: 113020]
[posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.5
failed
[2018-01-16 15:17:04.129593] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.8
->
/bricks/brick2/gv2a2/.glusterfs/dc/92/dc92bd0a-0d46-4826-a4c9-d073a924dd8d failed
[File exists]
The message "W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.8
->
/bricks/brick2/gv2a2/.glusterfs/dc/92/dc92bd0a-0d46-4826-a4c9-d073a924dd8d failed
[File exists]" repeated 5 times between [2018-01-16
15:17:04.129593] and [2018-01-16 15:17:04.129593]
[2018-01-16 15:17:04.129661] E [MSGID: 113020]
[posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.8
failed
[2018-01-16 15:17:08.279162] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9
->
/bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed
[File exists]
The message "W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9
->
/bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed
[File exists]" repeated 2 times between [2018-01-16
15:17:08.279162] and [2018-01-16 15:17:08.279162]
[2018-01-16 15:17:08.279177] E [MSGID: 113020]
[posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9
failed
The message "W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.4
->
/bricks/brick2/gv2a2/.glusterfs/a0/14/a0144df3-8d89-4aed-872e-5fef141e9e1e failed
[File exists]" repeated 6 times between [2018-01-16
15:16:41.471745] and [2018-01-16 15:16:41.471807]
The message "W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.5
->
/bricks/brick2/gv2a2/.glusterfs/eb/04/eb044e6e-3a23-40a4-9ce1-f13af148eb67 failed
[File exists]" repeated 2 times between [2018-01-16
15:16:42.593392] and [2018-01-16 15:16:42.593430]
[2018-01-16 15:17:32.229689] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.14
->
/bricks/brick2/gv2a2/.glusterfs/53/04/530449fa-d698-4928-a262-9a0234232323 failed
[File exists]
[2018-01-16 15:17:32.229720] E [MSGID: 113020]
[posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.14
failed
[2018-01-16 15:18:07.154330] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.17
->
/bricks/brick2/gv2a2/.glusterfs/81/96/8196dd19-84bc-4c3d-909f-8792e9b4929d failed
[File exists]
[2018-01-16 15:18:07.154375] E [MSGID: 113020]
[posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.17
failed
The message "W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.14
->
/bricks/brick2/gv2a2/.glusterfs/53/04/530449fa-d698-4928-a262-9a0234232323 failed
[File exists]" repeated 7 times between [2018-01-16
15:17:32.229689] and [2018-01-16 15:17:32.229806]
The message "W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.17
->
/bricks/brick2/gv2a2/.glusterfs/81/96/8196dd19-84bc-4c3d-909f-8792e9b4929d failed
[File exists]" repeated 3 times between [2018-01-16
15:18:07.154330] and [2018-01-16 15:18:07.154357]
[2018-01-16 15:19:23.618794] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.21
->
/bricks/brick2/gv2a2/.glusterfs/6d/02/6d02bd98-83de-43e8-a7af-b1d5f5160403 failed
[File exists]
[2018-01-16 15:19:23.618827] E [MSGID: 113020]
[posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.21
failed
The message "W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.21
->
/bricks/brick2/gv2a2/.glusterfs/6d/02/6d02bd98-83de-43e8-a7af-b1d5f5160403 failed
[File exists]" repeated 3 times between [2018-01-16
15:19:23.618794] and [2018-01-16 15:19:23.618794]
Thank you,
On 16/01/2018 11:40, Krutika Dhananjay wrote:
Also, to help isolate the component, could you answer these (a sketch
of commands for such a test volume follows the list):
1. On a different volume with shard not enabled, do you see
this issue?
2. On a plain 3-way replicated volume (no arbiter), do you see
this issue?
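(For reference, a throwaway plain replica-3 test volume covering both
questions could be created roughly as follows; the brick paths are
illustrative, and note that features.shard is off by default on a
newly created volume:
gluster volume create gvtest replica 3 gluster1:/bricks/test/gvtest \
gluster2:/bricks/test/gvtest gluster3:/bricks/test/gvtest
gluster volume start gvtest
)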
On Tue, Jan 16, 2018 at 4:03 PM, Krutika Dhananjay
<kdhan...@redhat.com> wrote:
Please share the volume-info output and the logs under
/var/log/glusterfs/ from all your nodes for investigating
the issue.
-Krutika
On Tue, Jan 16, 2018 at 1:30 PM, Ing. Luca Lazzeroni -
Trend Servizi Srl <l...@gvnet.it> wrote:
Hi everyone.
I've got a strange problem with a gluster setup: 3
nodes with CentOS 7.4, Gluster 3.12.4 from the
CentOS/Gluster repositories, and QEMU-KVM version 2.9.0
(compiled from RHEL sources).
I'm running volumes in replica 3 arbiter 1 mode (but
I've got a volume in "pure" replica 3 mode too). I've
applied the "virt" group settings to my volumes since
they host VM images.
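(For reference, applying the group settings amounts to:
gluster volume set <volname> group virt
which loads the options defined in /var/lib/glusterd/groups/virt.)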
If I try to install something (e.g., Ubuntu Server
16.04.3) on a VM (and so generate a bit of I/O inside
it) and configure KVM to access the gluster volume directly
(via libvirt), the install fails after a while because the
disk content gets corrupted. If I inspect the blocks
inside the disk (by accessing the image directly from
outside) I find many files filled with "^@" (NUL bytes).
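(For reference, "directly via libvirt" means a network-type disk
definition, roughly like this sketch; the volume and host names match
my setup, the rest is illustrative:
<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source protocol='gluster' name='gv2a2/Test-vda2.qcow2'>
    <host name='gluster1' port='24007'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
)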
Also, what exactly do you mean by accessing the image directly
from outside? Was it from the brick directories directly? Was
it from the mount point of the volume? Could you elaborate?
Which files exactly did you check?
-Krutika
If, instead, I configure KVM to access the VM images via a
FUSE mount, everything seems to work correctly.
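(The FUSE variant is simply a file-backed disk on a glusterfs mount,
e.g.:
mount -t glusterfs gluster1:/gv2a2 /mnt/gv2a2
with the disk <source> pointing at a file under that mount.)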
Note that the install problem shows up 100% of the
time with QCOW2 images, while with RAW disk images it
appears only later.
Has anyone experienced the same problem?
Thank you,
--
Ing. Luca Lazzeroni
Responsabile Ricerca e Sviluppo
Trend Servizi Srl
Tel: 0376/631761
Web: https://www.trendservizi.it
--
Ing. Luca Lazzeroni
Responsabile Ricerca e Sviluppo
Trend Servizi Srl
Tel: 0376/631761
Web: https://www.trendservizi.it
[2018-01-15 09:45:32.643980] I [MSGID: 100030] [glusterfsd.c:2524:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.12.4 (args: /usr/sbin/glusterfs --volfile-server=localhost --volfile-id=/gv2a2 /mnt/gv2a2)
[2018-01-15 09:45:32.646125] W [MSGID: 101002] [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction
[2018-01-15 09:45:32.650513] I [MSGID: 101190] [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2018-01-15 09:45:32.662398] I [MSGID: 101190] [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2018-01-15 09:45:32.662619] W [MSGID: 101174] [graph.c:363:_log_if_unknown_option] 0-gv2a2-readdir-ahead: option 'parallel-readdir' is not recognized
[2018-01-15 09:45:32.662755] I [MSGID: 114020] [client.c:2360:notify] 0-gv2a2-client-0: parent translators are ready, attempting connect on transport
[2018-01-15 09:45:32.664328] I [MSGID: 114020] [client.c:2360:notify] 0-gv2a2-client-1: parent translators are ready, attempting connect on transport
[2018-01-15 09:45:32.664592] I [rpc-clnt.c:1986:rpc_clnt_reconfig] 0-gv2a2-client-0: changing port to 49154 (from 0)
[2018-01-15 09:45:32.665871] I [MSGID: 114020] [client.c:2360:notify] 0-gv2a2-client-2: parent translators are ready, attempting connect on transport
[2018-01-15 09:45:32.667501] I [MSGID: 114057] [client-handshake.c:1478:select_server_supported_programs] 0-gv2a2-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2018-01-15 09:45:32.667768] I [rpc-clnt.c:1986:rpc_clnt_reconfig] 0-gv2a2-client-1: changing port to 49154 (from 0)
[2018-01-15 09:45:32.667802] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-gv2a2-client-0: Connected to gv2a2-client-0, attached to remote volume '/bricks/brick2/gv2a2'.
[2018-01-15 09:45:32.667810] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-gv2a2-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2018-01-15 09:45:32.667841] I [MSGID: 108005] [afr-common.c:4929:__afr_handle_child_up_event] 0-gv2a2-replicate-0: Subvolume 'gv2a2-client-0' came back up; going online.
Final graph:
+------------------------------------------------------------------------------+
1: volume gv2a2-client-0
2: type protocol/client
3: option opversion 31202
4: option clnt-lk-version 1
5: option volfile-checksum 0
6: option volfile-key /gv2a2
7: option client-version 3.12.4
8: option process-uuid ovh-ov1-14594-2018/01/15-09:45:32:643798-gv2a2-client-0-0-0
9: option fops-version 1298437
10: option ping-timeout 42
11: option remote-host gluster1
12: option remote-subvolume /bricks/brick2/gv2a2
13: option transport-type socket
14: option transport.address-family inet
15: option username 8df65510-eb25-4ad2-8df2-67e7391f0b89
16: option password 2143499e-da84-4f3b-990a-62ef5787a5a1
17: option filter-O_DIRECT enable
18: option transport.tcp-user-timeout 0
19: option transport.socket.keepalive-time 20
20: option transport.socket.keepalive-interval 2
21: option transport.socket.keepalive-count 9
22: option send-gids true
23: end-volume
24:
25: volume gv2a2-client-1
26: type protocol/client
27: option ping-timeout 42
28: option remote-host gluster3
29: option remote-subvolume /bricks/brick3/gv2a2
30: option transport-type socket
31: option transport.address-family inet
32: option username 8df65510-eb25-4ad2-8df2-67e7391f0b89
33: option password 2143499e-da84-4f3b-990a-62ef5787a5a1
34: option filter-O_DIRECT enable
35: option transport.tcp-user-timeout 0
36: option transport.socket.keepalive-time 20
37: option transport.socket.keepalive-interval 2
38: option transport.socket.keepalive-count 9
39: option send-gids true
40: end-volume
41:
42: volume gv2a2-client-2
43: type protocol/client
44: option ping-timeout 42
45: option remote-host gluster2
46: option remote-subvolume /bricks/arbiter_brick_gv2a2/gv2a2
47: option transport-type socket
48: option transport.address-family inet
49: option username 8df65510-eb25-4ad2-8df2-67e7391f0b89
50: option password 2143499e-da84-4f3b-990a-62ef5787a5a1
51: option filter-O_DIRECT enable
52: option transport.tcp-user-timeout 0
53: option transport.socket.keepalive-time 20
54: option transport.socket.keepalive-interval 2
55: option transport.socket.keepalive-count 9
56: option send-gids true
57: end-volume
58:
59: volume gv2a2-replicate-0
60: type cluster/replicate
61: option afr-pending-xattr gv2a2-client-0,gv2a2-client-1,gv2a2-client-2
62: option arbiter-count 1
63: option data-self-heal-algorithm full
64: option eager-lock enable
65: option quorum-type auto
66: option shd-max-threads 8
67: option shd-wait-qlength 10000
68: option locking-scheme granular
69: option use-compound-fops off
70: subvolumes gv2a2-client-0 gv2a2-client-1 gv2a2-client-2
71: end-volume
72:
73: volume gv2a2-dht
74: type cluster/distribute
75: option lock-migration off
76: subvolumes gv2a2-replicate-0
77: end-volume
78:
79: volume gv2a2-shard
80: type features/shard
81: subvolumes gv2a2-dht
82: end-volume
83:
84: volume gv2a2-write-behind
85: type performance/write-behind
86: subvolumes gv2a2-shard
87: end-volume
88:
89: volume gv2a2-readdir-ahead
90: type performance/readdir-ahead
91: option parallel-readdir off
92: option rda-request-size 131072
93: option rda-cache-limit 10MB
94: subvolumes gv2a2-write-behind
95: end-volume
96:
97: volume gv2a2-open-behind
98: type performance/open-behind
99: subvolumes gv2a2-readdir-ahead
100: end-volume
101:
102: volume gv2a2-md-cache
103: type performance/md-cache
104: subvolumes gv2a2-open-behind
105: end-volume
106:
107: volume gv2a2
108: type debug/io-stats
109: option log-level INFO
110: option latency-measurement off
111: option count-fop-hits off
112: subvolumes gv2a2-md-cache
113: end-volume
114:
115: volume meta-autoload
116: type meta
117: subvolumes gv2a2
118: end-volume
119:
+------------------------------------------------------------------------------+
[2018-01-15 09:45:32.669239] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-gv2a2-client-0: Server lk version = 1
[2018-01-15 09:45:32.669540] I [rpc-clnt.c:1986:rpc_clnt_reconfig] 0-gv2a2-client-2: changing port to 49154 (from 0)
[2018-01-15 09:45:32.670982] I [MSGID: 114057] [client-handshake.c:1478:select_server_supported_programs] 0-gv2a2-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2018-01-15 09:45:32.671410] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-gv2a2-client-1: Connected to gv2a2-client-1, attached to remote volume '/bricks/brick3/gv2a2'.
[2018-01-15 09:45:32.671420] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-gv2a2-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2018-01-15 09:45:32.671560] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-gv2a2-client-1: Server lk version = 1
[2018-01-15 09:45:32.672456] I [MSGID: 114057] [client-handshake.c:1478:select_server_supported_programs] 0-gv2a2-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2018-01-15 09:45:32.672945] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-gv2a2-client-2: Connected to gv2a2-client-2, attached to remote volume '/bricks/arbiter_brick_gv2a2/gv2a2'.
[2018-01-15 09:45:32.672952] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-gv2a2-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2018-01-15 09:45:32.673078] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-gv2a2-client-2: Server lk version = 1
[2018-01-15 09:45:32.673483] I [fuse-bridge.c:4201:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.26
[2018-01-15 09:45:32.673496] I [fuse-bridge.c:4831:fuse_graph_sync] 0-fuse: switched to graph 0
[2018-01-15 09:45:32.674214] I [MSGID: 108031] [afr-common.c:2376:afr_local_discovery_cbk] 0-gv2a2-replicate-0: selecting local read_child gv2a2-client-0
[2018-01-15 09:45:48.906877] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
[2018-01-16 15:07:58.831791] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
[2018-01-16 15:08:01.706972] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
[2018-01-16 15:34:35.342045] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
[2018-01-16 15:34:37.253822] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
[2018-01-16 16:03:13.435521] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
[2018-01-16 16:03:14.369659] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
[2018-01-16 16:05:56.977964] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
[2018-01-16 16:05:58.216845] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
[2018-01-16 17:49:47.403444] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
The message "I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0" repeated 2 times between [2018-01-16 17:49:47.403444] and [2018-01-16 17:49:49.325307]
[2018-01-16 21:01:28.936463] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
[2018-01-16 21:01:53.567103] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
The message "I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0" repeated 2 times between [2018-01-16 21:01:53.567103] and [2018-01-16 21:01:53.568248]
[2018-01-16 21:08:00.438102] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
The message "I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0" repeated 7 times between [2018-01-16 21:08:00.438102] and [2018-01-16 21:08:37.853072]
[2018-01-16 21:18:14.171077] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
The message "I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv2a2-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0" repeated 5 times between [2018-01-16 21:18:14.171077] and [2018-01-16 21:18:30.542806]
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users