Hi,

Thank you for reporting this. It appears to be a problem with 1xn volumes (a single DHT subvolume), and I could reproduce it with a single-brick pure distribute volume. I have filed a BZ for this [1] and posted a patch.
The messages do not indicate a problem and can be ignored.

Regards,
Nithya

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1537457

On 23 January 2018 at 11:16, Nithya Balachandran <[email protected]> wrote:
>
> On 17 January 2018 at 16:04, Ing. Luca Lazzeroni - Trend Servizi Srl <[email protected]> wrote:
>
>> Here's the volume info:
>>
>> Volume Name: gv2a2
>> Type: Replicate
>> Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster1:/bricks/brick2/gv2a2
>> Brick2: gluster3:/bricks/brick3/gv2a2
>> Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)
>> Options Reconfigured:
>> storage.owner-gid: 107
>> storage.owner-uid: 107
>> user.cifs: off
>> features.shard: on
>> cluster.shd-wait-qlength: 10000
>> cluster.shd-max-threads: 8
>> cluster.locking-scheme: granular
>> cluster.data-self-heal-algorithm: full
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> cluster.eager-lock: enable
>> network.remote-dio: enable
>> performance.low-prio-threads: 32
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> transport.address-family: inet
>> nfs.disable: off
>> performance.client-io-threads: off
>>
>> The only client I'm using is the FUSE client, which mounts the gluster volume. On the gluster volume there is a maximum of 15 files ("big" files, because they host VM images) accessed by qemu-kvm as normal files on the FUSE-mounted volume.
>>
>> By inspecting the code, I've found that the message is logged in 2 situations:
>>
>> 1) A real "hole" in the DHT layout
>>
>> 2) A "virgin" file being created
>>
>> I think this is the second situation, because that message appears only when I create a new qcow2 volume to host a VM image.
>>
> These messages should ideally be seen only for directories, and I have never seen them with a null gfid so far.
> I'll try to reproduce this and get back to you.
>
> Regards,
> Nithya
>
>> On 17/01/2018 04:54, Nithya Balachandran wrote:
>>
>> Hi,
>>
>> On 16 January 2018 at 18:56, Ing. Luca Lazzeroni - Trend Servizi Srl <[email protected]> wrote:
>>
>>> Hi,
>>>
>>> I'm testing gluster 3.12.4 and, by inspecting the log file /var/log/glusterfs/mnt-gv0.log (gv0 is the volume name), I found many lines saying:
>>>
>>> [2018-01-15 09:45:41.066914] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv0-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
>>> [2018-01-15 09:45:45.755021] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv0-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
>>> [2018-01-15 14:02:29.171437] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv0-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
>>>
>>> What do they mean? Is there any real problem?
>>>
>> Please provide the following details:
>> - gluster volume info
>> - which clients you are using and what operations are being performed
>> - any steps to reproduce this issue
>>
>> Thanks,
>> Nithya
>>
>>> Thank you,
>>>
>>> --
>>> Ing. Luca Lazzeroni
>>> Head of Research and Development (Responsabile Ricerca e Sviluppo)
>>> Trend Servizi Srl
>>> Tel: 0376/631761
>>> Web: https://www.trendservizi.it
_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users
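[Editor's note on the "Holes=1 overlaps=0" figures discussed above: DHT assigns each subvolume a range of the 32-bit hash space, and the normalize step checks that the ranges tile the space with no gaps ("holes") and no "overlaps". A freshly created ("virgin") file has no layout assigned yet, so the entire space reads as one hole. The following is a simplified Python sketch of that idea, not the actual GlusterFS C code; the function name `count_anomalies` and the empty-range convention are illustrative assumptions.]

```python
# Simplified sketch of a DHT-style layout check over the 32-bit hash space.
# Not GlusterFS source; for illustration only.

HASH_MAX = 0xFFFFFFFF

def count_anomalies(ranges):
    """ranges: list of (start, stop) hash ranges, one per subvolume.
    By convention here, a (0, 0) range means "no layout assigned yet"."""
    # Drop unassigned ranges, as a freshly created file would have.
    assigned = sorted(r for r in ranges if r != (0, 0))
    holes = overlaps = 0
    expected = 0  # next hash value that should be covered
    for start, stop in assigned:
        if start > expected:
            holes += 1       # gap in coverage
        elif start < expected:
            overlaps += 1    # ranges double-cover part of the space
        expected = max(expected, stop + 1)
    if expected <= HASH_MAX:
        holes += 1           # coverage stops short of the end
    return holes, overlaps

# One subvolume covering the whole space: no anomalies.
print(count_anomalies([(0, HASH_MAX)]))  # (0, 0)

# A "virgin" file with no layout yet: the whole space is one hole,
# matching the "Holes=1 overlaps=0" log line.
print(count_anomalies([(0, 0)]))         # (1, 0)
```

With a 1xn (single DHT subvolume) setup, the only assigned range either covers everything or does not exist yet, which is why the message appears exactly at file creation and is harmless.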
