Hi Guys,
I pushed some VMs to the GlusterFS storage this week and ran them there.
For a maintenance task, I moved these VMs to Proxmox-Node-2 and took Node-1
offline for a short time.
After moving them back to Node-1, some orphaned files were left behind (see
attachment). I can't find anything about these gfids in the logs :)
┬[15:36:51] [ssh:root@pve02(192.168.1.51): /home/darkiop (755)]
╰─># gvi
Cluster:
Status: Healthy GlusterFS: 9.3
Nodes: 3/3 Volumes: 1/1
Volumes:
  glusterfs-1-volume
    Replicate  Started (UP) - 3/3 Bricks Up - (Arbiter Volume)
    Capacity: (17.89% used) 83.00 GiB/466.00 GiB (used/total)
    Self-Heal:
      192.168.1.51:/data/glusterfs (4 File(s) to heal).
    Bricks:
      Distribute Group 1:
        192.168.1.50:/data/glusterfs (Online)
        192.168.1.51:/data/glusterfs (Online)
        192.168.1.40:/data/glusterfs (Online)
Brick 192.168.1.50:/data/glusterfs
Status: Connected
Number of entries: 0
Brick 192.168.1.51:/data/glusterfs
Status: Connected
Number of entries: 4
Brick 192.168.1.40:/data/glusterfs
Status: Connected
Number of entries: 0
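(For reference: gvi appears to be a local alias/wrapper, that is my assumption;
the same state can be read with the stock GlusterFS CLI, using the volume name
from the output above:

  gluster volume status glusterfs-1-volume              # brick and process status
  gluster volume heal glusterfs-1-volume info           # per-brick list of entries pending heal
  gluster volume heal glusterfs-1-volume info summary   # condensed per-brick counts

The "Number of entries" blocks above correspond to the heal info output.)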
┬[15:37:03] [ssh:root@pve02(192.168.1.51): /home/darkiop (755)]
╰─># cat /data/glusterfs/.glusterfs/ad/e6/ade6f31c-b80b-457e-a054-6ca1548d9cd3
22962
┬[15:37:13] [ssh:root@pve02(192.168.1.51): /home/darkiop (755)]
╰─># grep -ir 'ade6f31c-b80b-457e-a054-6ca1548d9cd3' /var/log/glusterfs/*.log
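In case it helps, a minimal sketch of how such a leftover gfid can be traced
back to a file name, assuming it is run on the brick that reports the entry
(paths taken from the transcript above):

  # the .glusterfs entry of a regular file is a hardlink to the real file on
  # the brick, so this prints the original path if one still exists
  find /data/glusterfs -samefile \
    /data/glusterfs/.glusterfs/ad/e6/ade6f31c-b80b-457e-a054-6ca1548d9cd3 \
    -not -path '*/.glusterfs/*'

  # dump the xattrs of the gfid file; with storage.gfid2path enabled they also
  # record the parent gfid and basename of the original file
  getfattr -d -m . -e hex \
    /data/glusterfs/.glusterfs/ad/e6/ade6f31c-b80b-457e-a054-6ca1548d9cd3

If find returns nothing and the link count is 1, only the .glusterfs entry is
left, i.e. the named file was already unlinked.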
On Mon, Nov 1, 2021 at 07:51 Thorsten Walk wrote:
> After deleting the file, output of heal info is clear.
>
> > Not sure why you ended up in this situation (maybe unlink partially
> > failed on this brick?)
>
> Me neither; this was a completely fresh setup with 1-2 VMs and 1-2 Proxmox
> LXC templates. I let it run for a few days and at some point it ended up in
> the state mentioned. I will continue to monitor and start filling the bricks
> with data.
> Thanks for your help!
>
> On Mon, Nov 1, 2021 at 02:54 Ravishankar N <ravishanka...@pavilion.io> wrote:
>
>>
>>
>> On Mon, Nov 1, 2021 at 12:02 AM Thorsten Walk wrote:
>>
>>> Hi Ravi, the file only exists on pve01, and only once:
>>>
>>> ┬[19:22:10] [ssh:root@pve01(192.168.1.50): ~ (700)]
>>> ╰─># stat /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>>   File: /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>>   Size: 6          Blocks: 8          IO Block: 4096   regular file
>>> Device: fd12h/64786d    Inode: 528    Links: 1
>>> Access: (0644/-rw-r--r--)  Uid: (0/root)  Gid: (0/root)
>>> Access: 2021-10-30 14:34:50.385893588 +0200
>>> Modify: 2021-10-27 00:26:43.988756557 +0200
>>> Change: 2021-10-27 00:26:43.988756557 +0200
>>> Birth: -
>>>
>>> ┬[19:24:41] [ssh:root@pve01(192.168.1.50): ~ (700)]
>>> ╰─># ls -l /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>> .rw-r--r-- root root 6B 4 days ago /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>>
>>> ┬[19:24:54] [ssh:root@pve01(192.168.1.50): ~ (700)]
>>> ╰─># cat /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>> 28084
>>>
>> Hi Thorsten, you can delete the file. From the file size and contents,
>> it looks like it belongs to ovirt sanlock. Not sure why you ended up in
>> this situation (maybe unlink partially failed on this brick?). You can
>> check the mount, brick and self-heal daemon logs for this gfid to see if
>> you find related error/warning messages.
>>
>> -Ravi
>>
>
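For completeness, the cleanup Thorsten describes above ("After deleting the
file, output of heal info is clear") would look roughly like this; a sketch
only, with the gfid path and volume name taken from this thread, run on the
brick that holds the orphaned entry (pve01 here):

  rm /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
  gluster volume heal glusterfs-1-volume info   # should report 0 entries again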
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users