Re: [Gluster-users] NFS-Ganesha Cluster - User rw Mount

2021-11-05 Thread Taste-Of-IT
Hi,

Yes, all files belong to nobody. I access via NFSv4. So I must add a matching UID to the
ganesha.conf file under the export? Do you have an example of how to do this correctly,
and of how to mount from the client side?

thx
Taste
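
For the client side, a rough sketch of the mount and of the ID check Ivan describes
below (the server address "ganesha-server", pseudo path "/gv0", mount point and user
name are placeholders, not values from this thread):

    # NFSv4 mount of the Ganesha export (the Pseudo path from ganesha.conf)
    mount -t nfs -o vers=4,sec=sys ganesha-server:/gv0 /mnt/gv0

    # the user must exist with the same UID/GID on the client ...
    id taste
    # ... and on each NFS-Ganesha node; compare the uid=/gid= values
    ssh ganesha-server id taste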

On 05.11.2021 17:33:03, Ivan Rossi wrote:
> Notice whether all the files appear to belong to "nobody" when you access them as a
> remote user.
> Are you mounting using NFSv4 or NFSv3? With NFSv4 you need to have the same
> UIDs and GIDs on the server too.
> It is an NFSv4 thing.
> 
> On Fri, 5 Nov 2021 at 16:25, Taste-Of-IT
> wrote:
> 
> > Hi,
> > I have installed the latest GlusterFS with an NFS-Ganesha cluster. Write access
> > via root is no problem, but if I want to mount it, e.g. in Linux Mint as a regular
> > user, I can only read. The ganesha.conf export has Access_Type = RW,
> > Squash = No_root_squash, Disable_ACL = true and SecType = "sys".
> >
> > Any idea?
> > thanks
> > Taste
> > 
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9.3 - Replicate Volume (2 Bricks / 1 Arbiter) - Self-healing does not always work

2021-11-05 Thread Thorsten Walk
Hi Guys,

I pushed some VMs to the GlusterFS storage this week and ran them there.
For a maintenance task I moved these VMs to Proxmox-Node-2 and took Node-1
offline for a short time.
After moving them back to Node-1, some stale file remnants were left behind (see
attachment). In the logs I can't find anything about the GFIDs :)


┬[15:36:51] [ssh:root@pve02(192.168.1.51): /home/darkiop (755)]
╰─># gvi

Cluster:
 Status: Healthy GlusterFS: 9.3
 Nodes: 3/3  Volumes: 1/1

Volumes:

glusterfs-1-volume
   Replicate   Started (UP) - 3/3 Bricks Up - (Arbiter Volume)
               Capacity: (17.89% used) 83.00 GiB/466.00 GiB (used/total)
               Self-Heal:
                  192.168.1.51:/data/glusterfs (4 File(s) to heal).
               Bricks:
                  Distribute Group 1:
                     192.168.1.50:/data/glusterfs (Online)
                     192.168.1.51:/data/glusterfs (Online)
                     192.168.1.40:/data/glusterfs (Online)


Brick 192.168.1.50:/data/glusterfs
Status: Connected
Number of entries: 0

Brick 192.168.1.51:/data/glusterfs
Status: Connected
Number of entries: 4

Brick 192.168.1.40:/data/glusterfs
Status: Connected
Number of entries: 0


┬[15:37:03] [ssh:root@pve02(192.168.1.51): /home/darkiop (755)]
╰─># cat /data/glusterfs/.glusterfs/ad/e6/ade6f31c-b80b-457e-a054-6ca1548d9cd3
22962


┬[15:37:13] [ssh:root@pve02(192.168.1.51): /home/darkiop (755)]
╰─># grep -ir 'ade6f31c-b80b-457e-a054-6ca1548d9cd3' /var/log/glusterfs/*.log
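
Since the .glusterfs entries for regular files are hard links to the real files on
the brick, one way to check whether such a GFID still maps to a path on that brick
is something like this (a sketch; the brick path is the one from the output above):

    find /data/glusterfs -samefile \
        /data/glusterfs/.glusterfs/ad/e6/ade6f31c-b80b-457e-a054-6ca1548d9cd3 \
        -not -path '*/.glusterfs/*'
    # prints the real path if one still exists; no output means only the
    # .glusterfs hard link is left behind (a stale entry)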

On Mon, 1 Nov 2021 at 07:51, Thorsten Walk wrote:

> After deleting the file, the output of heal info is clean.
>
> > Not sure why you ended up in this situation (maybe unlink partially
> > failed on this brick?)
>
> Neither am I; this was a completely fresh setup with 1-2 VMs and 1-2
> Proxmox LXC templates. I let it run for a few days and at some point it ended
> up in the mentioned state. I will continue to monitor it and start filling the
> bricks with data.
> Thanks for your help!
>
> On Mon, 1 Nov 2021 at 02:54, Ravishankar N <
> ravishanka...@pavilion.io> wrote:
>
>>
>>
>> On Mon, Nov 1, 2021 at 12:02 AM Thorsten Walk  wrote:
>>
>>> Hi Ravi, the file only exists at pve01 and since only once:
>>>
>>> ┬[19:22:10] [ssh:root@pve01(192.168.1.50): ~ (700)]
>>> ╰─># stat
>>> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>>   File:
>>> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>>   Size: 6   Blocks: 8  IO Block: 4096   regular file
>>> Device: fd12h/64786dInode: 528 Links: 1
>>> Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (0/root)
>>> Access: 2021-10-30 14:34:50.385893588 +0200
>>> Modify: 2021-10-27 00:26:43.988756557 +0200
>>> Change: 2021-10-27 00:26:43.988756557 +0200
>>>  Birth: -
>>>
>>> ┬[19:24:41] [ssh:root@pve01(192.168.1.50): ~ (700)]
>>> ╰─># ls -l
>>> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>> .rw-r--r-- root root 6B 4 days ago 
>>> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>>
>>> ┬[19:24:54] [ssh:root@pve01(192.168.1.50): ~ (700)]
>>> ╰─># cat
>>> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>> 28084
>>>
>> Hi Thorsten, you can delete the file. From the file size and contents,
>> it looks like it belongs to oVirt sanlock. Not sure why you ended up in
>> this situation (maybe unlink partially failed on this brick?). You can
>> check the mount, brick and self-heal daemon logs for this gfid to see if
>> you find related error/warning messages.
>>
>> -Ravi
>>
>
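
Ravi's suggestion to check the mount, brick and self-heal daemon logs for the GFID
can be done in one sweep along these lines (a sketch; log locations assume a default
install under /var/log/glusterfs):

    # fuse-mount and self-heal daemon (glustershd) logs live directly under
    # /var/log/glusterfs, brick logs under /var/log/glusterfs/bricks
    grep -i '26c5396c-86ff-408d-9cda-106acd2b0768' \
        /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log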






[Gluster-users] NFS-Ganesha Cluster - User rw Mount

2021-11-05 Thread Taste-Of-IT
Hi,
I have installed the latest GlusterFS with an NFS-Ganesha cluster. Write access via
root is no problem, but if I want to mount it, e.g. in Linux Mint as a regular user,
I can only read. The ganesha.conf export has Access_Type = RW,
Squash = No_root_squash, Disable_ACL = true and SecType = "sys".
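
For reference, a rough sketch of what such an export block can look like in
ganesha.conf (the export ID, volume name "gv0" and paths are placeholders, not
values from this setup):

    EXPORT {
        Export_Id = 1;                # placeholder export ID
        Path = "/gv0";                # Gluster volume to export (placeholder)
        Pseudo = "/gv0";              # NFSv4 pseudo path that clients mount
        Access_Type = RW;
        Squash = No_root_squash;
        Disable_ACL = true;
        SecType = "sys";
        FSAL {
            Name = GLUSTER;
            Hostname = "localhost";   # placeholder; a node of the Gluster cluster
            Volume = "gv0";           # placeholder volume name
        }
    }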

Any idea?
thanks
Taste






Re: [Gluster-users] Issues with glustershd with release 8.4 and 9.1

2021-11-05 Thread Ville-Pekka Vainio
Hi!

Bumping an old thread, because there’s now activity around this bug. The github 
issue is https://github.com/gluster/glusterfs/issues/2492
We just hit this bug after an update from GlusterFS 7.x to 9.4. We did not see it in 
our test environment, so we went ahead with the update, but the bug is still there. 
Apparently the fix is https://github.com/gluster/glusterfs/pull/2509, which should 
get backported to 9.x.

We worked around this issue by identifying the server affected by the bug and 
restarting the GlusterFS processes on it. On an EL/CentOS/Fedora-based system 
there was one small thing that surprised me; maybe this will help others.

There is a service, /usr/lib/systemd/system/glusterfsd.service, which does not 
really start anything (it just runs /bin/true), but which, when stopped, kills the 
brick processes on the server. If you run “systemctl stop glusterfsd” without 
having started the service first (even though starting it does nothing), 
systemd will not do anything. Only if you first start the service and then stop it 
will systemd actually run the ExecStop command.
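
A sketch of that restart sequence (how the bricks come back afterwards, e.g. by
restarting glusterd as below or with a "gluster volume start <vol> force", may
depend on the setup):

    # ExecStart of glusterfsd.service is /bin/true, so this only marks the unit active
    systemctl start glusterfsd
    # now the ExecStop command actually runs and kills the brick processes on this node
    systemctl stop glusterfsd
    # bring the bricks back up, for example by restarting the management daemon
    systemctl restart glusterd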


Best regards,
Ville-Pekka



