Re: [Gluster-users] How to find out data alignment for LVM thin volume brick

2023-06-06 Thread Strahil Nikolov
Have you checked this page:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/brick_configuration ?
The alignment depends on the HW raid stripe unit size.
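For example (just a sketch, assuming a hypothetical RAID 6 set with 10 data
disks and a 256K stripe unit; the device name is a placeholder):

# full stripe = stripe unit * number of data disks = 256K * 10 = 2560K
pvcreate --dataalignment 2560k /dev/sdX

For a single disk with no hardware RAID there is no stripe to align to, so the
LVM default alignment is usually fine.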
Best Regards,
Strahil Nikolov
 
 
On Tue, Jun 6, 2023 at 2:35, mabi wrote:

Hello,

I am preparing a brick as an LVM thin volume for a test slave node using this
documentation:

https://docs.gluster.org/en/main/Administrator-Guide/formatting-and-mounting-bricks/

but I am confused about the right "--dataalignment" option to use for
pvcreate. The documentation mentions the following under point 1:

"Create a physical volume(PV) by using the pvcreate command. For example:

pvcreate --dataalignment 128K /dev/sdb

Here, /dev/sdb is a storage device. Use the correct dataalignment option based 
on your device.

    Note: The device name and the alignment value will vary based on the device 
you are using."
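One way to see what a device itself reports (a rough sketch, assuming the disk
shows up as /dev/sdb; these are generic Linux tools, not something from the
guide above) is to query the kernel's I/O topology:

lsblk -t /dev/sdb
# minimum and optimal I/O size reported by the device, in bytes
cat /sys/block/sdb/queue/minimum_io_size
cat /sys/block/sdb/queue/optimal_io_size

Many plain SSDs report 0 here, i.e. no particular alignment preference.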

As a test disk for this brick I have an external 500GB USB SSD, a Samsung
Portable SSD T7 (https://semiconductor.samsung.com/consumer-storage/portable-ssd/t7/),
but my question is: where do I find the information on which alignment value I
need to use for this specific disk?

Best regards,
Mabi 




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Question mark in permission and Owner

2023-06-06 Thread Strahil Nikolov
Usually when you see '?' for user, group, date - it's a split brain
situation (could be a gfid split brain) and Gluster can't decide which copy is
bad.
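A quick way to confirm (a sketch; replace <volname> with the real volume name):

gluster volume heal <volname> info split-brain
# or the broader list of entries pending heal:
gluster volume heal <volname> info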
Best Regards,
Strahil Nikolov
 
 
On Mon, Jun 5, 2023 at 23:30, Diego Zuccato wrote:

I've seen something similar when the FUSE client died, but it marked the whole
mountpoint, not just some files.
Might it be a desync or a communication loss between the nodes?
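A few things worth checking from each node (a sketch; <volname> is a
placeholder):

# are all peers connected?
gluster peer status
# are all bricks and self-heal daemons online?
gluster volume status <volname>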

Diego

On 05/06/2023 11:23, Stefan Kania wrote:
> Hello,
> 
> I have a strange problem on a gluster volume
> 
> If I do an "ls -l" in a directory inside a mounted gluster volume I
> see, only for some files, question marks for the permissions, the owner,
> the size and the date.
> Looking at the same directory on the brick itself, everything is OK.
> After rebooting the nodes everything is back to normal.
> 
> System is Debian 11 and Gluster is version 9. The filesystem is LVM2
> thin provisioned and formatted with XFS.
> 
> But as I said, the brick is OK; only the mounted volume is having the
> problem.
> 
> Any hint what it could be?
> 
> Thanks
> 
> Stefan
> 

-- 
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Geo replication procedure for DR

2023-06-06 Thread Strahil Nikolov
It's just a setting on the target volume:
gluster volume set <volname> read-only off
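A rough sketch of the promotion step (placeholders in angle brackets; check the
failover/failback section of the geo-replication docs before relying on this):

# stop the geo-replication session, if the primary is still reachable
gluster volume geo-replication <primary_vol> <secondary_host>::<secondary_vol> stop
# make the secondary volume writable
gluster volume set <secondary_vol> read-only off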
Best Regards,
Strahil Nikolov
 
 
On Mon, Jun 5, 2023 at 22:30, mabi wrote:

Hello,

I was reading the geo replication documentation here:

https://docs.gluster.org/en/main/Administrator-Guide/Geo-Replication/

and I was wondering how it works in case of disaster recovery, when the
primary cluster is down and the secondary site with the volume needs to be
used?

What is the procedure here to make the secondary volume on the secondary site 
available for read/write?

And once the primary site is back online how do you copy back or sync all data 
changes done on the secondary volume on the secondary site back to the primary 
volume on the primary site?

Best regards,
Mabi




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Using glusterfs for virtual machines with qcow2 images

2023-06-06 Thread Strahil Nikolov
Hi Chris,
Here is a link to the settings needed for VM storage:
https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4
You can also ask in ovirt-users for real-world settings. Test well before
changing production!!!
IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!!
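As a sketch (assuming the volume is called gfs_vms as in your status output,
and that your packages ship group-virt.example as the 'virt' group under
/var/lib/glusterd/groups/), the whole profile can be applied in one go:

gluster volume set gfs_vms group virt
# check which options were applied
gluster volume info gfs_vms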
Best Regards,
Strahil Nikolov
 
On Mon, Jun 5, 2023 at 13:55, Christian Schoepplein wrote:

Hi,

we'd like to use glusterfs for Proxmox and virtual machines with qcow2
disk images. We have a three-node glusterfs setup with one volume, Proxmox is
attached and VMs are created, but after some time, and I think after much I/O
is going on in a VM, the data inside the virtual machine gets corrupted. When
I copy files from or to our glusterfs volume directly everything is OK; I've
checked the files with md5sum. So in general our glusterfs setup seems to be
OK, I think, but with the VMs and their self-growing qcow2 images there are
problems. If I use raw images for the VMs the tests look better, but I need to
do more testing to be sure; the problem is a bit hard to reproduce :-(.

I've also asked on a Proxmox mailing list, but got no helpful response so
far :-(. So maybe you have a hint as to what might be wrong with our setup and
what needs to be configured to use glusterfs as a storage backend for virtual
machines with self-growing disk images. Any helpful tip would be great,
because I am absolutely no glusterfs expert and also not an expert in
virtualization and what has to be done to let all the components play well
together... Thanks for your support!

Here is some info about our glusterfs setup; please let me know if you need
more. We are using Ubuntu 22.04 as the operating system:

root@gluster1:~# gluster --version
glusterfs 10.1
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
root@gluster1:~#

root@gluster1:~# gluster v status gfs_vms

Status of volume: gfs_vms
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1.linova.de:/glusterfs/sde1enc
/brick                                      58448    0          Y      1062218
Brick gluster2.linova.de:/glusterfs/sdc1enc
/brick                                      50254    0          Y      20596
Brick gluster3.linova.de:/glusterfs/sdc1enc
/brick                                      52840    0          Y      1627513
Brick gluster1.linova.de:/glusterfs/sdf1enc
/brick                                      49832    0          Y      1062227
Brick gluster2.linova.de:/glusterfs/sdd1enc
/brick                                      56095    0          Y      20612
Brick gluster3.linova.de:/glusterfs/sdd1enc
/brick                                      51252    0          Y      1627521
Brick gluster1.linova.de:/glusterfs/sdg1enc
/brick                                      54991    0          Y      1062230
Brick gluster2.linova.de:/glusterfs/sde1enc
/brick                                      60812    0          Y      20628
Brick gluster3.linova.de:/glusterfs/sde1enc
/brick                                      59254    0          Y      1627522
Self-heal Daemon on localhost              N/A      N/A        Y      1062249
Bitrot Daemon on localhost                  N/A      N/A        Y      3591335
Scrubber Daemon on localhost                N/A      N/A        Y      3591346
Self-heal Daemon on gluster2.linova.de      N/A      N/A        Y      20645
Bitrot Daemon on gluster2.linova.de        N/A      N/A        Y      987517
Scrubber Daemon on gluster2.linova.de      N/A      N/A        Y      987588
Self-heal Daemon on gluster3.linova.de      N/A      N/A        Y      1627568
Bitrot Daemon on gluster3.linova.de        N/A      N/A        Y      1627543
Scrubber Daemon on gluster3.linova.de      N/A      N/A        Y      1627554
 
Task Status of Volume gfs_vms
------------------------------------------------------------------------------
There are no active volume tasks
 
root@gluster1:~#

root@gluster1:~# gluster v status gfs_vms detail

Status of volume: gfs_vms
------------------------------------------------------------------------------
Brick                : Brick gluster1.linova.de:/glusterfs/sde1enc/brick
TCP Port            : 58448              
RDMA Port            : 0                  
Online              : Y                  
Pid                  : 1062218            
File System          : xfs                
Device              : /dev/mapper/sde1enc 
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode