Re: [Gluster-users] To RAID or not to RAID...

2020-01-14 Thread Hu Bert
Hi, our old setup is not really comparable, but I thought I'd drop some lines... We once had a Distributed-Replicate setup with 4 x 3 = 12 disks (10 TB HDDs), simple JBOD, every disk == brick. It was running pretty well until one of the disks died. The restore (reset-brick) took about a month,
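For context, the reset-brick flow mentioned above looks roughly like this — a minimal sketch with hypothetical volume, host, and brick-path names, assuming the replacement disk is remounted at the same path:

```shell
# Take the failed brick out of service (hypothetical names throughout):
gluster volume reset-brick myvol node1:/bricks/disk1/brick start
# ... physically replace the disk, mkfs, remount at the same path ...
# Re-register the brick; self-heal then repopulates it from the replicas:
gluster volume reset-brick myvol node1:/bricks/disk1/brick \
        node1:/bricks/disk1/brick commit force
# Watch the (potentially very long) heal progress:
gluster volume heal myvol info summary
```

With 10 TB bricks the heal traffic is what makes this take weeks, as described above.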

Re: [Gluster-users] healing of big files

2020-01-14 Thread Strahil
Warning! Do not disable sharding in any case from now on! This is very important, or you might get into real trouble (even data loss). If you need to disable it - create a new volume without sharding and copy the data over -> storage migration. Best Regards, Strahil Nikolov On Jan 14, 2020 21:20,
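The storage-migration route suggested here can be sketched as follows — hypothetical volume names, hosts, and mount points; the point is to copy at the FUSE-mount level rather than toggling features.shard on a populated volume:

```shell
# Create and start a fresh volume without sharding (hypothetical names):
gluster volume create newvol replica 3 \
        node1:/bricks/newvol/brick node2:/bricks/newvol/brick node3:/bricks/newvol/brick
gluster volume start newvol
# Mount it alongside the old volume's mount:
mount -t glusterfs node1:/newvol /mnt/newvol
# Copy through the mounts, preserving attributes and hardlinks:
rsync -aHAX /mnt/oldvol/ /mnt/newvol/
```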

Re: [Gluster-users] To RAID or not to RAID...

2020-01-14 Thread Strahil
Hi Markus, You are right. I think that the 3 node setup matches a distributed volume. According to https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/ dispersed volumes use erasure codes to keep 'parity' on a separate brick. In such a case you can afford to
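The capacity trade-off of a dispersed (data+redundancy) layout is simple arithmetic — a back-of-envelope sketch with assumed example numbers (a 4+2 config with 10 TB bricks):

```shell
# data bricks hold content, redundancy bricks hold erasure-coded parity;
# usable space is brick_size * data_bricks (example numbers are assumed):
data=4; redundancy=2; brick_tb=10
raw=$((brick_tb * (data + redundancy)))
usable=$((brick_tb * data))
echo "raw=${raw}TB usable=${usable}TB"
# prints: raw=60TB usable=40TB
```

So a 4+2 volume survives the loss of any 2 bricks while keeping two thirds of the raw space usable.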

Re: [Gluster-users] remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0

2020-01-14 Thread Artem Russakovskii
Hi, Any updates here please? Sincerely, Artem -- Founder, Android Police, APK Mirror, Illogical Robot LLC beerpla.net | @ArtemR On Mon, Jan 6, 2020 at 4:18 PM Artem Russakovskii wrote: > Thanks Amar, > >

Re: [Gluster-users] healing of big files

2020-01-14 Thread vlad f halilov
On 14.01.2020 18:21, Strahil wrote: Are you using the 'virt' group to apply all the optimal settings for virtualization? The group's settings are in /var/lib/glusterd/groups on each node. Yes, with sharding enabled and these options I have fully utilized channel bandwidth and transfer just

Re: [Gluster-users] To RAID or not to RAID...

2020-01-14 Thread Markus Kern
Hi Strahil, thanks for your answer - but now I am completely lost :) From this documentation: https://docs.oracle.com/cd/E52668_01/F10040/html/gluster-312-volume-distr-disp.html "As a dispersed volume must have a minimum of three bricks, a distributed dispersed volume must have at least six
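To make the six-brick minimum concrete: with six bricks and a disperse count of 3, gluster builds two 2+1 subvolumes and distributes files across them. A minimal sketch with hypothetical host and brick names:

```shell
# Six bricks, disperse 3 redundancy 1 -> a distributed dispersed volume
# made of two (2+1) subvolumes (hypothetical names throughout):
gluster volume create dispvol disperse 3 redundancy 1 \
        node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 \
        node1:/bricks/b2 node2:/bricks/b2 node3:/bricks/b2
gluster volume start dispvol
gluster volume info dispvol
```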

Re: [Gluster-users] healing of big files

2020-01-14 Thread Strahil
Are you using the 'virt' group to apply all the optimal settings for virtualization? The group's settings are in /var/lib/glusterd/groups on each node. Best Regards, Strahil Nikolov. On Jan 14, 2020 10:57, vlad f halilov wrote: > Hi there. I'm trying to deploy a replicated 2+1 cluster with
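Applying an option group is a one-liner — a sketch with a hypothetical volume name:

```shell
# Inspect which options the 'virt' group would set:
cat /var/lib/glusterd/groups/virt
# Apply the whole group to a volume in one shot (volume name is hypothetical):
gluster volume set myvol group virt
```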

[Gluster-users] To RAID or not to RAID...

2020-01-14 Thread Markus Kern
Greetings again! After reading the Red Hat documentation on optimizing Gluster storage, another question comes to my mind: Let's presume that I want to go the distributed dispersed volume way. Three nodes with two bricks each. According to Red Hat's recommendation, I should use RAID6 as

[Gluster-users] Compiling Gluster RPMs for v5.x on Suse SLES15

2020-01-14 Thread David Spisla
Dear Gluster Community, I want to compile my own Gluster RPMs for v5.x on a SUSE SLES15 machine. I am using the spec file from here: https://github.com/gluster/glusterfs-suse/blob/sles15-glusterfs-5/glusterfs.spec There is a build requirement, 'rpcgen', which confuses me. I had a chat
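A minimal sketch of the build flow on SLES15, assuming the spec from that branch and that the matching source tarball has been placed in the rpmbuild tree (package names and paths are assumptions):

```shell
# Install build tooling, including the 'rpcgen' build requirement:
zypper install rpm-build rpcgen
# Fetch the spec from the SLES15 branch:
git clone -b sles15-glusterfs-5 https://github.com/gluster/glusterfs-suse.git
cd glusterfs-suse
# Place the matching glusterfs source tarball in ~/rpmbuild/SOURCES, then:
rpmbuild -ba glusterfs.spec
```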

Re: [Gluster-users] healing of big files

2020-01-14 Thread Michael Böhm
On Tue, Jan 14, 2020 at 10:38 AM, vlad f halilov wrote: > Hi there. I'm trying to deploy a replicated 2+1 cluster with gluster 5.5.1 > (debian 10) as a vm image storage for libvirt, but have an issue with > healing a disconnected node. Example: > ... > > 4) After node5 has reconnected, healing

Re: [Gluster-users] Expanding distributed dispersed volumes

2020-01-14 Thread Ashish Pandey
Yes, you are right. You have to add 2 bricks on each server to expand the storage capacity of this volume. I am assuming the volume config is 4+2. --- Ashish - Original Message - From: "Markus Kern" To: gluster-users@gluster.org Sent: Tuesday, January 14, 2020 2:26:55 PM Subject:
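In other words, a 4+2 dispersed volume grows in whole subvolumes of 6 bricks — two per server on a three-server setup. A sketch with hypothetical volume, host, and brick names:

```shell
# Add a full 6-brick (4+2) subvolume in one command (hypothetical names):
gluster volume add-brick myvol \
        node1:/bricks/b3 node2:/bricks/b3 node3:/bricks/b3 \
        node1:/bricks/b4 node2:/bricks/b4 node3:/bricks/b4
# Spread existing data onto the new subvolume:
gluster volume rebalance myvol start
gluster volume rebalance myvol status
```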

[Gluster-users] healing of big files

2020-01-14 Thread vlad f halilov
Hi there. I'm trying to deploy a replicated 2+1 cluster with gluster 5.5.1 (debian 10) as a vm image storage for libvirt, but have an issue with healing a disconnected node. Example: 1) node4 + node5 - storage nodes with one brick each, node2 - arbiter. 2) Creating a volume with defaults, and making a file
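For a setup like this, the heal state after a node reconnects can be inspected and nudged from the CLI — a sketch with a hypothetical volume name:

```shell
# List entries still pending heal after the node returns (name is hypothetical):
gluster volume heal myvol info
# Optionally trigger a full sweep instead of waiting for the heal daemon:
gluster volume heal myvol full
# Per-brick count of files awaiting heal:
gluster volume heal myvol statistics heal-count
```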

Re: [Gluster-users] Gluster Periodic Brick Process Deaths

2020-01-14 Thread Nico van Royen
Hi, I am a bit surprised by this response. Quite recently we created a Red Hat support case about the same issue (brick processes crashing when scanned), and Red Hat's response was simply that the "solution" was to not scan the bricks and that this issue will not be resolved (RH support case

[Gluster-users] Expanding distributed dispersed volumes

2020-01-14 Thread Markus Kern
Greetings! I am unsure whether I understood that volume type properly - or, to be more specific, I am unsure how to increase the available storage there properly. Say I have three servers with two bricks on each server, providing one volume. Free space is getting low in that volume and I want to expand it.