[Gluster-users] MPIO for Single Client?

2019-08-23 Thread Matthew Evans
Does Gluster support any sort of MPIO-like functionality for a single client? 
Maybe when using a Distributed Stripe? Or is a single client always limited to 
the throughput of a single node for each session?
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Brick Reboot => VMs slowdown, client crashes

2019-08-23 Thread Carl Sirotic
-- Forwarded message --
From: Carl Sirotic
Date: Aug. 23, 2019, 7:00 p.m.
Subject: Re: [Gluster-users] Brick Reboot => VMs slowdown, client crashes
To: Joe Julian
Cc:
CONFIDENTIALITY NOTICE: This email may contain information that is privileged and confidential. Please delete immediately if you are not the intended recipient.
(The forwarded message body was sent as HTML and is not preserved in the archive.)
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Brick Reboot => VMs slowdown, client crashes

2019-08-23 Thread Carl Sirotic

Okay,

so that means what I am seeing is not the expected behavior, and there 
is hope.

I applied the quorum settings that were suggested a couple of emails ago.

After applying the virt group, they are:

cluster.quorum-type auto
cluster.quorum-count (null)
cluster.server-quorum-type server
cluster.server-quorum-ratio 0
cluster.quorum-reads no

Also, I have now set the ping timeout to 5 seconds.
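
(For reference, these values can be inspected and changed from the gluster CLI; a minimal sketch, with "vmstore" standing in for the actual volume name:)

# show the current quorum and ping-timeout settings for the volume
gluster volume get vmstore all | grep -E 'quorum|ping-timeout'
# shorten the ping timeout
gluster volume set vmstore network.ping-timeout 5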


Carl

On 2019-08-23 5:45 p.m., Ingo Fischer wrote:

Hi Carl,

In my understanding and experience (I have a replica 3 system running 
too), this should not happen. Can you share your client and server 
quorum settings?


Ingo

On 23.08.2019 at 15:53, Carl Sirotic <csiro...@evoqarchitecture.com> wrote:



However,

I must have misunderstood the whole concept of gluster.

In a replica 3, for me, it's completely unacceptable, regardless of 
the options, that all my VMs go down when I reboot one node.


The whole purpose of keeping a full three copies of my data in sync on 
the fly is supposed to be exactly this.


I am in the process of sharding every file.

But even if the healing time were longer, I would still expect a 
non-sharded replica 3 volume holding VM boot disks not to go down when 
I reboot one of its copies.



I am not very impressed by gluster so far.

Carl

On 2019-08-19 4:15 p.m., Darrell Budic wrote:
/var/lib/glusterd/groups/virt is a good start for ideas, notably 
some thread settings and choose-local=off to improve read 
performance. If you don’t have at least 10 cores on your servers, 
you may want to lower the recommended shd-max-threads=8 to no more 
than half your CPU cores to keep healing from swamping out regular 
work.
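
(As a concrete sketch, with "vmstore" standing in for the volume name and assuming 8-core servers:)

# apply the virt group defaults (these include choose-local=off)
gluster volume set vmstore group virt
# cap self-heal threads at roughly half the CPU core count (here: 8 cores -> 4 threads)
gluster volume set vmstore cluster.shd-max-threads 4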


It’s also starting to depend on what your backing store and 
networking setup are, so you’re going to want to test changes and 
find what works best for your setup.


In addition to the virt group settings, I use these on most of my 
volumes, SSD or HDD backed, with the default 64M shard size:


performance.io-thread-count: 32          # seemed good for my system, particularly a ZFS backed volume with lots of spindles
client.event-threads: 8
cluster.data-self-heal-algorithm: full   # 10G networking, uses more net/less cpu to heal. probably don't use this for 1G networking?
performance.stat-prefetch: on
cluster.read-hash-mode: 3                # distribute reads to the least loaded server (by read queue depth)


and these two only on my HDD backed volume:

performance.cache-size: 1G
performance.write-behind-window-size: 64MB

but I suspect these two need another round or six of tuning to tell 
if they are making a difference.
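
(For reference, the options above map onto plain "gluster volume set" calls; a sketch with "vmstore" standing in for the volume name:)

gluster volume set vmstore performance.io-thread-count 32
gluster volume set vmstore client.event-threads 8
gluster volume set vmstore cluster.data-self-heal-algorithm full
gluster volume set vmstore performance.stat-prefetch on
gluster volume set vmstore cluster.read-hash-mode 3
# HDD-backed volumes only:
gluster volume set vmstore performance.cache-size 1GB
gluster volume set vmstore performance.write-behind-window-size 64MB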


I use the throughput-performance tuned profile on my servers, so you 
should be in good shape there.


On Aug 19, 2019, at 12:22 PM, Guy Boisvert wrote:


On 2019-08-19 12:08 p.m., Darrell Budic wrote:
You also need to make sure your volume is set up properly for best 
performance. Did you apply the gluster virt group to your volumes, 
or at least features.shard = on on your VM volume?


That's what we did here:


gluster volume set W2K16_Rhenium cluster.quorum-type auto
gluster volume set W2K16_Rhenium network.ping-timeout 10
gluster volume set W2K16_Rhenium auth.allow \*
gluster volume set W2K16_Rhenium group virt
gluster volume set W2K16_Rhenium storage.owner-uid 36
gluster volume set W2K16_Rhenium storage.owner-gid 36
gluster volume set W2K16_Rhenium features.shard on
gluster volume set W2K16_Rhenium features.shard-block-size 256MB
gluster volume set W2K16_Rhenium cluster.data-self-heal-algorithm full
gluster volume set W2K16_Rhenium performance.low-prio-threads 32

tuned-adm profile random-io        (a profile I added on CentOS 7)


cat /usr/lib/tuned/random-io/tuned.conf
===
[main]
summary=Optimize for Gluster virtual machine storage
include=throughput-performance

[sysctl]

vm.dirty_ratio = 5
vm.dirty_background_ratio = 2
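
(To reproduce this, the profile is just a directory containing that tuned.conf; roughly:)

mkdir -p /usr/lib/tuned/random-io
# save the tuned.conf shown above as /usr/lib/tuned/random-io/tuned.conf
tuned-adm profile random-io
tuned-adm active    # confirm the new profile is active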


Any more optimization to add to this?


Guy

--
Guy Boisvert, ing.
IngTegration inc.
http://www.ingtegration.com
https://www.linkedin.com/in/guy-boisvert-8990487


Re: [Gluster-users] Brick Reboot => VMs slowdown, client crashes

2019-08-23 Thread Carl Sirotic

However,

I must have misunderstood the whole concept of gluster.

In a replica 3, for me, it's completely unacceptable, regardless of the 
options, that all my VMs go down when I reboot one node.


The whole purpose of keeping a full three copies of my data in sync on 
the fly is supposed to be exactly this.


I am in the process of sharding every file.

But even if the healing time were longer, I would still expect a 
non-sharded replica 3 volume holding VM boot disks not to go down when 
I reboot one of its copies.



I am not very impressed by gluster so far.

Carl

On 2019-08-19 4:15 p.m., Darrell Budic wrote:
/var/lib/glusterd/groups/virt is a good start for ideas, notably some 
thread settings and choose-local=off to improve read performance. If 
you don’t have at least 10 cores on your servers, you may want to 
lower the recommended shd-max-threads=8 to no more than half your CPU 
cores to keep healing from swamping out regular work.


It’s also starting to depend on what your backing store and networking 
setup are, so you’re going to want to test changes and find what works 
best for your setup.


In addition to the virt group settings, I use these on most of my 
volumes, SSD or HDD backed, with the default 64M shard size:


performance.io-thread-count: 32          # seemed good for my system, particularly a ZFS backed volume with lots of spindles
client.event-threads: 8
cluster.data-self-heal-algorithm: full   # 10G networking, uses more net/less cpu to heal. probably don't use this for 1G networking?
performance.stat-prefetch: on
cluster.read-hash-mode: 3                # distribute reads to the least loaded server (by read queue depth)


and these two only on my HDD backed volume:

performance.cache-size: 1G
performance.write-behind-window-size: 64MB

but I suspect these two need another round or six of tuning to tell if 
they are making a difference.


I use the throughput-performance tuned profile on my servers, so you 
should be in good shape there.


On Aug 19, 2019, at 12:22 PM, Guy Boisvert wrote:


On 2019-08-19 12:08 p.m., Darrell Budic wrote:
You also need to make sure your volume is set up properly for best 
performance. Did you apply the gluster virt group to your volumes, 
or at least features.shard = on on your VM volume?


That's what we did here:


gluster volume set W2K16_Rhenium cluster.quorum-type auto
gluster volume set W2K16_Rhenium network.ping-timeout 10
gluster volume set W2K16_Rhenium auth.allow \*
gluster volume set W2K16_Rhenium group virt
gluster volume set W2K16_Rhenium storage.owner-uid 36
gluster volume set W2K16_Rhenium storage.owner-gid 36
gluster volume set W2K16_Rhenium features.shard on
gluster volume set W2K16_Rhenium features.shard-block-size 256MB
gluster volume set W2K16_Rhenium cluster.data-self-heal-algorithm full
gluster volume set W2K16_Rhenium performance.low-prio-threads 32

tuned-adm profile random-io        (a profile I added on CentOS 7)


cat /usr/lib/tuned/random-io/tuned.conf
===
[main]
summary=Optimize for Gluster virtual machine storage
include=throughput-performance

[sysctl]

vm.dirty_ratio = 5
vm.dirty_background_ratio = 2


Any more optimization to add to this?


Guy

--
Guy Boisvert, ing.
IngTegration inc.
http://www.ingtegration.com
https://www.linkedin.com/in/guy-boisvert-8990487


CONFIDENTIALITY NOTICE : Proprietary/Confidential Information
belonging to IngTegration Inc. and its affiliates may be
contained in this message. If you are not a recipient
indicated or intended in this message (or responsible for
delivery of this message to such person), or you think for
any reason that this message may have been addressed to you
in error, you may not use or copy or deliver this message to
anyone else. In such case, you should destroy this message
and are asked to notify the sender by reply email.



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] split brain heal - socket not connected

2019-08-23 Thread Benedikt Kaleß
Hi,

unfortunately I have a directory in a split-brain state.

If I run

gluster volume heal  split-brain source-brick gluster:/glusterfs/brick

I get

socket not connected.

How can I manually heal that directory?
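
(For reference, the usual CLI steps for this situation; "VOLNAME" is a placeholder for the real volume name:)

# list the entries currently in split-brain
gluster volume heal VOLNAME info split-brain
# "socket not connected" usually points at a brick or self-heal daemon
# that is down or unreachable, so check their status first
gluster volume status VOLNAME
# then retry the source-brick based heal, or resolve per file by policy
gluster volume heal VOLNAME split-brain source-brick gluster:/glusterfs/brick
gluster volume heal VOLNAME split-brain latest-mtime <FILE>   # path as seen from the volume root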

Best regards and thanks in advance

Bene

-- 
forumZFD
Entschieden für Frieden|Committed to Peace

Benedikt Kaleß
Leiter Team IT|Head team IT

Forum Ziviler Friedensdienst e.V.|Forum Civil Peace Service
Am Kölner Brett 8 | 50825 Köln | Germany  

Tel 0221 91273233 | Fax 0221 91273299 | 
http://www.forumZFD.de 

Vorstand nach § 26 BGB, einzelvertretungsberechtigt|Executive Board:
Oliver Knabe (Vorsitz|Chair), Sonja Wiekenberg-Mlalandle, Alexander Mauz  
VR 17651 Amtsgericht Köln

Spenden|Donations: IBAN DE37 3702 0500 0008 2401 01 BIC BFSWDE33XXX 

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users