On 2020-03-27 05:28, Christian Reiss wrote:
Hey Alex,
you too, thanks for writing.
I'm on 64MB, as per the oVirt default. We tried no sharding, 128MB
sharding, and 64MB sharding (always copying the disk). There was no
increase or decrease in disk speed either way.
Besides losing HA capabilities, what other caveats?
On March 27, 2020 2:49:13 PM GMT+02:00, Jorick Astrego
wrote:
>
>On 3/24/20 7:25 PM, Alex McWhirter wrote:
>> Red hat also recommends a shard size of 512mb, it's actually the only
>> shard size they support. Also check the chunk size on the LVM thin
>> pools running the bricks, should be at least 2mb.
On March 27, 2020 11:26:25 AM GMT+02:00, Christian Reiss
wrote:
>Hey Jayme,
>
>thanks for replying; sorry for the delay.
>If I am understanding this right, there is no real official way to
>enable libgfapi. If you somehow manage to get it running then you will
>lose HA capabilities, which is something we like on our production servers.
On 3/24/20 7:25 PM, Alex McWhirter wrote:
> Red hat also recommends a shard size of 512mb, it's actually the only
> shard size they support. Also check the chunk size on the LVM thin
> pools running the bricks, should be at least 2mb. Note that changing
> the shard size only applies to new VM disks after the change.
Christian,
I've been following along with interest, as I've also been trying
everything I can to improve gluster performance in my HCI cluster. My issue
is mostly latency-related, and my workloads are typically small-file
operations, which have been especially challenging.
Couple of things
1. Abou
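The small-file latency testing discussed above can be made reproducible with fio. A minimal sketch, assuming fio is installed and the gluster volume is FUSE-mounted; the mount path and job parameters below are illustrative, not from the thread:

```shell
# Hypothetical FUSE mount point of the gluster storage domain; adjust to your setup.
TESTDIR=/rhev/data-center/mnt/glusterSD/testdir

# 4k random writes with direct I/O, roughly approximating a small-file VM workload.
# Watch the "clat" (completion latency) percentiles in the output, not just bandwidth.
fio --name=smallfile-latency \
    --directory="$TESTDIR" \
    --rw=randwrite --bs=4k --size=256M \
    --ioengine=libaio --direct=1 \
    --numjobs=1 --runtime=60 --time_based \
    --group_reporting
```

Running the same job file against local disk and against the gluster mount makes the FUSE overhead visible directly.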
On 3/27/20 11:01 AM, Christian Reiss wrote:
> Hey Strahil,
>
> as always: thanks!
>
> On 24/03/2020 12:23, Strahil Nikolov wrote:
>
>> performance.write-behind-window-size: 64MB (shard size)
>
> This one doubled my speed from 200 MB/s to 400 MB/s!!
>
> I think this is where the meat is at.
>
> -Chris.
Hey,
thanks for writing. If I go with "don't choose local" my speed drops
dramatically (halving). Speed between the hosts is okay (tested), but for
some odd reason the MTU is still at 1500. I was sure I set it to
jumbo/9k. Oh well.
Not during runtime. I can hear the gluster scream if the network
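The MTU surprise above is easy to catch. A quick check-and-set sketch; the interface name is an assumption, and the MTU must match on every host and on the switch ports in between:

```shell
# Check the current MTU on the storage interface (interface name is an assumption):
ip link show dev ens1f0 | grep mtu

# Raise it to jumbo frames:
ip link set dev ens1f0 mtu 9000

# Confirm jumbo frames actually pass end to end without fragmentation
# (8972 = 9000 minus 28 bytes of IP/ICMP overhead):
ping -M do -s 8972 other-host
```

If the ping fails with "message too long", some hop in the path is still at 1500.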
Hey Strahil,
as always: thanks!
On 24/03/2020 12:23, Strahil Nikolov wrote:
> Hey Chris,
> What type is your VM?
CentOS7.
> Try with 'High Performance' one (there is good RH documentation on that
> topic).
I was googly-eyeing that as well. Will try that tonight :)
> 1. Check the VM disk scheduler
Hey Alex,
you too, thanks for writing.
I'm on 64MB, as per the oVirt default. We tried no sharding, 128MB
sharding, and 64MB sharding (always copying the disk). There was no
increase or decrease in disk speed either way.
Besides losing HA capabilities, what other caveats?
-Chris.
On 24/03/20
Hey Jayme,
thanks for replying; sorry for the delay.
If I am understanding this right, there is no real official way to
enable libgfapi. If you somehow manage to get it running then you will
lose HA capabilities, which is something we like on our production servers.
The most recent post I cou
Hey,
thanks for writing. Sorry about the delay.
On 25/03/2020 00:25, Nir Soffer wrote:
> These settings mean:
>
>> performance.strict-o-direct: on
>> network.remote-dio: enable
>
> That you are using direct I/O both on the client and server side.
I changed them to off, to no avail. Yields the s
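For reference, toggling the two options Nir mentions is done per volume. A sketch, with the volume name `data` assumed; oVirt's gluster virt profile normally expects both on:

```shell
# Assumed volume name "data". These enable direct I/O on client and server side:
gluster volume set data performance.strict-o-direct on
gluster volume set data network.remote-dio enable

# The opposite combination, as tried in the message above:
gluster volume set data performance.strict-o-direct off
gluster volume set data network.remote-dio disable
```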
On Mon, Mar 23, 2020 at 11:44 PM Christian Reiss
wrote:
>
> Hey folks,
>
> gluster related question. Having SSDs in a RAID that can do 2 GB/s writes
> and reads (actually above, but meh) in a 3-way HCI cluster connected
> over a 10Gbit connection, things are pretty slow inside gluster.
> I have these se
I strongly believe that the FUSE mount is the real reason for poor performance
in HCI, and these minor gluster and other tweaks won't satisfy anyone seeking
i/o performance. Enabling libgfapi is probably the best option. Red Hat has
recently closed bug reports related to libgfapi citing won't fix and one
c
Red hat also recommends a shard size of 512mb, it's actually the only
shard size they support. Also check the chunk size on the LVM thin pools
running the bricks, should be at least 2mb. Note that changing the shard
size only applies to new VM disks after the change. Changing the chunk
size req
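Alex's two recommendations map onto a gluster option and an LVM thin-pool property. A sketch, where the volume, VG, and pool names are assumptions; note the shard size affects only disks created afterwards, and a thin pool's chunk size cannot be changed after creation:

```shell
# Shard size applies only to newly created VM disks (assumed volume name "data"):
gluster volume set data features.shard-block-size 512MB

# Inspect the chunk size of the LVM thin pool backing a brick
# (VG/pool names are assumptions):
lvs -o lv_name,chunk_size vg_brick/brick_pool

# Chunk size is fixed at pool creation; a new pool with a 2MB chunk size
# would be created like this before carving out brick LVs:
lvcreate --type thin-pool --chunksize 2m -L 1T -n brick_pool vg_brick
```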
On March 24, 2020 7:33:16 PM GMT+02:00, Darrell Budic
wrote:
Christian,
Adding on to Strahil’s notes, make sure you’re using jumbo MTUs on servers and
client host nodes. Making sure you’re using appropriate disk schedulers on
hosts and VMs is important, worth double checking that it’s doing what you
think it is. If you are only HCI, gluster’s choose-local
On March 24, 2020 11:20:10 AM GMT+02:00, Christian Reiss
wrote:
Hey Strahil,
seems you're the go-to-guy with pretty much all my issues. I thank you
for this and your continued support. Much appreciated.
200 MB/s reads, however, seem more like a broken config or malfunctioning
gluster than something requiring performance tweaks. I enabled profiling so I
have real life data
On March 24, 2020 12:08:08 AM GMT+02:00, Jayme wrote:
I too struggle with speed issues in hci. Latency is a big problem with
writes for me especially when dealing with small file workloads. How are
you testing exactly?
Look into enabling libgfapi and try some comparisons with that. People have
been saying it’s much faster, but it’s not a default opti
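For anyone wanting to try the libgfapi comparison Jayme suggests: in oVirt this is gated by an engine-config key. A sketch of the commonly described procedure; check your engine version and the HA caveats discussed earlier in the thread before enabling it:

```shell
# On the oVirt engine host. LibgfApiSupported is the engine-config key
# controlling libgfapi disk access; some versions require a --cver argument.
engine-config -s LibgfApiSupported=true
systemctl restart ovirt-engine

# Running VMs pick up libgfapi access only after a full shutdown and start,
# not a guest reboot.
```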