Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Hu Bert
fyi: we have 3 servers, each with 2 SW RAID10 arrays used as bricks in a
replica 3 setup (so 2 volumes); the default values set by the OS (Debian
stretch) are:

/dev/md3 Array Size : 29298911232 (27941.62 GiB 30002.09 GB)
/sys/block/md3/queue/read_ahead_kb : 3027
/dev/md4 Array Size : 19532607488
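For anyone wanting to check or tune these values, a quick sketch (device names as in the setup above; run as root to change anything):

  # current kernel read-ahead window, in KB
  cat /sys/block/md3/queue/read_ahead_kb

  # same value via blockdev, reported in 512-byte sectors
  blockdev --getra /dev/md3

  # change it for the running system (not persistent across reboots)
  echo 4096 > /sys/block/md3/queue/read_ahead_kb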

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
On Wed, Feb 13, 2019 at 11:16 AM Manoj Pillai wrote:
>
> On Wed, Feb 13, 2019 at 10:51 AM Raghavendra Gowdappa wrote:
>
>> On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa wrote:
>>
>>> All,
>>>
>>> We've found perf xlators io-cache and read-ahead not adding any

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Manoj Pillai
On Wed, Feb 13, 2019 at 10:51 AM Raghavendra Gowdappa wrote:
>
> On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa wrote:
>
>> All,
>>
>> We've found perf xlators io-cache and read-ahead not adding any
>> performance improvement. At best read-ahead is redundant due to kernel
>> read-ahead

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa wrote:
> All,
>
> We've found perf xlators io-cache and read-ahead not adding any
> performance improvement. At best read-ahead is redundant due to kernel
> read-ahead

One thing we are still figuring out is whether kernel read-ahead is
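One way to see what the kernel is actually using for a fuse mount is the mount's backing-device (BDI) entry in sysfs; a sketch, assuming a glusterfs fuse mount at /mnt/gluster:

  # field 3 of the mountinfo line is the mount's major:minor device number
  grep ' /mnt/gluster ' /proc/self/mountinfo | awk '{print $3}'

  # e.g. if that prints 0:51, the kernel read-ahead window (in KB) is:
  cat /sys/class/bdi/0:51/read_ahead_kb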

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
On Tue, Feb 12, 2019 at 11:09 PM Darrell Budic wrote:
> Is there an example of a custom profile you can share for my ovirt use
> case (with gfapi enabled)?
>
I was speaking about a group setting like "group metadata-cache". It's
just the custom options one would turn on for a class of
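For the record, a "group" is just a plain file of volume options under /var/lib/glusterd/groups/ on the servers, so a custom profile can be a hand-written file there. A sketch (the file name and option values below are only illustrative):

  # /var/lib/glusterd/groups/custom-vm-profile -- one key=value per line
  performance.read-ahead=off
  performance.io-cache=off
  network.remote-dio=enable

  # apply every option in the file in one shot:
  gluster volume set <volname> group custom-vm-profile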

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Darrell Budic
Is there an example of a custom profile you can share for my ovirt use
case (with gfapi enabled)? Or are you just talking about the standard
group settings for virt as a custom profile?

> On Feb 12, 2019, at 7:22 AM, Raghavendra Gowdappa wrote:
>
> https://review.gluster.org/22203

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
https://review.gluster.org/22203

On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa wrote:
> All,
>
> We've found perf xlators io-cache and read-ahead not adding any
> performance improvement. At best read-ahead is redundant due to kernel
> read-ahead and at worst io-cache is degrading the

[Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
All,

We've found the perf xlators io-cache and read-ahead not adding any
performance improvement. At best read-ahead is redundant due to kernel
read-ahead, and at worst io-cache degrades performance for workloads
that don't involve re-reads. Given that the VFS already has both these
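Until any defaults change, both xlators can already be disabled per volume with the usual volume-set options (substitute your volume name):

  gluster volume set <volname> performance.read-ahead off
  gluster volume set <volname> performance.io-cache off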

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-12 Thread Nithya Balachandran
Not yet, but we are discussing an interim release. It is going to take a
couple of days to review the fixes, so not before then. We will update
the list with dates once we decide.

On Tue, 12 Feb 2019 at 11:46, Artem Russakovskii wrote:
> Awesome. But is there a release schedule and an ETA for