Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-13 Thread Darrell Budic
Ah, ok, that’s what I thought. Then I have no complaints about improved defaults for the fuse case, as long as the use-case groups retain appropriately optimized settings. Thanks!
> On Feb 12, 2019, at 11:14 PM, Raghavendra Gowdappa wrote:
>> On Tue, Feb 12, 2019 at 11:09 PM Darrell …

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Hu Bert
fyi: we have 3 servers, each with 2 SW RAID10 arrays used as bricks in a replica-3 setup (so 2 volumes); the default values set by the OS (Debian Stretch) are:
/dev/md3 — Array Size: 29298911232 (27941.62 GiB / 30002.09 GB); /sys/block/md3/queue/read_ahead_kb: 3027
/dev/md4 — Array Size: 19532607488 …
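The kernel read-ahead values quoted above come straight from sysfs. A minimal sketch for inspecting them on any Linux host (the md3/md4 device names are from the mail above; the loop works for whatever devices your kernel exposes):

```shell
# List the kernel read-ahead setting (in KiB) for every block device.
for f in /sys/block/*/queue/read_ahead_kb; do
  [ -e "$f" ] || continue          # skip if the glob matched nothing
  printf '%s: %s KiB\n' "$f" "$(cat "$f")"
done

# To change a value (root required), e.g. 4096 KiB for /dev/md3:
#   echo 4096 > /sys/block/md3/queue/read_ahead_kb
```

Note the value is per-device and not persistent across reboots; distributions usually persist it via udev rules or tuned profiles.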

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
On Wed, Feb 13, 2019 at 11:16 AM Manoj Pillai wrote:
> On Wed, Feb 13, 2019 at 10:51 AM Raghavendra Gowdappa wrote:
>> On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa wrote:
>>> All, We've found that the perf xlators io-cache and read-ahead do not add any performance improvement. At best read-ahead is redundant due to kernel …

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Manoj Pillai
On Wed, Feb 13, 2019 at 10:51 AM Raghavendra Gowdappa wrote:
> On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa wrote:
>> All, We've found that the perf xlators io-cache and read-ahead do not add any performance improvement. At best read-ahead is redundant due to kernel …

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa wrote:
> All, We've found that the perf xlators io-cache and read-ahead do not add any performance improvement. At best read-ahead is redundant due to kernel read-ahead …
One thing we are still figuring out is whether kernel read-ahead is …

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
On Tue, Feb 12, 2019 at 11:09 PM Darrell Budic wrote:
> Is there an example of a custom profile you can share for my ovirt use case (with gfapi enabled)?
I was speaking about a group setting like "group metadata-cache". It's just the custom options one would turn on for a class of …
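A "group" setting like the one mentioned here applies a predefined bundle of volume options shipped with glusterd (plain-text files under /var/lib/glusterd/groups on the servers). A minimal sketch, where "myvol" and "my-custom-profile" are placeholder names, not anything from the thread:

```shell
# Apply the predefined metadata-cache group profile to a volume.
gluster volume set myvol group metadata-cache

# A custom profile is just a file of option=value lines dropped into
# /var/lib/glusterd/groups/ on the servers, e.g. a file "my-custom-profile":
#   performance.read-ahead=off
#   performance.io-cache=off
# It is then applied the same way:
gluster volume set myvol group my-custom-profile
```

This is how the stock "virt" profile for oVirt/VM workloads is implemented as well, so a custom profile for a gfapi use case would follow the same pattern.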

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Darrell Budic
Is there an example of a custom profile you can share for my oVirt use case (with gfapi enabled)? Or are you just talking about the standard group settings for virt as a custom profile?
> On Feb 12, 2019, at 7:22 AM, Raghavendra Gowdappa wrote:
> https://review.gluster.org/22203

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
https://review.gluster.org/22203

On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa wrote:
> All, We've found that the perf xlators io-cache and read-ahead do not add any performance improvement. At best read-ahead is redundant due to kernel read-ahead, and at worst io-cache degrades the …

[Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
All, We've found that the perf xlators io-cache and read-ahead do not add any performance improvement. At best, read-ahead is redundant due to kernel read-ahead, and at worst, io-cache degrades performance for workloads that don't involve re-reads. Given that the VFS already has both of these …
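Until the proposed default change lands, the same effect can be had per volume by turning the two translators off by hand. A sketch, with "myvol" as a placeholder volume name:

```shell
# Disable the two client-side perf xlators discussed in this thread.
gluster volume set myvol performance.read-ahead off
gluster volume set myvol performance.io-cache off

# Verify the settings took effect:
gluster volume get myvol performance.read-ahead
gluster volume get myvol performance.io-cache
```

Clients pick up the new volume graph on remount (or via the graph-change notification), so no server restart is needed.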