fyi: we have 3 servers, each with 2 SW RAID10 arrays used as bricks in a
replica 3 setup (so 2 volumes); the default values set by the OS (Debian
Stretch) are:
/dev/md3
Array Size : 29298911232 (27941.62 GiB 30002.09 GB)
/sys/block/md3/queue/read_ahead_kb : 3027
/dev/md4
Array Size : 19532607488
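For comparison on other setups, those defaults can be read straight from
sysfs; a minimal sketch, assuming the same md3/md4 device names as above:

  # print the kernel read-ahead window for each array
  for dev in md3 md4; do
    printf '%s: ' "$dev"
    cat "/sys/block/$dev/queue/read_ahead_kb"
  done

  # try a different window, e.g. 4096 KB (takes effect immediately,
  # but is not persistent across reboots)
  echo 4096 > /sys/block/md3/queue/read_ahead_kb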
On Wed, Feb 13, 2019 at 11:16 AM Manoj Pillai wrote:
> [...]
On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa wrote:
> All,
>
> We've found perf xlators io-cache and read-ahead not adding any
> performance improvement. At best read-ahead is redundant due to kernel
> read-ahead [...]
One thing we are still figuring out is whether kernel read-ahead is [...]
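A rough way to gauge this from the client side is a cold-cache sequential
read through the mount; a sketch, assuming a FUSE mount at /mnt/glustervol
and a large test file (both placeholders):

  # drop the page cache so reads are served from the volume, not cache
  sync; echo 3 > /proc/sys/vm/drop_caches

  # time a sequential read; rerun with read-ahead toggled on/off to compare
  dd if=/mnt/glustervol/testfile of=/dev/null bs=1M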
On Tue, Feb 12, 2019 at 11:09 PM Darrell Budic wrote:
> Is there an example of a custom profile you can share for my ovirt use
> case (with gfapi enabled)?

I was speaking about a group setting like "group metadata-cache". It's just
the custom options one would turn on for a class of [...]
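For context, the stock group profiles are plain option=value files under
/var/lib/glusterd/groups/ and are applied with the group keyword; a sketch,
where the volume name myvol and the custom profile ovirt-custom are
placeholders:

  # apply the stock metadata-cache profile
  gluster volume set myvol group metadata-cache

  # a custom profile is just another file in that directory, e.g.
  # /var/lib/glusterd/groups/ovirt-custom containing lines like:
  #   performance.read-ahead=off
  #   performance.io-cache=off
  gluster volume set myvol group ovirt-custom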
Is there an example of a custom profile you can share for my ovirt use case
(with gfapi enabled)? Or are you just talking about the standard group settings
for virt as a custom profile?
> On Feb 12, 2019, at 7:22 AM, Raghavendra Gowdappa wrote:
>
> https://review.gluster.org/22203
On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa wrote:
All,

We've found perf xlators io-cache and read-ahead not adding any performance
improvement. At best read-ahead is redundant due to kernel read-ahead, and
at worst io-cache degrades performance for workloads that don't involve
re-reads. Given that the VFS already has both of these [...]
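For anyone wanting to verify this on their own workload, both xlators can
be toggled per volume; a sketch, with myvol as a placeholder volume name:

  # turn the client-side caches off and rerun the workload
  gluster volume set myvol performance.read-ahead off
  gluster volume set myvol performance.io-cache off

  # re-enable to compare
  gluster volume set myvol performance.read-ahead on
  gluster volume set myvol performance.io-cache on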
Not yet, but we are discussing an interim release. It is going to take a
couple of days to review the fixes, so not before then. We will update the
list with dates once we decide.

On Tue, 12 Feb 2019 at 11:46, Artem Russakovskii wrote:
> Awesome. But is there a release schedule and an ETA for [...]