All,
We've found that the perf xlators io-cache and read-ahead do not add any
performance improvement. At best, read-ahead is redundant due to kernel
read-ahead, and at worst, io-cache degrades performance for workloads that
don't involve re-reads. Given that VFS already has both these functionalities
https://review.gluster.org/22203
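For reference, the two xlators under discussion can be toggled per volume
through the standard `volume set` interface; a minimal sketch (the volume
name `myvol` is a placeholder, not from the thread):

```shell
# Disable the client-side caching xlators under discussion.
# "myvol" is a placeholder volume name.
gluster volume set myvol performance.read-ahead off
gluster volume set myvol performance.io-cache off

# Confirm the current values.
gluster volume get myvol performance.read-ahead
gluster volume get myvol performance.io-cache
```

These commands need a live gluster cluster, so they are shown only as a
configuration fragment.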
On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa
wrote:
> All,
>
> We've found that the perf xlators io-cache and read-ahead do not add any
> performance improvement. At best, read-ahead is redundant due to kernel
> read-ahead, and at worst, io-cache is degrading the performance
Hi,
I have observed that the test case ./tests/bugs/distribute/bug-1161311.t is
timing out on the build server while running centos regression on one of my
patches: https://review.gluster.org/22166
I ran the test case in a loop: for i in {1..30}; do time prove -vf
./tests/bugs/distribute/bug-1161311.t; done
On Tue, Feb 12, 2019 at 7:16 PM Mohit Agrawal wrote:
> Hi,
>
> I have observed that the test case ./tests/bugs/distribute/bug-1161311.t is
> getting timed out on the build server at the time of running centos
> regression on one of my patches https://

I've seen failures of this too in some of my patches.
Hi,
Now that all the RAX builders have been moved to AWS, we are planning to
migrate the softserve application to AWS completely.
As part of this migration, we're bringing softserve down for a few
days, so softserve will not be able to lend any machines until we
are ready with the migrated code.
Thanks
I'll take a look at this today. The logs indicate the test completed in
under 3 minutes but something seems to be holding up the cleanup.
On Tue, 12 Feb 2019 at 19:30, Raghavendra Gowdappa
wrote:
>
>
> On Tue, Feb 12, 2019 at 7:16 PM Mohit Agrawal wrote:
>
>> Hi,
>>
>> I have observed the test
On Wed, Feb 13, 2019 at 9:51 AM Nithya Balachandran
wrote:
> I'll take a look at this today. The logs indicate the test completed in
> under 3 minutes but something seems to be holding up the cleanup.
>
>
A quick look at some successful runs shows output like the one below:
--
*17:44:49* ok 57, LINENUM:1
On Wed, Feb 13, 2019 at 9:54 AM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:
>
>
> On Wed, Feb 13, 2019 at 9:51 AM Nithya Balachandran
> wrote:
>
>> I'll take a look at this today. The logs indicate the test completed in
>> under 3 minutes but something seems to be holding up the cleanup.
The volume is not stopped before unmounting the bricks. I will send a fix.
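A sketch of the ordering being described, using the regression test
framework's `TEST`/`EXPECT`/`$CLI`/`$V0`/`$M0` conventions (the actual
cleanup lines in bug-1161311.t may differ):

```shell
# Stop the volume first, so no brick process is still writing
# when the backend filesystems go away.
TEST $CLI volume stop $V0
EXPECT 'Stopped' volinfo_field $V0 'Status'

# Only now is it safe to unmount the client and the brick backends.
umount $M0
```

This fragment only runs inside the gluster regression harness, so it is
shown as a sketch rather than a standalone script.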
On Wed, 13 Feb 2019 at 10:00, Raghavendra Gowdappa
wrote:
>
>
> On Wed, Feb 13, 2019 at 9:54 AM Amar Tumballi Suryanarayan <
> atumb...@redhat.com> wrote:
>
>>
>>
>> On Wed, Feb 13, 2019 at 9:51 AM Nithya Balachandran
>>
On Tue, Feb 12, 2019 at 11:09 PM Darrell Budic
wrote:
> Is there an example of a custom profile you can share for my ovirt use
> case (with gfapi enabled)?
>
I was speaking about a group setting like "group metadata-cache". It's just
the set of custom options one would turn on for a class of applications.
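For illustration, applying a group profile is a single `volume set`, and a
custom profile is just a file of option lines dropped into glusterd's groups
directory. The volume name `myvol` and the profile name `ovirt-gfapi` are
placeholders, not names from the thread:

```shell
# Apply the bundled metadata-cache options to a volume.
# "myvol" is a placeholder volume name.
gluster volume set myvol group metadata-cache

# A custom profile is a file of key=value lines under
# /var/lib/glusterd/groups/, e.g. a hypothetical "ovirt-gfapi":
#   performance.read-ahead=off
#   performance.io-cache=off
# It is then applied the same way:
gluster volume set myvol group ovirt-gfapi
```

These commands require a live cluster, so this is a configuration fragment
only.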
On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa
wrote:
> All,
>
> We've found perf xlators io-cache and read-ahead not adding any
> performance improvement. At best read-ahead is redundant due to kernel
> read-ahead
>
One thing we are still figuring out is whether kernel read-ahead is
tunable.
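For what it's worth, kernel read-ahead is tunable for block devices, and for
a FUSE mount it is governed by the mount's backing-dev-info (bdi) entry. A
sketch (the device name and bdi id are examples, not from this thread):

```shell
# Read-ahead for a block device, in 512-byte sectors:
blockdev --getra /dev/sda        # query
blockdev --setra 256 /dev/sda    # set to 128 KiB

# Equivalent sysfs knob, in KiB:
cat /sys/block/sda/queue/read_ahead_kb

# For a FUSE mount, find its bdi (major:minor, major 0) and tune:
cat /sys/class/bdi/0:42/read_ahead_kb   # "0:42" is an example id
```

These commands are hardware- and mount-specific, so they are shown as a
configuration fragment rather than something to run verbatim.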
On Wed, Feb 13, 2019 at 10:51 AM Raghavendra Gowdappa
wrote:
>
>
> On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa
> wrote:
>
>> All,
>>
>> We've found perf xlators io-cache and read-ahead not adding any
>> performance improvement. At best read-ahead is redundant due to kernel
>> read-ahead
On Wed, Feb 13, 2019 at 11:16 AM Manoj Pillai wrote:
>
>
> On Wed, Feb 13, 2019 at 10:51 AM Raghavendra Gowdappa
> wrote:
>
>>
>>
>> On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa
>> wrote:
>>
>>> All,
>>>
>>> We've found perf xlators io-cache and read-ahead not adding any
>>> performance