On Mon, 18 Nov 2013, Adrian Chadd wrote:
Remember that for Netflix, we have a mostly non-cacheable workload
(with some very specific exceptions!) and thus we churn through VM
pages at a prodigious rate: 20 Gbit/sec, or ~2.4 gigabytes a
second, or ~680,000 4-kilobyte pages a second. It's quite frightening
and it's only likely to increase.
On 18.11.2013 21:11, Jeff Roberson wrote:
On Mon, 18 Nov 2013, Alexander Motin wrote:
I've created patch, based on earlier work of avg@, to add back
pressure to UMA allocation caches. […]
On 18.11.2013 14:10, Adrian Chadd wrote:
On 18 November 2013 01:20, Alexander Motin wrote:
On 18.11.2013 10:41, Adrian Chadd wrote:
So, do you get any benefits from just the first one, or first two?
I don't see much reason to handle that in pieces. As I have described
above, each part has its own…
On 18.11.2013 11:45, Luigi Rizzo wrote:
On Mon, Nov 18, 2013 at 10:20 AM, Alexander Motin <m...@freebsd.org> wrote:
[…]
On 18.11.2013 10:41, Adrian Chadd wrote:
Your patch does three things:
* adds a couple new buckets;
These new buckets make bucket size self-tuning softer and more precise.
Without them there are buckets for 1, 5, 13, 29, ... items. While at
bigger sizes a difference of about 2x is fine, at the smallest
Hi!
Your patch does three things:
* adds a couple new buckets;
* reduces some lock contention
* does the aggressive backpressure.
So, do you get any benefits from just the first one, or first two?
-adrian
On 17 November 2013 15:09, Alexander Motin wrote:
> […]
Hi.
I've created patch, based on earlier work of avg@, to add back pressure
to UMA allocation caches. The problem of physical memory or KVA
exhaustion existed there for many years and it is quite critical now for
improving systems performance while keeping stability. Changes done in
memory allocation…