On Fri, 2012-10-19 at 09:03 +0900, JoonSoo Kim wrote:
> Hello, Eric.
> Thank you very much for a kind comment about my question.
> I have one more question related to network subsystem.
> Please let me know what I misunderstand.
>
> 2012/10/14 Eric Dumazet :
> > In latest kernels, skb->head no longer use kmalloc()/kfree(), so SLAB vs
> > SLUB is less a concern for
On Wed, Oct 17, 2012 at 1:33 PM, Tim Bird wrote:
> On 10/17/2012 12:20 PM, Shentino wrote:
>> Potentially stupid question
>>
>> But is SLAB the one where all objects per cache have a fixed size and
>> thus you don't have any bookkeeping overhead for the actual
>> allocations?
>>
>> I remember
On Wed, Oct 17, 2012 at 5:58 PM, Tim Bird wrote:
> On 10/17/2012 12:13 PM, Eric Dumazet wrote:
>> On Wed, 2012-10-17 at 11:45 -0700, Tim Bird wrote:
>>
>>> 8G is a small web server? The RAM budget for Linux on one of
>>> Sony's cameras was 10M. We're not merely not in the same ballpark -
>>>
On 10/17/2012 12:13 PM, Eric Dumazet wrote:
> On Wed, 2012-10-17 at 11:45 -0700, Tim Bird wrote:
>
>> 8G is a small web server? The RAM budget for Linux on one of
>> Sony's cameras was 10M. We're not merely not in the same ballpark -
>> you're in a ballpark and I'm trimming bonsai trees... :-)
On 10/17/2012 12:20 PM, Shentino wrote:
> Potentially stupid question
>
> But is SLAB the one where all objects per cache have a fixed size and
> thus you don't have any bookkeeping overhead for the actual
> allocations?
>
> I remember something about one of the allocation mechanisms being
>
On Wed, Oct 17, 2012 at 12:13 PM, Eric Dumazet wrote:
> On Wed, 2012-10-17 at 11:45 -0700, Tim Bird wrote:
>
>> 8G is a small web server? The RAM budget for Linux on one of
>> Sony's cameras was 10M. We're not merely not in the same ballpark -
>> you're in a ballpark and I'm trimming bonsai
On Wed, 2012-10-17 at 11:45 -0700, Tim Bird wrote:
> 8G is a small web server? The RAM budget for Linux on one of
> Sony's cameras was 10M. We're not merely not in the same ballpark -
> you're in a ballpark and I'm trimming bonsai trees... :-)
>
Even laptops in 2012 have +4GB of ram.
(Maybe
On 10/16/2012 12:16 PM, Eric Dumazet wrote:
> On Tue, 2012-10-16 at 15:27 -0300, Ezequiel Garcia wrote:
>
>> Yes, we have some numbers:
>>
>> http://elinux.org/Kernel_dynamic_memory_analysis#Kmalloc_objects
>>
>> Are they too informal? I can add some details...
>>
>> They've been measured on a **very** minimal
On Tue, 2012-10-16 at 15:27 -0300, Ezequiel Garcia wrote:
> Yes, we have some numbers:
>
> http://elinux.org/Kernel_dynamic_memory_analysis#Kmalloc_objects
>
> Are they too informal? I can add some details...
>
> They've been measured on a **very** minimal setup, almost every option
> is
On Thu, 11 Oct 2012, Ezequiel Garcia wrote:
> * Is SLAB a proper choice? Or is it just historical and has never been
> re-evaluated?
> * Does the average embedded guy know which allocator to choose
> and what's the impact on his platform?
My current ideas on this subject matter is to get to a
On Tue, 16 Oct 2012, Ezequiel Garcia wrote:
> It might be worth reminding that very small systems can use SLOB
> allocator, which does not suffer from this kind of fragmentation.
Well, I have never seen non-experimental systems that use SLOB. Others
have claimed they exist.
--
To unsubscribe
On Mon, 15 Oct 2012, David Rientjes wrote:
> This type of workload that really exhibits the problem with remote freeing
> would suggest that the design of slub itself is the problem here.
There is a tradeoff here between spatial data locality and temporal
locality. Slub always frees to the queue
On Tue, Oct 16, 2012 at 3:44 PM, Tim Bird wrote:
> On 10/16/2012 11:27 AM, Ezequiel Garcia wrote:
>> On Tue, Oct 16, 2012 at 3:07 PM, Tim Bird wrote:
>>> On 10/16/2012 05:56 AM, Eric Dumazet wrote:
On Tue, 2012-10-16 at 09:35 -0300, Ezequiel Garcia wrote:
> Now, returning to the
On 10/16/2012 11:27 AM, Ezequiel Garcia wrote:
> On Tue, Oct 16, 2012 at 3:07 PM, Tim Bird wrote:
>> On 10/16/2012 05:56 AM, Eric Dumazet wrote:
>>> On Tue, 2012-10-16 at 09:35 -0300, Ezequiel Garcia wrote:
>>>
Now, returning to the fragmentation. The problem with SLAB is that
its
On Tue, Oct 16, 2012 at 3:07 PM, Tim Bird wrote:
> On 10/16/2012 05:56 AM, Eric Dumazet wrote:
>> On Tue, 2012-10-16 at 09:35 -0300, Ezequiel Garcia wrote:
>>
>>> Now, returning to the fragmentation. The problem with SLAB is that
>>> its smallest cache available for kmalloced objects is 32 bytes;
On 10/16/2012 05:56 AM, Eric Dumazet wrote:
> On Tue, 2012-10-16 at 09:35 -0300, Ezequiel Garcia wrote:
>
>> Now, returning to the fragmentation. The problem with SLAB is that
>> its smallest cache available for kmalloced objects is 32 bytes;
>> while SLUB allows 8, 16, 24 ...
>>
>> Perhaps adding
On Tue, 2012-10-16 at 09:35 -0300, Ezequiel Garcia wrote:
> Now, returning to the fragmentation. The problem with SLAB is that
> its smallest cache available for kmalloced objects is 32 bytes;
> while SLUB allows 8, 16, 24 ...
>
> Perhaps adding smaller caches to SLAB might make sense?
> Is there
David,
On Mon, Oct 15, 2012 at 9:46 PM, David Rientjes wrote:
> On Sat, 13 Oct 2012, Ezequiel Garcia wrote:
>
>> But SLAB suffers from a lot more internal fragmentation than SLUB,
>> which I guess is a known fact. So memory-constrained devices
>> would waste more memory by using SLAB.
>
> Even
On Tue, 2012-10-16 at 10:28 +0900, JoonSoo Kim wrote:
> Hello, Eric.
>
> 2012/10/14 Eric Dumazet :
> > SLUB was really bad in the common workload you describe (allocations
> > done by one cpu, freeing done by other cpus), because all kfree() hit
> > the slow path and cpus contend in __slab_free()
Hello, Eric.
2012/10/14 Eric Dumazet :
> SLUB was really bad in the common workload you describe (allocations
> done by one cpu, freeing done by other cpus), because all kfree() hit
> the slow path and cpus contend in __slab_free() in the loop guarded by
> cmpxchg_double_slab(). SLAB has a cache
On Sat, 13 Oct 2012, Ezequiel Garcia wrote:
> But SLAB suffers from a lot more internal fragmentation than SLUB,
> which I guess is a known fact. So memory-constrained devices
> would waste more memory by using SLAB.
Even with slub's per-cpu partial lists?
On Sat, 13 Oct 2012, David Rientjes wrote:
> This was in August when preparing for LinuxCon, I tested netperf TCP_RR on
> two 64GB machines (one client, one server), four nodes each, with thread
> counts in multiples of the number of cores. SLUB does a comparable job,
> but once we have the
On Sat, 2012-10-13 at 02:51 -0700, David Rientjes wrote:
> On Thu, 11 Oct 2012, Andi Kleen wrote:
>
> > When did you last test? Our regressions had disappeared a few kernels
> > ago.
> >
>
> This was in August when preparing for LinuxCon, I tested netperf TCP_RR on
> two 64GB machines (one
Hi David,
On Sat, Oct 13, 2012 at 6:54 AM, David Rientjes wrote:
> On Fri, 12 Oct 2012, Ezequiel Garcia wrote:
>
>> >> SLUB is a non-starter for us and incurs a >10% performance degradation in
>> >> netperf TCP_RR.
>> >
>>
>> Where are you seeing that?
>>
>
> In my benchmarking results.
>
>>
On Fri, 12 Oct 2012, Ezequiel Garcia wrote:
> >> SLUB is a non-starter for us and incurs a >10% performance degradation in
> >> netperf TCP_RR.
> >
>
> Where are you seeing that?
>
In my benchmarking results.
> Notice that many defconfigs are for embedded devices,
> and many of them say "use
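For readers following along: the allocator being debated here is a build-time choice, so a defconfig pins exactly one of SLAB/SLUB/SLOB. A typical fragment looks like this (option names as in 2012-era kernels):

```
# In a defconfig / .config: exactly one allocator is selected.
# CONFIG_SLAB is not set
CONFIG_SLUB=y
# CONFIG_SLOB is not set

# Optional SLUB debugging, usually disabled on production/embedded builds:
CONFIG_SLUB_DEBUG=y
# CONFIG_SLUB_DEBUG_ON is not set
```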
On Thu, 11 Oct 2012, Andi Kleen wrote:
> When did you last test? Our regressions had disappeared a few kernels
> ago.
>
This was in August when preparing for LinuxCon, I tested netperf TCP_RR on
two 64GB machines (one client, one server), four nodes each, with thread
counts in multiples of
Hi,
On Thu, Oct 11, 2012 at 8:10 PM, Andi Kleen wrote:
> David Rientjes writes:
>
>> On Thu, 11 Oct 2012, Andi Kleen wrote:
>>
>>> > While I've always thought SLUB was the default and recommended allocator,
>>> > I'm surprised to find that it's not always the case:
>>>
>>> iirc the main
David Rientjes writes:
> On Thu, 11 Oct 2012, Andi Kleen wrote:
>
>> > While I've always thought SLUB was the default and recommended allocator,
>> > I'm surprised to find that it's not always the case:
>>
>> iirc the main performance reasons for slab over slub have mostly
>> disappeared, so in
On Thu, 11 Oct 2012, Andi Kleen wrote:
> > While I've always thought SLUB was the default and recommended allocator,
> > I'm surprised to find that it's not always the case:
>
> iirc the main performance reasons for slab over slub have mostly
> disappeared, so in theory slab could be finally
Ezequiel Garcia writes:
> Hello,
>
> While I've always thought SLUB was the default and recommended allocator,
> I'm surprised to find that it's not always the case:
iirc the main performance reasons for slab over slub have mostly
disappeared, so in theory slab could be finally deprecated now.