Re: [PATCH mm] slab: implement bulking for SLAB allocator
On Tue, 8 Sep 2015, Jesper Dangaard Brouer wrote:

> Also notice how well bulking maintains the performance when the bulk
> size increases (which is a sore spot for the slub allocator).

Well, you are not actually completing the free action in SLAB. This is
simply queueing the item to be freed later. Also, was this test done on
a NUMA system? Alien caches at some point come into the picture.

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH mm] slab: implement bulking for SLAB allocator
On Tue, 8 Sep 2015 10:22:32 -0500 (CDT), Christoph Lameter wrote:
> On Tue, 8 Sep 2015, Jesper Dangaard Brouer wrote:
>
> > Also notice how well bulking maintains the performance when the bulk
> > size increases (which is a sore spot for the slub allocator).
>
> Well, you are not actually completing the free action in SLAB. This is
> simply queueing the item to be freed later. Also, was this test done on
> a NUMA system? Alien caches at some point come into the picture.

This test was a single-CPU benchmark with no congestion or concurrency.
But the code was compiled with CONFIG_NUMA=y.

I don't know the SLAB code very well, but the kmem_cache_node->list_lock
looks like a scalability issue. I guess that is what you are referring
to ;-)

--
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer
Re: [PATCH mm] slab: implement bulking for SLAB allocator
On Tue, 8 Sep 2015, Jesper Dangaard Brouer wrote:

> This test was a single-CPU benchmark with no congestion or concurrency.
> But the code was compiled with CONFIG_NUMA=y.
>
> I don't know the SLAB code very well, but the kmem_cache_node->list_lock
> looks like a scalability issue. I guess that is what you are referring
> to ;-)

That lock can be mitigated, like in SLUB, by increasing per-cpu
resources. The problem in SLAB is the categorization of objects on free
as to which node they came from, and the use of arrays of pointers to
avoid freeing the object to the object-tracking metadata structures in
the slab page. The arrays of pointers have to be replicated for each
node, each slab and each processor.