On 04/11/2014 08:07 PM, Christoph Lameter wrote:
> On Thu, 3 Apr 2014, Vladimir Davydov wrote:
>
>> --- a/include/linux/slab.h
>> +++ b/include/linux/slab.h
>> @@ -358,16 +358,7 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
>>  #include <linux/slub_def.h>
>>  #endif
>>
>> -static __always_inline void *
>> -kmalloc_order(size_t size, gfp_t flags, unsigned int order)
On Thu, 3 Apr 2014 19:05:59 +0400, Vladimir Davydov wrote:
> Currently to allocate a page that should be charged to kmemcg (e.g.
> threadinfo), we pass the __GFP_KMEMCG flag to the page allocator. The
> page allocated is then to be freed by free_memcg_kmem_pages. Apart from
> looking asymmetrical, this also requires intrusion into the general
> allocation path. So let's […]