Hello, Andrew.
2012/10/31 Andrew Morton a...@linux-foundation.org:
On Mon, 29 Oct 2012 04:12:53 +0900
Joonsoo Kim js1...@gmail.com wrote:
The pool_lock protects the page_address_pool from concurrent access,
but access to the page_address_pool is already protected by kmap_lock,
so remove it.
We can find a free page_address_map instance without the page_address_pool.
So remove it.
Cc: Mel Gorman m...@csn.ul.ie
Cc: Peter Zijlstra a.p.zijls...@chello.nl
Signed-off-by: Joonsoo Kim js1...@gmail.com
Reviewed-by: Minchan Kim minc...@kernel.org
diff --git a/mm/highmem.c b/mm/highmem.c
index
In flush_all_zero_pkmaps(), we have an index of the pkmap associated with the page.
Using this index, we can simply get the virtual address of the page.
So change it.
Cc: Mel Gorman m...@csn.ul.ie
Cc: Peter Zijlstra a.p.zijls...@chello.nl
Signed-off-by: Joonsoo Kim js1...@gmail.com
Reviewed-by: Minchan
change.
[2-3] are for clean-up and optimization.
These eliminate a useless lock operation and list management.
[4-5] are for optimization related to flush_all_zero_pkmaps().
Joonsoo Kim (5):
mm, highmem: use PKMAP_NR() to calculate an index of pkmap
mm, highmem: remove useless pool_lock
mm
To calculate an index of pkmap, using PKMAP_NR() is more understandable
and maintainable, so change it.
Cc: Mel Gorman m...@csn.ul.ie
Cc: Peter Zijlstra a.p.zijls...@chello.nl
Signed-off-by: Joonsoo Kim js1...@gmail.com
Reviewed-by: Minchan Kim minc...@kernel.org
diff --git a/mm/highmem.c b/mm
: Peter Zijlstra a.p.zijls...@chello.nl
Cc: Minchan Kim minc...@kernel.org
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index ef788b5..97ad208 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -32,6 +32,7 @@ static
The pool_lock protects the page_address_pool from concurrent access,
but access to the page_address_pool is already protected by kmap_lock.
So remove it.
Cc: Mel Gorman m...@csn.ul.ie
Cc: Peter Zijlstra a.p.zijls...@chello.nl
Signed-off-by: Joonsoo Kim js1...@gmail.com
Reviewed-by: Minchan Kim
Hello, Andrew.
2012/10/29 JoonSoo Kim js1...@gmail.com:
Hi, Minchan.
2012/10/29 Minchan Kim minc...@kernel.org:
Hi Joonsoo,
On Mon, Oct 29, 2012 at 04:12:51AM +0900, Joonsoo Kim wrote:
This patchset cleans up and optimizes highmem-related code.
[1] is just clean-up and doesn't introduce
Hello, Minchan.
2012/11/1 Minchan Kim minc...@kernel.org:
On Thu, Nov 01, 2012 at 01:56:36AM +0900, Joonsoo Kim wrote:
In the current code, after flush_all_zero_pkmaps() is invoked,
all pkmaps are re-iterated. This can be optimized if flush_all_zero_pkmaps()
returns the index of the first flushed entry
Hello, Glauber.
2012/11/2 Glauber Costa glom...@parallels.com:
On 11/02/2012 04:04 AM, Andrew Morton wrote:
On Thu, 1 Nov 2012 16:07:16 +0400
Glauber Costa glom...@parallels.com wrote:
Hi,
This work introduces the kernel memory controller for memcg. Unlike previous
submissions, this
text data bss dec hex filename
10022627 1443136 5722112 17187875 1064423 vmlinux
Cc: Christoph Lameter c...@linux.com
Signed-off-by: Joonsoo Kim js1...@gmail.com
---
With Christoph's patchset (common kmalloc caches:
'[15/15] Common Kmalloc cache determination'), which is not merged,
kmalloc() and kmalloc_node() of the SLUB aren't inlined when @flags & __GFP_DMA.
This patch optimizes this case,
so when @flags & __GFP_DMA, it will be inlined into generic code.
Cc: Christoph Lameter c...@linux.com
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/include/linux/slub_def.h
This patchset does minor cleanup of the workqueue code.
The first patch makes a minor behavior change; however, it is trivial.
The others don't make any functional difference.
These are based on v3.7-rc1
Joonsoo Kim (3):
workqueue: optimize mod_delayed_work_on() when @delay == 0
workqueue: trivial fix
After try_to_grab_pending(), __queue_delayed_work() is invoked
in mod_delayed_work_on(). When @delay == 0, we can call __queue_work()
directly in order to avoid setting a useless timer.
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index d951daa
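The @delay == 0 shortcut described above can be sketched as a small userspace mock (queue_work_mock, start_timer_mock, and the counters are hypothetical stand-ins, not the real workqueue API):

```c
#include <assert.h>

/* hypothetical counters standing in for __queue_work() and the timer path */
static int direct_queues, timers_armed;

static void queue_work_mock(void)  { direct_queues++; }
static void start_timer_mock(void) { timers_armed++; }

/* sketch of the patched mod_delayed_work_on() logic: when delay == 0,
 * queue the work directly instead of arming a zero-length timer */
static void mod_delayed_work_mock(unsigned long delay)
{
	if (delay == 0)
		queue_work_mock();
	else
		start_timer_mock();
}
```

A zero delay therefore never touches the timer machinery; the behavior for nonzero delays is unchanged.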
Commit 63d95a91 ('workqueue: use @pool instead of @gcwq or @cpu where
applicable') changed the approach to accessing nr_running.
Thus, wq_worker_waking_up() doesn't use @cpu anymore.
Remove it and the comment related to it.
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/kernel/sched
The return type of work_busy() is unsigned int.
There is a return statement returning the boolean value 'false' in work_busy().
It is not a problem, because 'false' is treated as '0'.
However, fixing it makes the code more robust.
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/kernel/workqueue.c b
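The type fix above can be illustrated with a minimal sketch (work_busy_mock and its parameters are made up for illustration; only the WORK_BUSY_* bit values mirror the real interface):

```c
/* sketch: work_busy() returns a bitmask as unsigned int; returning the
 * boolean 'false' happened to work because false converts to 0, but
 * returning 0 explicitly matches the declared return type */
#define WORK_BUSY_PENDING 1
#define WORK_BUSY_RUNNING 2

static unsigned int work_busy_mock(int has_gcwq, int pending, int running)
{
	unsigned int ret = 0;

	if (!has_gcwq)
		return 0;	/* was 'return false;' before the fix */
	if (pending)
		ret |= WORK_BUSY_PENDING;
	if (running)
		ret |= WORK_BUSY_RUNNING;
	return ret;
}
```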
Hello, Glauber.
2012/10/23 Glauber Costa glom...@parallels.com:
On 10/22/2012 06:45 PM, Christoph Lameter wrote:
On Mon, 22 Oct 2012, Glauber Costa wrote:
+ * kmem_cache_free - Deallocate an object
+ * @cachep: The cache the allocation was from.
+ * @objp: The previously allocated object.
2012/10/21 Tejun Heo t...@kernel.org:
On Sun, Oct 21, 2012 at 01:30:07AM +0900, Joonsoo Kim wrote:
Commit 63d95a91 ('workqueue: use @pool instead of @gcwq or @cpu where
applicable') changes an approach to access nr_running.
Thus, wq_worker_waking_up() doesn't use @cpu anymore.
Remove
2012/10/22 Christoph Lameter c...@linux.com:
On Sun, 21 Oct 2012, Joonsoo Kim wrote:
kmalloc() and kmalloc_node() of the SLUB aren't inlined when @flags &
__GFP_DMA.
This patch optimizes this case,
so when @flags & __GFP_DMA, it will be inlined into generic code.
__GFP_DMA is a rarely used
2012/10/23 Glauber Costa glom...@parallels.com:
On 10/23/2012 12:07 PM, Glauber Costa wrote:
On 10/23/2012 04:48 AM, JoonSoo Kim wrote:
Hello, Glauber.
2012/10/23 Glauber Costa glom...@parallels.com:
On 10/22/2012 06:45 PM, Christoph Lameter wrote:
On Mon, 22 Oct 2012, Glauber Costa wrote
Hi, Eric.
2012/10/23 Eric Dumazet eric.duma...@gmail.com:
On Tue, 2012-10-23 at 11:29 +0900, JoonSoo Kim wrote:
2012/10/22 Christoph Lameter c...@linux.com:
On Sun, 21 Oct 2012, Joonsoo Kim wrote:
kmalloc() and kmalloc_node() of the SLUB aren't inlined when @flags &
__GFP_DMA
Hi, Glauber.
2012/10/19 Glauber Costa glom...@parallels.com:
For the kmem slab controller, we need to record some extra
information in the kmem_cache structure.
Signed-off-by: Glauber Costa glom...@parallels.com
Signed-off-by: Suleiman Souhlal sulei...@google.com
CC: Christoph Lameter
2012/10/19 Glauber Costa glom...@parallels.com:
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6a1e096..59f6d54 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -339,6 +339,12 @@ struct mem_cgroup {
#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_INET)
struct
2012/10/24 Glauber Costa glom...@parallels.com:
On 10/24/2012 06:29 PM, Christoph Lameter wrote:
On Wed, 24 Oct 2012, Glauber Costa wrote:
Because of that, we either have to move all the entry points to the
mm/slab.h and rely heavily on the pre-processor, or include all .c files
in here.
2012/10/19 Glauber Costa glom...@parallels.com:
@@ -2930,9 +2937,188 @@ int memcg_register_cache(struct mem_cgroup *memcg,
struct kmem_cache *s)
void memcg_release_cache(struct kmem_cache *s)
{
+ struct kmem_cache *root;
+ int id = memcg_css_id(s->memcg_params->memcg);
+
+
for checking this.
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index a1135c6..1a65132 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -739,8 +739,10 @@ void wq_worker_waking_up(struct task_struct *task,
unsigned int cpu
2012/10/19 Joonsoo Kim js1...@gmail.com:
This patchset introduces the setup_timer_deferrable() macro.
Using it makes the code simple and understandable.
This patchset doesn't make any functional difference.
It is just for clean-up.
It is based on v3.7-rc1
Joonsoo Kim (2):
timer: add
2012/10/25 Christoph Lameter c...@linux.com:
On Wed, 24 Oct 2012, Pekka Enberg wrote:
So I hate this patch with a passion. We don't have any fastpaths in
mm/slab_common.c nor should we. Those should be allocator specific.
I have similar thoughts on the issue. Lets keep the fast paths
To calculate an index of pkmap, using PKMAP_NR() is more understandable
and maintainable, so change it.
Cc: Mel Gorman mgor...@suse.de
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/mm/highmem.c b/mm/highmem.c
index d517cd1..b3b3d68 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -99,7
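The index calculation can be sketched in userspace as follows; PKMAP_BASE and PAGE_SHIFT values here are made up for illustration (the real ones are architecture-specific), but the macro shapes mirror the kernel's PKMAP_NR()/PKMAP_ADDR():

```c
/* userspace sketch; the constants below are illustrative only */
#define PAGE_SHIFT	12
#define PKMAP_BASE	0xfe000000UL

/* index of the pkmap entry covering a pkmap virtual address */
#define PKMAP_NR(virt)	(((virt) - PKMAP_BASE) >> PAGE_SHIFT)
/* inverse: virtual address of pkmap entry 'nr' */
#define PKMAP_ADDR(nr)	(PKMAP_BASE + ((unsigned long)(nr) << PAGE_SHIFT))
```

Using PKMAP_NR() instead of open-coded pointer arithmetic makes the index/address relationship explicit, and PKMAP_ADDR() is the inverse used later in the series to get a page's virtual address from its pkmap index.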
We can find a free page_address_map instance without the page_address_pool.
So remove it.
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/mm/highmem.c b/mm/highmem.c
index 017bad1..731cf9a 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -323,11 +323,7 @@ struct page_address_map
of flush_all_zero_pkmaps()
and return the index of the last flushed entry.
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index ef788b5..0683869 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -32,6 +32,7 @@ static inline void
The pool_lock protects the page_address_pool from concurrent access,
but access to the page_address_pool is already protected by kmap_lock.
So remove it.
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/mm/highmem.c b/mm/highmem.c
index b3b3d68..017bad1 100644
--- a/mm/highmem.c
+++ b
().
Joonsoo Kim (5):
mm, highmem: use PKMAP_NR() to calculate an index of pkmap
mm, highmem: remove useless pool_lock
mm, highmem: remove page_address_pool list
mm, highmem: makes flush_all_zero_pkmaps() return index of last
flushed entry
mm, highmem: get virtual address of the page using
In flush_all_zero_pkmaps(), we have an index of the pkmap associated with the page.
Using this index, we can simply get the virtual address of the page.
So change it.
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/mm/highmem.c b/mm/highmem.c
index 65beb9a..1417f4f 100644
--- a/mm/highmem.c
2012/10/29 Minchan Kim minc...@kernel.org:
On Mon, Oct 29, 2012 at 04:12:55AM +0900, Joonsoo Kim wrote:
In the current code, after flush_all_zero_pkmaps() is invoked,
all pkmaps are re-iterated. This can be optimized if flush_all_zero_pkmaps()
returns the index of the flushed entry. With this index,
we can
Hi, Minchan.
2012/10/29 Minchan Kim minc...@kernel.org:
Hi Joonsoo,
On Mon, Oct 29, 2012 at 04:12:51AM +0900, Joonsoo Kim wrote:
This patchset cleans up and optimizes highmem-related code.
[1] is just clean-up and doesn't introduce any functional change.
[2-3] are for clean-up
Without defining ARCH=arm, building perf for Android ARM will fail,
because it needs architecture-specific files.
So add the related information to the documentation.
Signed-off-by: Joonsoo Kim js1...@gmail.com
Cc: Irina Tirdea irina.tir...@intel.com
Cc: David Ahern dsah...@gmail.com
Cc: Ingo Molnar
Commit 099a19d9 ('allow limited allocation before slab is online') changed the
method of allocating a chunk from kzalloc to pcpu_mem_alloc,
but missed changing the matching free operation.
It may not be a problem for now, but fix it for consistency.
Signed-off-by: Joonsoo Kim js1...@gmail.com
Cc
Hi, Glauber.
2012/10/19 Glauber Costa glom...@parallels.com:
We are able to match a cache allocation to a particular memcg. If the
task doesn't change groups during the allocation itself - a rare event,
this will give us a good picture about who is the first group to touch a
cache page.
2012/10/19 Glauber Costa glom...@parallels.com:
+void kmem_cache_destroy_memcg_children(struct kmem_cache *s)
+{
+ struct kmem_cache *c;
+ int i;
+
+ if (!s->memcg_params)
+ return;
+ if (!s->memcg_params->is_root_cache)
+ return;
+
+
There is no implementation of bootmem_arch_preferred_node(), and
a call to this function causes a compile error.
So remove it.
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/mm/bootmem.c b/mm/bootmem.c
index 434be4a..6f62c03e 100644
--- a/mm/bootmem.c
+++ b/mm/bootmem.c
@@ -589,19
The name of the function is not suitable anymore.
Removing the function and inlining its code at each call site
makes the code more understandable.
Additionally, we shouldn't allocate from bootmem
when slab_is_available(), so directly return kmalloc*'s return value.
Signed-off-by: Joonsoo Kim js1
Now, there is no code for CONFIG_HAVE_ARCH_BOOTMEM.
So remove it.
Cc: Haavard Skinnemoen hskinnem...@gmail.com
Cc: Hans-Christian Egtvedt egtv...@samfundet.no
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/arch/avr32/Kconfig b/arch/avr32/Kconfig
index 06e73bf..c2bbc9a 100644
--- a/arch
-foundation.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/arch/powerpc/platforms/cell/celleb_pci.c
b/arch/powerpc/platforms/cell/celleb_pci.c
index abc8af4..1735681 100644
--- a/arch/powerpc/platforms/cell/celleb_pci.c
+++ b/arch/powerpc
Commit ea96025a ('Don't use alloc_bootmem() in init_IRQ() path')
changed alloc_bootmem() to kzalloc(),
but missed changing free_bootmem() to kfree().
So correct it.
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/arch/powerpc/platforms/82xx/pq2ads-pci-pic.c
b/arch/powerpc/platforms
whether this vma is for hugetlb, so correct it
according to this purpose.
Cc: Alex Shi alex@intel.com
Cc: Thomas Gleixner t...@linutronix.de
Cc: Ingo Molnar mi...@redhat.com
Cc: H. Peter Anvin h...@zytor.com
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/arch/x86/mm/tlb.c b/arch/x86
2012/11/3 Minchan Kim minc...@kernel.org:
Hi Joonsoo,
On Sat, Nov 03, 2012 at 04:07:25AM +0900, JoonSoo Kim wrote:
Hello, Minchan.
2012/11/1 Minchan Kim minc...@kernel.org:
On Thu, Nov 01, 2012 at 01:56:36AM +0900, Joonsoo Kim wrote:
In current code, after flush_all_zero_pkmaps
Hi, Andrew.
2012/11/13 Andrew Morton a...@linux-foundation.org:
On Tue, 13 Nov 2012 01:31:55 +0900
Joonsoo Kim js1...@gmail.com wrote:
It is somehow strange that alloc_bootmem() returns a virtual address
while free_bootmem() requires a physical address.
Anyway, free_bootmem()'s first parameter should
2012/11/13 Minchan Kim minc...@kernel.org:
On Tue, Nov 13, 2012 at 09:30:57AM +0900, JoonSoo Kim wrote:
2012/11/3 Minchan Kim minc...@kernel.org:
Hi Joonsoo,
On Sat, Nov 03, 2012 at 04:07:25AM +0900, JoonSoo Kim wrote:
Hello, Minchan.
2012/11/1 Minchan Kim minc...@kernel.org
Hello, Eric.
2012/10/14 Eric Dumazet eric.duma...@gmail.com:
SLUB was really bad in the common workload you describe (allocations
done by one cpu, freeing done by other cpus), because all kfree() hit
the slow path and cpus contend in __slab_free() in the loop guarded by
cmpxchg_double_slab().
Now, we have a handy macro for initializing a deferrable timer.
Using it makes the code clean and easy to understand.
Additionally, in some driver code, use setup_timer() instead of init_timer().
This patch doesn't make any functional difference.
Signed-off-by: Joonsoo Kim js1...@gmail.com
Cc: Len
.
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/include/linux/timer.h b/include/linux/timer.h
index 8c5a197..5950276 100644
--- a/include/linux/timer.h
+++ b/include/linux/timer.h
@@ -151,6 +151,8 @@ static inline void init_timer_on_stack_key(struct
timer_list *timer,
#define setup_timer
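The proposed macro can be sketched with a minimal userspace mock of struct timer_list (the fields, init_timer_deferrable_mock, and the deferrable flag here are simplified stand-ins, not the real kernel definitions):

```c
/* minimal userspace mock of struct timer_list */
struct timer_list {
	void (*function)(unsigned long);
	unsigned long data;
	int deferrable;
};

static void init_timer_deferrable_mock(struct timer_list *t)
{
	t->deferrable = 1;
}

/* sketch of setup_timer_deferrable(): bundle the deferrable init with
 * setting the callback and its argument, mirroring setup_timer() */
#define setup_timer_deferrable(timer, fn, d)		\
	do {						\
		init_timer_deferrable_mock(timer);	\
		(timer)->function = (fn);		\
		(timer)->data = (d);			\
	} while (0)

static void my_callback(unsigned long data) { (void)data; }
```

One call then replaces the init_timer_deferrable()/function/data boilerplate at each site, which is the whole point of the clean-up.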
This patchset introduces the setup_timer_deferrable() macro.
Using it makes the code simple and understandable.
This patchset doesn't make any functional difference.
It is just for clean-up.
It is based on v3.7-rc1
Joonsoo Kim (2):
timer: add setup_timer_deferrable() macro
timer: use new
Hello, Eric.
Thank you very much for a kind comment about my question.
I have one more question related to network subsystem.
Please let me know what I misunderstand.
2012/10/14 Eric Dumazet eric.duma...@gmail.com:
In latest kernels, skb->head no longer uses kmalloc()/kfree(), so SLAB vs
SLUB is
migrate_pages() should return the number of pages not migrated, or an error code.
When unmap_and_move() returns -EAGAIN, the outer loop re-executes without
initialising nr_failed. This makes nr_failed over-counted.
So this patch corrects it by initialising nr_failed in the outer loop.
Signed-off-by: Joonsoo Kim
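The over-counting and its fix can be sketched as a userspace mock of the retry loop (unmap_and_move_mock, the page behaviors, and the pass limit are all made up for illustration): page 0 fails permanently, page 1 succeeds after two -EAGAIN retries. Without the reset, page 0 would be counted once per pass; with it, the final count is correct.

```c
#define EAGAIN 11

/* mock page outcomes: page 0 always fails permanently, page 1 returns
 * -EAGAIN twice and then migrates (values are illustrative) */
static int unmap_and_move_mock(int page, int pass)
{
	if (page == 0)
		return -1;			/* permanent failure */
	return (pass < 2) ? -EAGAIN : 0;	/* transient, then migrated */
}

/* sketch of the fixed loop: nr_failed is initialised at the top of each
 * pass, so a permanently-failed page is counted once, not once per pass */
static int migrate_pages_mock(int nr_pages)
{
	int pass, i, nr_failed = 0;

	for (pass = 0; pass < 10; pass++) {
		int retry = 0;

		nr_failed = 0;	/* the fix: reset per pass */
		for (i = 0; i < nr_pages; i++) {
			int rc = unmap_and_move_mock(i, pass);

			if (rc == -EAGAIN)
				retry = 1;
			else if (rc < 0)
				nr_failed++;
		}
		if (!retry)
			break;
	}
	return nr_failed;	/* number of pages not migrated */
}
```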
Additionally, correct the comment above do_migrate_pages().
Signed-off-by: Joonsoo Kim js1...@gmail.com
Cc: Sasha Levin levinsasha...@gmail.com
Cc: Christoph Lameter c...@linux.com
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 1d771e4..f7df271 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -948,7 +948,7
migrate_pages() would return a positive value in some failure cases,
so 'ret > 0 ? 0 : ret' may be wrong.
This fixes it and removes one dead statement.
Signed-off-by: Joonsoo Kim js1...@gmail.com
Cc: Michal Nazarewicz min...@mina86.com
Cc: Marek Szyprowski m.szyprow...@samsung.com
Cc: Minchan Kim minc
The move_pages() syscall may return success in the case that
do_move_page_to_node_array() returns a positive value, which means migration failed.
This patch changes the return value of do_move_page_to_node_array()
so that it does not return a positive value. This fixes the problem.
Signed-off-by: Joonsoo Kim js1...@gmail.com
2012/7/17 Christoph Lameter c...@linux.com:
On Tue, 17 Jul 2012, Joonsoo Kim wrote:
migrate_pages() should return the number of pages not migrated, or an error code.
When unmap_and_move() returns -EAGAIN, the outer loop re-executes without
initialising nr_failed. This makes nr_failed over-counted
2012/7/17 Michal Nazarewicz min...@tlen.pl:
Acked-by: Michal Nazarewicz min...@mina86.com
Thanks.
Actually, it makes me wonder if there is any code that uses this
information. If not, it would be best in my opinion to make it return
zero or negative error code, but that would have to be
2012/7/17 Michal Nazarewicz min...@tlen.pl:
Joonsoo Kim js1...@gmail.com writes:
do_migrate_pages() can return the number of pages not migrated.
Because the migrate_pages() syscall returns this value directly,
the migrate_pages() syscall may return the number of pages not migrated.
In the failure case
2012/7/17 Michal Nazarewicz min...@mina86.com:
Joonsoo Kim js1...@gmail.com writes:
migrate_pages() would return a positive value in some failure cases,
so 'ret > 0 ? 0 : ret' may be wrong.
This fixes it and removes one dead statement.
Signed-off-by: Joonsoo Kim js1...@gmail.com
Cc: Michal
() is identical case as migrate_pages()
Signed-off-by: Joonsoo Kim js1...@gmail.com
Cc: Christoph Lameter c...@linux.com
Acked-by: Christoph Lameter c...@linux.com
Acked-by: Michal Nazarewicz min...@mina86.com
diff --git a/mm/migrate.c b/mm/migrate.c
index be26d5c..f495c58 100644
--- a/mm/migrate.c
+++ b
Additionally, correct the comment above do_migrate_pages().
Signed-off-by: Joonsoo Kim js1...@gmail.com
Cc: Sasha Levin levinsasha...@gmail.com
Cc: Christoph Lameter c...@linux.com
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 1d771e4..0732729 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -948,7
The move_pages() syscall may return success in the case that
do_move_page_to_node_array() returns a positive value, which means migration failed.
This patch changes the return value of do_move_page_to_node_array()
so that it does not return a positive value. This fixes the problem.
Signed-off-by: Joonsoo Kim js1...@gmail.com
migrate_pages() would return a positive value in some failure cases,
so 'ret > 0 ? 0 : ret' may be wrong.
This fixes it and removes one dead statement.
Signed-off-by: Joonsoo Kim js1...@gmail.com
Cc: Michal Nazarewicz min...@mina86.com
Cc: Marek Szyprowski m.szyprow...@samsung.com
Cc: Minchan Kim minc
2012/7/17 Christoph Lameter c...@linux.com:
On Tue, 17 Jul 2012, Joonsoo Kim wrote:
@@ -1382,6 +1382,8 @@ SYSCALL_DEFINE4(migrate_pages, pid_t, pid, unsigned
long, maxnode,
err = do_migrate_pages(mm, old, new,
capable(CAP_SYS_NICE) ? MPOL_MF_MOVE_ALL : MPOL_MF_MOVE
2012/7/17 Michal Nazarewicz min...@mina86.com:
On Tue, 17 Jul 2012 14:33:34 +0200, Joonsoo Kim js1...@gmail.com wrote:
migrate_pages() would return a positive value in some failure cases,
so 'ret > 0 ? 0 : ret' may be wrong.
This fixes it and removes one dead statement.
How about the following
++tries == 5 never being checked. This in turn means that at the end
of the function, ret may have a positive value, which should be treated
as an error.
This patch changes __alloc_contig_migrate_range() so that the return
statement converts a positive ret value into an -EBUSY error.
Signed-off-by: Joonsoo
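The return-statement conversion described above is a one-liner; a minimal sketch (contig_migrate_result is a hypothetical name, not the real function, and only models the final return expression):

```c
#define EBUSY 16

/* sketch of the fix: migrate_pages() can leave a positive count of pages
 * it failed to migrate, but callers of __alloc_contig_migrate_range()
 * expect 0 or a negative errno, so a leftover positive value is
 * converted into -EBUSY */
static int contig_migrate_result(int ret)
{
	return (ret > 0) ? -EBUSY : ret;
}
```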
with MIGRATE_SYNC.
So change it.
Additionally, there is a mismatch between the type of an argument and the function
declaration for migrate_pages(). So fix this simple case, too.
Signed-off-by: Joonsoo Kim js1...@gmail.com
Cc: Christoph Lameter c...@linux.com
Cc: Mel Gorman mgor...@suse.de
diff --git a/mm
On Tue, Mar 26, 2013 at 03:01:34PM +0900, Joonsoo Kim wrote:
Commit 88b8dac0 makes load_balance() consider other cpus in its group.
But there are some missing parts for this feature to work properly.
This patchset corrects these things and makes load_balance() robust.
Others are related
Hello, Oskar.
On Thu, Apr 04, 2013 at 02:51:26PM +0200, Oskar Andero wrote:
From: Toby Collett toby.coll...@sonymobile.com
The symbol lookup can take a long time and kprobes is
initialised very early in boot, so delay symbol lookup
until the blacklist is first used.
Cc: Masami Hiramatsu
On Thu, Apr 04, 2013 at 01:53:25PM +, Christoph Lameter wrote:
On Thu, 4 Apr 2013, Joonsoo Kim wrote:
Pekka alreay applied it.
Do we need update?
Well I thought the passing of the count via lru.next would be something
worthwhile to pick up.
Hello, Preeti.
On Thu, Apr 04, 2013 at 12:18:32PM +0530, Preeti U Murthy wrote:
Hi Joonsoo,
On 04/04/2013 06:12 AM, Joonsoo Kim wrote:
Hello, Preeti.
So, how about extending the sched_period with rq->nr_running, instead of
cfs_rq->nr_running? It is my quick thought and I think that we
to count these pages for this task's reclaimed_slab.
Cc: Christoph Lameter c...@linux-foundation.org
Cc: Pekka Enberg penb...@kernel.org
Cc: Matt Mackall m...@selenic.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/slub.c b/mm/slub.c
index 4aec537..16fd2d5 100644
--- a/mm/slub.c
to count these pages for this task's reclaimed_slab.
Cc: Christoph Lameter c...@linux-foundation.org
Cc: Pekka Enberg penb...@kernel.org
Cc: Matt Mackall m...@selenic.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/slab.c b/mm/slab.c
index 856e4a1..4d94bcb 100644
--- a/mm/slab.c
In shrink_(in)active_list(), we can fail to put pages back into the lru, and these pages
are reclaimed accidentally. Currently, these pages are not counted
in sc->nr_reclaimed, but with this information, we can stop reclaiming
earlier, and so reduce the overhead of reclaim.
Signed-off-by: Joonsoo Kim iamjoonsoo
Hello, Mel.
Sorry for too late question.
On Sun, Mar 17, 2013 at 01:04:14PM +, Mel Gorman wrote:
If kswapd fails to make progress but continues to shrink slab then it'll
either discard all of slab or consume CPU uselessly scanning shrinkers.
This patch causes kswapd to only call the
Hello, Mel.
On Tue, Apr 09, 2013 at 12:13:59PM +0100, Mel Gorman wrote:
On Tue, Apr 09, 2013 at 03:53:25PM +0900, Joonsoo Kim wrote:
Hello, Mel.
Sorry for too late question.
No need to apologise at all.
On Sun, Mar 17, 2013 at 01:04:14PM +, Mel Gorman wrote:
If kswapd fails
Hello, Dave.
On Wed, Apr 10, 2013 at 11:07:34AM +1000, Dave Chinner wrote:
On Tue, Apr 09, 2013 at 12:13:59PM +0100, Mel Gorman wrote:
On Tue, Apr 09, 2013 at 03:53:25PM +0900, Joonsoo Kim wrote:
I think that outside of zone loop is better place to run shrink_slab(),
because
Hello, Christoph.
On Tue, Apr 09, 2013 at 02:28:06PM +, Christoph Lameter wrote:
On Tue, 9 Apr 2013, Joonsoo Kim wrote:
Currently, pages freed via rcu are not counted in reclaimed_slab, because
they are freed in rcu context, not the current task's context. But this free is
initiated
Hello, Minchan.
On Tue, Apr 09, 2013 at 02:55:14PM +0900, Minchan Kim wrote:
Hello Joonsoo,
On Tue, Apr 09, 2013 at 10:21:16AM +0900, Joonsoo Kim wrote:
In shrink_(in)active_list(), we can fail to put pages back into the lru, and these pages
are reclaimed accidentally. Currently, these pages
On Wed, Apr 10, 2013 at 09:31:10AM +0300, Pekka Enberg wrote:
On Mon, Apr 8, 2013 at 3:32 PM, Steven Rostedt rost...@goodmis.org wrote:
Index: linux/mm/slub.c
===
--- linux.orig/mm/slub.c2013-03-28 12:14:26.958358688
2013/4/10 Christoph Lameter c...@linux.com:
On Wed, 10 Apr 2013, Joonsoo Kim wrote:
Hello, Christoph.
On Tue, Apr 09, 2013 at 02:28:06PM +, Christoph Lameter wrote:
On Tue, 9 Apr 2013, Joonsoo Kim wrote:
Currently, pages freed via rcu are not counted in reclaimed_slab, because
Hello, Preeti.
On Fri, Mar 29, 2013 at 12:42:53PM +0530, Preeti U Murthy wrote:
Hi Joonsoo,
On 03/28/2013 01:28 PM, Joonsoo Kim wrote:
Following up the upper se in sched_slice() should not be done,
because sched_slice() is used for checking whether resched is needed
within *this* cfs_rq
Hello Preeti.
On Fri, Mar 29, 2013 at 05:05:37PM +0530, Preeti U Murthy wrote:
Hi Joonsoo
On 03/28/2013 01:28 PM, Joonsoo Kim wrote:
sched_slice() computes the ideal runtime slice. If there are many tasks
in the cfs_rq, the period for this cfs_rq is extended to guarantee that each task
has time
Hello, Peter.
On Fri, Mar 29, 2013 at 12:45:14PM +0100, Peter Zijlstra wrote:
On Thu, 2013-03-28 at 16:58 +0900, Joonsoo Kim wrote:
There is not enough reason to place this checking at
update_sg_lb_stats(),
except saving one iteration for sched_group_cpus. But with this
change,
we can
On Fri, Mar 29, 2013 at 12:58:26PM +0100, Peter Zijlstra wrote:
On Thu, 2013-03-28 at 16:58 +0900, Joonsoo Kim wrote:
+static int should_we_balance(struct lb_env *env)
+{
+ struct sched_group *sg = env->sd->groups;
+ int cpu, balance_cpu = -1
...@hitachi.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
I forgot to add lkml.
Sorry for the noise.
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index e35be53..5e90092 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -101,6 +101,7 @@ static struct kprobe_blackpoint
Hello, Russell.
On Mon, Mar 25, 2013 at 09:48:16AM +, Russell King - ARM Linux wrote:
On Mon, Mar 25, 2013 at 01:11:13PM +0900, Joonsoo Kim wrote:
nobootmem uses max_low_pfn for computing the boundary in free_all_bootmem(),
so we need a proper value in max_low_pfn.
But, there is some
Hello, Christoph.
On Mon, Apr 01, 2013 at 03:33:23PM +, Christoph Lameter wrote:
Subject: slub: Fix object counts in acquire_slab V2
It seems that we were overallocating objects from the slab queues
since get_partial_node() assumed that page->inuse was undisturbed by
acquire_slab(). Save
Hello, Christoph.
On Mon, Apr 01, 2013 at 03:32:43PM +, Christoph Lameter wrote:
On Thu, 28 Mar 2013, Paul Gortmaker wrote:
Index: linux/init/Kconfig
===
--- linux.orig/init/Kconfig 2013-03-28 12:14:26.958358688
Hello, Preeti.
On Mon, Apr 01, 2013 at 12:15:50PM +0530, Preeti U Murthy wrote:
Hi Joonsoo,
On 04/01/2013 10:39 AM, Joonsoo Kim wrote:
Hello Preeti.
So we should limit this possible weird situation.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/kernel/sched/fair.c
Hello, Preeti.
On Mon, Apr 01, 2013 at 12:36:52PM +0530, Preeti U Murthy wrote:
Hi Joonsoo,
On 04/01/2013 09:38 AM, Joonsoo Kim wrote:
Hello, Preeti.
Ideally the children's cpu share must add up to the parent's share.
I don't think so.
We should schedule out the parent
Hello, Preeti.
On Tue, Apr 02, 2013 at 10:25:23AM +0530, Preeti U Murthy wrote:
Hi Joonsoo,
On 04/02/2013 07:55 AM, Joonsoo Kim wrote:
Hello, Preeti.
On Mon, Apr 01, 2013 at 12:36:52PM +0530, Preeti U Murthy wrote:
Hi Joonsoo,
On 04/01/2013 09:38 AM, Joonsoo Kim wrote:
Hello
Hello, Mike.
On Tue, Apr 02, 2013 at 04:35:26AM +0200, Mike Galbraith wrote:
On Tue, 2013-04-02 at 11:25 +0900, Joonsoo Kim wrote:
Hello, Preeti.
On Mon, Apr 01, 2013 at 12:36:52PM +0530, Preeti U Murthy wrote:
Hi Joonsoo,
On 04/01/2013 09:38 AM, Joonsoo Kim wrote:
Hello
Hello, Peter.
On Tue, Apr 02, 2013 at 10:10:06AM +0200, Peter Zijlstra wrote:
On Thu, 2013-03-28 at 16:58 +0900, Joonsoo Kim wrote:
Now checking that this cpu is appropriate to balance is embedded into
update_sg_lb_stats() and this checking has no direct relationship to
this
function
Hello, Preeti.
On Tue, Apr 02, 2013 at 11:02:43PM +0530, Preeti U Murthy wrote:
Hi Joonsoo,
I think that it is a real problem that sysctl_sched_min_granularity is not
guaranteed for each task.
Instead of this patch, how about considering a lower bound?
if (slice
Hello, Peter.
On Tue, Apr 02, 2013 at 12:29:42PM +0200, Peter Zijlstra wrote:
On Tue, 2013-04-02 at 12:00 +0200, Peter Zijlstra wrote:
On Tue, 2013-04-02 at 18:50 +0900, Joonsoo Kim wrote:
It seems that there is some misunderstanding about this patch.
In this patch, we don't iterate
Hello, Christoph.
On Tue, Apr 02, 2013 at 07:25:20PM +, Christoph Lameter wrote:
On Tue, 2 Apr 2013, Joonsoo Kim wrote:
We need one more fix for correctness.
When 'available' is assigned by put_cpu_partial(), it doesn't count the cpu slab's
objects.
Please refer to my old patch