decisions at those places.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
include/linux/mmzone.h | 11 +
mm/page_alloc.c| 62 +++-
2 files changed, 72 insertions(+), 1 deletion(-)
diff --git a/include/linux
fls() indexes the bits starting with 1, i.e., from 1 to BITS_PER_LONG,
whereas __fls() uses a zero-based indexing scheme (0 to BITS_PER_LONG - 1).
Add comments to document this important difference.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
arch/x86/include/asm/bitops.h
.
Increasing region number -->
Direction of allocation -->        <-- Direction of reclaim/compaction
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 154 +--
1
the
boundaries of zone memory regions and counters to track the number of free
pageblocks within each region.
Also, fix up the references to the freelist's list_head inside struct free_area.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
include/linux/mmzone.h | 17
of the buddy page and use
it while merging the buddies.
Also, set the freepage migratetype of the buddy to the new one.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c |6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b
that, and use it to keep the fastpath of page allocation almost as
fast as it would have been without memory regions.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
include/linux/mm.h | 14 +++
include/linux/mmzone.h |6 +
mm/page_alloc.c| 62
on tracking this info accurately,
as outlined above).
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c |7 +++
1 file changed, 7 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 398b62c..b4b1275 100644
--- a/mm/page_alloc.c
+++ b/mm
The page allocator can make smarter decisions to influence memory power
management, if we track the per-region memory allocations closely.
So add the necessary support to accurately track allocations on a per-region
basis.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
memory allocation
decisions at the page-allocator level and understand the extent to
which they help in consolidation.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/vmstat.c | 86 ++-
1 file changed, 84 insertions
memory region accurately, we
should be able to observe the new page allocator behavior to a reasonable
degree of accuracy.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/vmstat.c | 34 ++
1 file changed, 30 insertions(+), 4 deletions
the sorting.
One of the other main advantages of this O(log n) design is that it can
support large amounts of RAM (up to 2 TB and beyond) quite effortlessly.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
include/linux/mmzone.h |2 +
mm/page_alloc.c| 144
to
satisfy that allocation request.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 44 ++--
1 file changed, 34 insertions(+), 10 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6e711b9..0cc2a3e 100644
-free statistics properly.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 46 ++
1 file changed, 46 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 905360c..b66ddff 100644
--- a/mm
[diagram: the Kernel (Page) Allocator alongside the Memory Region Allocator]
Since the region allocator is supposed to function as a backend to the
page allocator, we implement it on a per-zone basis (since the page-allocator
is also per-zone).
Signed-off-by: Srivatsa S. Bhat srivatsa.b
freelists in one shot. Add this support, and also
take care to update the nr-free statistics properly.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 55 +++
1 file changed, 55 insertions(+)
diff --git
lower numbered regions while allocating regions to the page allocator.
To do this efficiently, add a bitmap to represent the regions in the region
allocator, and use bitmap operations to manage these regions and to pick the
lowest numbered free region efficiently.
Signed-off-by: Srivatsa S. Bhat
the pages belonging to that region.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 20
1 file changed, 20 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5227ac3..d407caf 100644
--- a/mm/page_alloc.c
+++ b/mm
from the region allocator, the latter picks a
free region and always allocates all the freepages belonging to that entire
region.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 23 +++
1 file changed, 23 insertions(+)
diff --git a/mm
.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 45 ++---
1 file changed, 34 insertions(+), 11 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5b58e7d..78ae8f6 100644
--- a/mm/page_alloc.c
+++ b
.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/vmstat.c |8
1 file changed, 8 insertions(+)
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 924babc..8cb7a10 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -829,6 +829,8 @@ static void frag_show_print(struct seq_file *m
, as a
precursor to benchmarking the performance).
The check to see whether a page given as input to del_from_freelist() indeed
belongs to that freelist is one such very expensive check. Drop it.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c |2 ++
1 file changed
assumptions.)
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 42 +-
1 file changed, 41 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9be946e..b8af5a2 100644
--- a/mm/page_alloc.c
, so that one can
quickly evaluate the benefits of the overall design without getting
bogged down by too many corner cases and constraints. Of course future
implementations will handle more scenarios and will have reduced dependence
on such simplifying assumptions.)
Signed-off-by: Srivatsa S. Bhat
from that particular region.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 40
1 file changed, 24 insertions(+), 16 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3f49ca8..fc530ff 100644
--- a/mm
whether the freepage resides in the region allocator or the buddy freelists.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 31 +++
1 file changed, 31 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a62730b
freepage movement, we first move all the pages of that
region from the region allocator to the MIGRATE_MOVABLE buddy freelist
and then move the requested page(s) from MIGRATE_MOVABLE to the required
migratetype.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c
that upon freeing the pages or during buddy expansion,
the pages are added back to the freelists of the migratetype for which
the pages were originally requested from the region allocator.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c |3 +++
1 file
it fragments the ownership of memory segments.
So never change the ownership of pageblocks during freepage stealing.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 36 ++--
1 file changed, 10 insertions(+), 26 deletions(-)
diff
chances
of avoiding fallbacks to other migratetypes.
So, don't return all free memory regions (in the page allocator) to the
region allocator. Keep at least one region as a cache, for future use.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 16
, since it doesn't
have to keep track of memory in smaller chunks than a memory region.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c |4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e303351
On 08/30/2013 08:57 PM, Dave Hansen wrote:
On 08/30/2013 06:13 AM, Srivatsa S. Bhat wrote:
Overview of Memory Power Management and its implications to the Linux MM
Today, we are increasingly seeing computer systems
On 09/02/2013 11:50 AM, Yasuaki Ishimatsu wrote:
(2013/08/30 22:15), Srivatsa S. Bhat wrote:
Initialize the node's memory-regions structures with the information
about
the region-boundaries, at boot time.
Based-on-patch-by: Ankita Garg gargank...@gmail.com
Signed-off-by: Srivatsa S. Bhat
On 09/03/2013 11:26 AM, Yasuaki Ishimatsu wrote:
(2013/08/30 22:15), Srivatsa S. Bhat wrote:
Given a page, we would like to have an efficient mechanism to find out
the node memory region and the zone memory region to which it belongs.
Since the node is assumed to be divided into equal-sized
On 09/03/2013 12:08 PM, Yasuaki Ishimatsu wrote:
(2013/08/30 22:16), Srivatsa S. Bhat wrote:
Due to the region-wise ordering of the pages in the buddy allocator's
free lists, whenever we want to delete a free pageblock from a free list
(for ex: when moving blocks of pages from one list
can be fixed by using get/put_online_cpus()
instead of your second patch[2].
[1]. https://patchwork.kernel.org/patch/2852463/
[2]. https://patchwork.kernel.org/patch/2852464/
[3]. https://patchwork.kernel.org/patch/2795771/
Regards,
Srivatsa S. Bhat
--
To unsubscribe from this list: send the line
@@ static ssize_t store_scaling_governor(struct cpufreq_policy
*policy,
policy->user_policy.policy = policy->policy;
policy->user_policy.governor = policy->governor;
+out:
+ put_online_cpus();
+
if (ret)
return ret;
else
Regards,
Srivatsa S. Bhat
to see how in-kernel preemption is dealt with, by using
PREEMPT_ACTIVE. That would clarify why there is no bug here.
Regards,
Srivatsa S. Bhat
---
kernel/time/alarmtimer.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/kernel/time/alarmtimer.c b/kernel/time
,
Srivatsa S. Bhat
drivers/cpufreq/cpufreq_conservative.c |2 +-
drivers/cpufreq/cpufreq_governor.c |2 --
drivers/cpufreq/cpufreq_ondemand.c |2 +-
3 files changed, 2 insertions(+), 4 deletions(-)
diff --git
On 09/20/2013 09:49 AM, Viresh Kumar wrote:
On 19 September 2013 23:41, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
But there was no code to set the per-cpu values to -1 to begin with. Since
the per-cpu variable was defined as static, it would have been initialized
to zero. Thus
the cpufreq-driver is registered.. or, is such a
situation possible with cpufreq_disabled()?
Regards,
Srivatsa S. Bhat
---
drivers/cpufreq/cpufreq.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 82ecbe3..db004a8 100644
...
Regards,
Srivatsa S. Bhat
if I comment out this one, I get the same thing in the frambuffer
driver instead, which is at module_init().
I don't have a trace so it's not like I know exactly what happened
before this point,
but the dmesg up to here reads:
Linux version 3.11.0-rc4-00024
On 09/20/2013 10:24 PM, Viresh Kumar wrote:
On 20 September 2013 20:46, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
I think show() and store() also suffer
from a similar fate. So do you think we need to add these checks there as
well?
I'm not sure, since I can't think
registered.
Aha, so exactly what I suspected in my first mail..
Yep, your analysis was perfect right from the beginning :-)
Regards,
Srivatsa S. Bhat
of cpufreq_get().
Otherwise call to lock_policy_rwsem_read() might hit BUG_ON(!policy).
Reported-and-Tested-by: Linus Walleij linus.wall...@linaro.org
Signed-off-by: Viresh Kumar viresh.ku...@linaro.org
---
Reviewed-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
Regards,
Srivatsa S
/spec.htm
Section 5.2.21 Memory Power State Table (MPST)
[7]. Prototype implementation of parsing of ACPI 5.0 MPST tables, by Srinivas
Pandruvada.
https://lkml.org/lkml/2013/4/18/349
Srivatsa S. Bhat (40):
mm: Introduce memory regions data-structure to capture region boundaries
Initialize the node's memory-regions structures with the information about
the region-boundaries, at boot time.
Based-on-patch-by: Ankita Garg gargank...@gmail.com
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
include/linux/mm.h |4
mm/page_alloc.c| 28
-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
include/linux/mmzone.h | 12
1 file changed, 12 insertions(+)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index bd791e4..d3288b0 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -35,6 +35,8
region to
which a given page belongs.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
include/linux/mm.h | 24
include/linux/mmzone.h |7 +++
mm/page_alloc.c| 22 ++
3 files changed, 53 insertions(+)
diff --git
decisions at those places.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
include/linux/mmzone.h | 11 +
mm/page_alloc.c| 62 +++-
2 files changed, 72 insertions(+), 1 deletion(-)
diff --git a/include/linux
the
boundaries of zone memory regions and counters to track the number of free
pageblocks within each region.
Also, fix up the references to the freelist's list_head inside struct free_area.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
include/linux/mmzone.h | 17
.
Increasing region number -->
Direction of allocation -->        <-- Direction of reclaim/compaction
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 154 +--
1
on tracking this info accurately,
as outlined above).
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c |7 +++
1 file changed, 7 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d48eb04..e31daf4 100644
--- a/mm/page_alloc.c
+++ b/mm
of the buddy page and use
it while merging the buddies.
Also, set the freepage migratetype of the buddy to the new one.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c |6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b
that, and use it to keep the fastpath of page allocation almost as
fast as it would have been without memory regions.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
include/linux/mm.h | 14 +++
include/linux/mmzone.h |6 +
mm/page_alloc.c| 62
The page allocator can make smarter decisions to influence memory power
management, if we track the per-region memory allocations closely.
So add the necessary support to accurately track allocations on a per-region
basis.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
fls() indexes the bits starting with 1, i.e., from 1 to BITS_PER_LONG,
whereas __fls() uses a zero-based indexing scheme (0 to BITS_PER_LONG - 1).
Add comments to document this important difference.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
arch/x86/include/asm/bitops.h
the sorting.
One of the other main advantages of this O(log n) design is that it can
support large amounts of RAM (up to 2 TB and beyond) quite effortlessly.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
include/linux/mmzone.h |2 +
mm/page_alloc.c| 142
memory region accurately, we
should be able to observe the new page allocator behavior to a reasonable
degree of accuracy.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/vmstat.c | 34 ++
1 file changed, 30 insertions(+), 4 deletions
[diagram: the Kernel (Page) Allocator alongside the Memory Region Allocator]
Since the region allocator is supposed to function as a backend to the
page allocator, we implement it on a per-zone basis (since the page-allocator
is also per-zone).
Signed-off-by: Srivatsa S. Bhat srivatsa.b
memory allocation
decisions at the page-allocator level and understand the extent to
which they help in consolidation.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/vmstat.c | 86 ++-
1 file changed, 84 insertions
to
satisfy that allocation request.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 44 ++--
1 file changed, 34 insertions(+), 10 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fbaa2dc..dc02a80 100644
freelists in one shot. Add this support, and also
take care to update the nr-free statistics properly.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 55 +++
1 file changed, 55 insertions(+)
diff --git
-free statistics properly.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 46 ++
1 file changed, 46 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 876c231..c3a2cda 100644
--- a/mm
from the region allocator, the latter picks a
free region and always allocates all the freepages belonging to that entire
region.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 23 +++
1 file changed, 23 insertions(+)
diff --git a/mm
the pages belonging to that region.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 20
1 file changed, 20 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d96746e..c727bba 100644
--- a/mm/page_alloc.c
+++ b/mm
, so that one can
quickly evaluate the benefits of the overall design without getting
bogged down by too many corner cases and constraints. Of course future
implementations will handle more scenarios and will have reduced dependence
on such simplifying assumptions.)
Signed-off-by: Srivatsa S. Bhat
assumptions.)
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 42 +-
1 file changed, 41 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 178f210..d08bc91 100644
--- a/mm/page_alloc.c
.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/vmstat.c |8
1 file changed, 8 insertions(+)
diff --git a/mm/vmstat.c b/mm/vmstat.c
index bb44d30..4dc103e 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -868,6 +868,8 @@ static void frag_show_print(struct seq_file *m
, as a
precursor to benchmarking the performance).
The check to see whether a page given as input to del_from_freelist() indeed
belongs to that freelist is one such very expensive check. Drop it.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c |2 ++
1 file changed
lower numbered regions while allocating regions to the page allocator.
To do this efficiently, add a bitmap to represent the regions in the region
allocator, and use bitmap operations to manage these regions and to pick the
lowest numbered free region efficiently.
Signed-off-by: Srivatsa S. Bhat
.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 45 ++---
1 file changed, 34 insertions(+), 11 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d71d671..ee6c098 100644
--- a/mm/page_alloc.c
+++ b
whether the freepage resides in the region allocator or the buddy freelists.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 31 +++
1 file changed, 31 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ca7b959
from that particular region.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 40
1 file changed, 24 insertions(+), 16 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ac04b45..ed5298c 100644
--- a/mm
freepage movement, we first move all the pages of that
region from the region allocator to the MIGRATE_MOVABLE buddy freelist
and then move the requested page(s) from MIGRATE_MOVABLE to the required
migratetype.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c
that upon freeing the pages or during buddy expansion,
the pages are added back to the freelists of the migratetype for which
the pages were originally requested from the region allocator.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c |3 +++
1 file
, since it doesn't
have to keep track of memory in smaller chunks than a memory region.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c |4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fd32533
chances
of avoiding fallbacks to other migratetypes.
So, don't return all free memory regions (in the page allocator) to the
region allocator. Keep at least one region as a cache, for future use.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 16
it fragments the ownership of memory segments.
So never change the ownership of pageblocks during freepage stealing.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 36 ++--
1 file changed, 10 insertions(+), 26 deletions(-)
diff
on the
fast buddy allocator itself. But we are careful to abort the compaction run
when the buddy allocator starts giving free pages in this region itself or
higher regions (because in that case, if we proceed, it would be defeating
the purpose of the entire effort).
Signed-off-by: Srivatsa S. Bhat
.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/compaction.c | 81 +++
mm/internal.h | 40 +++
mm/page_alloc.c | 51 +--
3 files changed, 134 insertions(+), 38
to the kthread.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/compaction.c | 26 ++
mm/internal.h |3 +++
2 files changed, 29 insertions(+)
diff --git a/mm/compaction.c b/mm/compaction.c
index 0511eae..b56be89 100644
--- a/mm/compaction.c
+++ b/mm
. Apart from them, also perform the same eligibility checks
that the region-evacuator employs, to avoid useless wakeups of kmempowerd.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
mm/page_alloc.c | 38 --
1 file changed, 36 insertions(+), 2
(no bits set, so no more work to do).
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
include/linux/mmzone.h | 10 ++
mm/compaction.c| 80
2 files changed, 90 insertions(+)
diff --git a/include/linux/mmzone.h
extract the definitions related to kthread-work/worker from kthread.h into
a new header-file named kthread-work.h (which doesn't include sched.h), so that
it can be easily included inside mmzone.h when required.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
include/linux
and might not be justifiable. Also,
compacting region 0 would be pointless, since that is the target of all our
compaction runs. Add these checks in the region-evacuator.
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
include/linux/mmzone.h |2 ++
mm/compaction.c
On 09/26/2013 05:10 AM, Andrew Morton wrote:
On Thu, 26 Sep 2013 04:56:32 +0530 Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Experimental Results:
Test setup:
--
x86 Sandybridge dual-socket quad core HT-enabled machine, with 128GB RAM.
Memory
an excellent consolidation ratio,
without hurting performance too much. Going forward, I'll work on getting the
power-measurements as well on the powerpc platform that I have.
Regards,
Srivatsa S. Bhat
this patchset was designed only for content-losing/full-poweroff
type of scenarios).
Regards,
Srivatsa S. Bhat
.
:-)
Regards,
Srivatsa S. Bhat
and whatever-it-is-this-patchset-was-designed-for?
Arjan, are you referring to the fact that Intel/SNB systems can exploit
memory self-refresh only when the entire system goes idle? Is that why this
patchset won't turn out to be that useful on those platforms?
Regards,
Srivatsa S. Bhat
On 09/26/2013 09:28 PM, Arjan van de Ven wrote:
On 9/26/2013 6:42 AM, Srivatsa S. Bhat wrote:
On 09/26/2013 08:29 AM, Andrew Morton wrote:
On Thu, 26 Sep 2013 03:50:16 +0200 Andi Kleen a...@firstfloor.org
wrote:
On Wed, Sep 25, 2013 at 06:21:29PM -0700, Andrew Morton wrote:
On Wed, 25 Sep
, in this
patchset,
everything (all the allocation/reference shaping) is done _within_ the
NUMA boundary, assuming that the memory regions are subsets of a NUMA
node.
Regards,
Srivatsa S. Bhat
On 09/27/2013 03:46 AM, Dave Hansen wrote:
On 09/25/2013 04:14 PM, Srivatsa S. Bhat wrote:
@@ -605,16 +713,22 @@ static inline void __free_one_page(struct page *page,
buddy_idx = __find_buddy_index(combined_idx, order + 1);
higher_buddy = higher_page + (buddy_idx
? :)
hmm, right ;)
This patch fixes the issue for me - the system has been idle for more
than an hour now without any problems (earlier, i used to get the traces
within 5 minutes of idle time).
Thanks a lot for the fix!
Tested-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
Regards
On 07/17/2013 09:19 PM, Srivatsa S. Bhat wrote:
On 07/17/2013 08:57 PM, Toralf Förster wrote:
On 07/16/2013 11:32 PM, Rafael J. Wysocki wrote:
On Tuesday, July 16, 2013 05:15:14 PM Toralf Förster wrote:
[...]
sry - here again with full quote of the email :
I applied patch [1/8] on top
On 07/21/2013 03:10 PM, Toralf Förster wrote:
On 07/21/2013 10:43 AM, Srivatsa S. Bhat wrote:
On 07/17/2013 09:19 PM, Srivatsa S. Bhat wrote:
On 07/17/2013 08:57 PM, Toralf Förster wrote:
On 07/16/2013 11:32 PM, Rafael J. Wysocki wrote:
On Tuesday, July 16, 2013 05:15:14 PM Toralf Förster
Hi,
I'm seeing this on every boot.
Version: Latest mainline (commit ea45ea70b)
Regards,
Srivatsa S. Bhat
---
BUG: unable to handle kernel paging request at 882018552020
IP: [a0366b02] ip6mr_sk_done+0x32/0xb0 [ipv6]
PGD
On 07/22/2013 02:23 AM, Hannes Frederic Sowa wrote:
On Sun, Jul 21, 2013 at 11:58:13PM +0530, Srivatsa S. Bhat wrote:
I'm seeing this on every boot.
Version: Latest mainline (commit ea45ea70b)
Thanks for the report! Could you try the following patch?
That didn't seem to help :-(
Below
On 07/22/2013 02:57 AM, Hannes Frederic Sowa wrote:
On Mon, Jul 22, 2013 at 02:40:35AM +0530, Srivatsa S. Bhat wrote:
On 07/22/2013 02:23 AM, Hannes Frederic Sowa wrote:
On Sun, Jul 21, 2013 at 11:58:13PM +0530, Srivatsa S. Bhat wrote:
I'm seeing this on every boot.
Version: Latest mainline
.
Try applying the two mainline commits that I mentioned in my previous
mail, on top of 3.10.2 and check if it fixes your problem.
Regards,
Srivatsa S. Bhat