Hello, Mel.
On Mon, Jul 20, 2015 at 09:00:18AM +0100, Mel Gorman wrote:
> From: Mel Gorman
>
> High-order watermark checking exists for two reasons -- kswapd high-order
> awareness and protection for high-order atomic requests. Historically we
> depended on MIGRATE_RESERVE to preserve min_free_
er occupies 150 MB memory.
Workload: hackbench -g 20 -l 1000
Average result by 10 runs (Base vs Patched)
elapsed_time(s): 4.3468 vs 2.9838
compact_stall: 461.7 vs 73.6
pgmigrate_success: 28315.9 vs 7256.1
Signed-off-by: Joonsoo Kim
---
mm/slub.c | 2 ++
1 file changed, 2 insertions(+)
diff
On Thu, Jul 23, 2015 at 02:21:29PM -0700, David Rientjes wrote:
> On Thu, 23 Jul 2015, Vlastimil Babka wrote:
>
> > > When a khugepaged allocation fails for a node, it could easily kick off
> > > background compaction on that node and revisit the range later, very
> > > similar to how we can kic
On Thu, Jul 23, 2015 at 01:58:20PM -0700, David Rientjes wrote:
> On Thu, 23 Jul 2015, Joonsoo Kim wrote:
>
> > > The slub allocator does try to allocate its high-order memory with
> > > __GFP_WAIT before falling back to lower orders if possible. I would
> > >
Hello,
On Thu, Jul 09, 2015 at 02:53:27PM -0700, David Rientjes wrote:
> The slub allocator does try to allocate its high-order memory with
> __GFP_WAIT before falling back to lower orders if possible. I would think
> that this would be the greatest sign of on-demand memory compaction being
On Tue, Jul 21, 2015 at 11:27:54AM +0200, Vlastimil Babka wrote:
> On 07/08/2015 10:24 AM, Joonsoo Kim wrote:
> >On Fri, Jun 26, 2015 at 11:22:41AM +0100, Mel Gorman wrote:
> >>On Fri, Jun 26, 2015 at 11:07:47AM +0900, Joonsoo Kim wrote:
> >>
> >>The whole re
; To prevent further confusion, rename the functions to
> get/set_pcppage_migratetype() and expand their description. Since all the
> users are now in mm/page_alloc.c, move the functions there from the shared
> header.
>
> Signed-off-by: Vlastimil Babka
> Acked-by: David Rientj
tch from 2007) to catch pages on MIGRATE_ISOLATE pcplists.
> However, pcplists don't contain MIGRATE_ISOLATE freepages nowadays, those are
> freed directly to free lists, so the check is obsolete. Remove it as well.
>
> Signed-off-by: Vlastimil Babka
Acked-by: Joonsoo Kim
Thanks.
Hello, all.
On Mon, Jul 20, 2015 at 08:54:13PM +0900, Minchan Kim wrote:
> On Mon, Jul 20, 2015 at 01:27:55PM +0200, Vlastimil Babka wrote:
> > On 07/16/2015 02:06 AM, Minchan Kim wrote:
> > >On Wed, Jul 15, 2015 at 03:33:59PM +0900, Joonsoo Kim wrote:
> > >
On Thu, Jul 16, 2015 at 08:53:35AM +0900, Minchan Kim wrote:
> On Wed, Jul 15, 2015 at 03:33:58PM +0900, Joonsoo Kim wrote:
> > When I tested my new patches, I found that page pointer which is used
> > for setting page_owner information is changed. This is because page
> > p
[ 6175.086339] [] ? syscall_trace_leave+0xa5/0x120
[ 6175.087389] [] system_call_fastpath+0x16/0x75
This patch fixes this error by moving up set_page_owner().
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm
atch fixes it by setting correct
information.
Without this patch, after the kernel build workload is finished, the number
of mixed pageblocks is 112 among roughly 210 movable pageblocks.
But, with this fix, the output shows just 57 mixed pageblocks.
Signed-off-by: Joonsoo Kim
---
include/linux/page_ow
In CMA, 1 bit in the bitmap represents 1 << order_per_bit pages, so the
size of the bitmap is cma->count >> order_per_bit rather than
just cma->count. This patch fixes it.
Signed-off-by: Joonsoo Kim
---
mm/cma_debug.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git
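For context, the sizing rule described in that patch can be sketched in plain C; the helper below is illustrative and not the exact kernel function signature:

```c
#include <assert.h>

/*
 * Sketch of the bitmap sizing rule: one bit covers
 * (1 << order_per_bit) pages, so a region of `count` pages needs
 * count >> order_per_bit bits, not `count` bits.
 */
static unsigned long cma_bitmap_bits(unsigned long count,
                                     unsigned int order_per_bit)
{
        return count >> order_per_bit;
}
```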
r
CMA region.
Signed-off-by: Joonsoo Kim
---
mm/cma_debug.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/cma_debug.c b/mm/cma_debug.c
index 7621ee3..22190a7 100644
--- a/mm/cma_debug.c
+++ b/mm/cma_debug.c
@@ -170,10 +170,10 @@ static void cma_debugfs_add_one(struc
On Fri, Jun 26, 2015 at 11:22:41AM +0100, Mel Gorman wrote:
> On Fri, Jun 26, 2015 at 11:07:47AM +0900, Joonsoo Kim wrote:
> > >> > The long-term success rate of fragmentation avoidance depends on
> > >> > minimising the number of UNMOVABLE allocation reque
2015-06-26 3:56 GMT+09:00 Vlastimil Babka :
> On 25.6.2015 20:14, Joonsoo Kim wrote:
>>> The long-term success rate of fragmentation avoidance depends on
>>> > minimising the number of UNMOVABLE allocation requests that use a
>>> > pageblock belonging to anoth
2015-06-26 3:41 GMT+09:00 Mel Gorman :
> On Fri, Jun 26, 2015 at 03:14:39AM +0900, Joonsoo Kim wrote:
>> > It could though. Reclaim/compaction is entered for orders higher than
>> > PAGE_ALLOC_COSTLY_ORDER and when scan priority is sufficiently high.
>> > That c
2015-06-26 2:25 GMT+09:00 Mel Gorman :
> On Fri, Jun 26, 2015 at 02:11:17AM +0900, Joonsoo Kim wrote:
>> > Global state is required because there can be parallel compaction
>> > attempts. The global state requires locking to avoid two parallel
>> > compaction attempt
2015-06-25 22:35 GMT+09:00 Vlastimil Babka :
> On 06/25/2015 02:45 AM, Joonsoo Kim wrote:
>>
>> Recently, I got a report that Android gets slow due to order-2 page
>> allocation. With some investigation, I found that compaction usually
>> fails and many pages are reclaim
2015-06-25 20:03 GMT+09:00 Mel Gorman :
> On Thu, Jun 25, 2015 at 09:45:11AM +0900, Joonsoo Kim wrote:
>> Recently, I got a report that Android gets slow due to order-2 page
>> allocation. With some investigation, I found that compaction usually
>> fails and many pages are recl
-threshold
Success:    44 44 42 37
Success(N): 94 92 91 80
Compaction gives us almost all possible high-order pages. Overhead is
greatly increased, but a further patch will reduce it
by adjusting the depletion check with this new algorithm.
Sig
freepages on non-movable pageblock wouldn't diminish much and
wouldn't cause much fragmentation.
Signed-off-by: Joonsoo Kim
---
mm/compaction.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index dd2063b..8d1b3b5 100644
't need to worry.
Please see the result of "hogger-frag-movable with free memory variation".
It shows that the patched version solves the limitations of the current
compaction algorithm and almost all possible order-3 candidates can be
allocated regardless of the amount of free memory.
This patchset is b
Signed-off-by: Joonsoo Kim
---
mm/compaction.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/mm/compaction.c b/mm/compaction.c
index 9c5d43c..2d8e211 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -510,6 +510,10 @@ isolate_fail:
if (locked
n scanner limit diminished
according to this depth. It effectively reduces compaction overhead in
this situation.
Signed-off-by: Joonsoo Kim
---
include/linux/mmzone.h | 1 +
mm/compaction.c | 61 --
mm/internal.h | 1 +
3 files changed
d and this threshold is also adjusted to that change.
In this patch, only the state definition is implemented. There is no
action for this new state, so there is no functional change; a following
patch will add some handling for it.
Signed-off-by: Joonsoo Kim
---
include/linux/mmzone.h | 2 +
37856052177090
compact_stall        2195    2157
compact_success       247     225
pgmigrate_success  439739  182366
Success:    43 43
Success(N): 89 90
renamed and tracepoint outputs are changed due to
this removal.
Signed-off-by: Joonsoo Kim
---
include/linux/compaction.h        | 14 +---
include/linux/mmzone.h            |  3 +-
include/trace/events/compaction.h | 30 +++-
mm/compaction.c | 74
Rename the check function and move one outer condition check into this function.
There is no functional change.
Signed-off-by: Joonsoo Kim
---
mm/compaction.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 2d8e211..dd2063b 100644
rt at the beginning of the pageblock, so it is not appropriate
to set the skip bit. This patch fixes this so that the skip bit is updated
only when the whole pageblock is really scanned.
Signed-off-by: Joonsoo Kim
---
mm/compaction.c | 32 ++--
1 file changed, 18 insertions(+
of
skipped pageblock, we don't need to do this check.
Signed-off-by: Joonsoo Kim
---
mm/compaction.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 4397bf7..9c5d43c 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@
2015-06-16 21:33 GMT+09:00 Vlastimil Babka :
> On 06/16/2015 08:10 AM, Joonsoo Kim wrote:
>> On Wed, Jun 10, 2015 at 11:32:34AM +0200, Vlastimil Babka wrote:
>>> The pageblock_skip bitmap and cached scanner pfn's are two mechanisms in
>>> compaction to prevent resc
ifferences in compact_migrate_scanned and
> compact_free_scanned were lost in the noise.
>
> Signed-off-by: Vlastimil Babka
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph Lameter
> Cc: Rik va
nt decreased by at least 15%.
>
> Signed-off-by: Vlastimil Babka
Acked-by: Joonsoo Kim
Thanks.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
stat compact_migrate_scanned count decreased by 15%.
>
> Signed-off-by: Vlastimil Babka
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph Lameter
> Cc: Rik van Riel
> Cc: David Rientjes
> --
; explicitly
> where needed.
>
> Signed-off-by: Vlastimil Babka
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph Lameter
> Cc: Rik van Riel
> Cc: David Rientjes
> ---
> mm/compaction.
) prematurely
> without also considering the condition in isolate_freepages().
>
> Signed-off-by: Vlastimil Babka
Acked-by: Joonsoo Kim
Thanks.
ner.
> The special case in isolate_migratepages() introduced by 1d5bfe1ffb5b is
> removed.
>
> Suggested-by: Joonsoo Kim
> Signed-off-by: Vlastimil Babka
Acked-by: Joonsoo Kim
Thanks.
On Mon, Jun 08, 2015 at 01:55:32PM -0700, Andrew Morton wrote:
> On Fri, 5 Jun 2015 20:11:30 +0900 Sergey Senozhatsky
> wrote:
>
> > zs_destroy_pool()->destroy_handle_cache() invoked from
> > zs_create_pool() can pass a NULL ->handle_cachep pointer
> > to kmem_cache_destroy(), which will derefe
neric implementation for the rest of the objects.
>
> Signed-off-by: Christoph Lameter
> Cc: Jesper Dangaard Brouer
> Cc: Christoph Lameter
> Cc: Pekka Enberg
> Cc: David Rientjes
> Cc: Joonsoo Kim
> Signed-off-by: Andrew Morton
> ---
>
> mm/slub.c | 27 +++
On Tue, May 12, 2015 at 11:01:48AM +0200, Vlastimil Babka wrote:
> On 04/28/2015 09:45 AM, Joonsoo Kim wrote:
> >On Mon, Apr 27, 2015 at 09:29:23AM +0100, Mel Gorman wrote:
> >>On Mon, Apr 27, 2015 at 04:23:41PM +0900, Joonsoo Kim wrote:
> >>>We already have an
On Tue, May 12, 2015 at 10:36:40AM +0200, Vlastimil Babka wrote:
> On 04/27/2015 09:23 AM, Joonsoo Kim wrote:
> >Sometimes we try to get more freepages from buddy list than how much
> >we really need, in order to refill pcp list. This may speed up following
> >allocation req
On Tue, May 12, 2015 at 09:54:51AM +0200, Vlastimil Babka wrote:
> On 05/12/2015 09:51 AM, Vlastimil Babka wrote:
> >> {
> >>struct page *page;
> >>+ bool steal_fallback;
> >>
> >>-retry_reserve:
> >>+retry:
> >>page = __rmqueue_smallest(zone, order, migratetype);
> >>
> >>if (unlik
On Tue, May 12, 2015 at 09:51:56AM +0200, Vlastimil Babka wrote:
> On 04/27/2015 09:23 AM, Joonsoo Kim wrote:
> >When we steal whole pageblock, we don't need to break highest order
> >freepage. Perhaps, there is small order freepage so we can use it.
> >
> >T
On Tue, May 05, 2015 at 11:22:59AM +0800, Hui Zhu wrote:
> Change pfn_present to pfn_valid_within according to the review of Laura.
>
> I got an issue:
> [ 214.294917] Unable to handle kernel NULL pointer dereference at virtual
> address 082a
> [ 214.303013] pgd = cc97
> [ 214.305721] [
On Mon, Apr 27, 2015 at 09:29:23AM +0100, Mel Gorman wrote:
> On Mon, Apr 27, 2015 at 04:23:41PM +0900, Joonsoo Kim wrote:
> > We already have an antifragmentation policy in the page allocator. It works well
> > when system memory is sufficient, but it doesn't work well when sys
On Mon, Apr 27, 2015 at 09:08:50AM +0100, Mel Gorman wrote:
> On Mon, Apr 27, 2015 at 04:23:39PM +0900, Joonsoo Kim wrote:
> > When we steal whole pageblock, we don't need to break highest order
> > freepage. Perhaps, there is small order freepage so we can use it.
> >
Below is the result of this idea.
* After
Number of blocks type (movable)
DMA32: 208.2
Number of mixed blocks (movable)
DMA32: 55.8
The result shows that non-mixed blocks increased by 59% in this case.
Signed-off-by: Joonsoo Kim
---
include/linux/compaction.h | 8 +++
include/linux/gfp.h|
   text    data     bss     dec     hex filename
  37413    1440     624   39477    9a35 mm/page_alloc.o
  37249    1440     624   39313    9991 mm/page_alloc.o
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 40 +++++++++++++++++++++-------------------
1 file changed, 21 insertions(+), 19 deletions(-)
is tainted by other migratetype
allocation.
* After
Number of blocks type (movable)
DMA32: 207
Number of mixed blocks (movable)
DMA32: 111.2
This result shows that non-mixed blocks increased by 38% in this case.
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 10 +++++++---
1 file changed, 7
On Thu, Apr 16, 2015 at 09:39:52AM -0400, Steven Rostedt wrote:
> On Thu, 16 Apr 2015 13:44:44 +0900
> Joonsoo Kim wrote:
>
> > There is a problem that trace events are not properly enabled with
> > boot cmdline. Problem is that if we pass "trace_event=kmem:mm_page_allo
On Fri, Apr 17, 2015 at 09:17:53AM +1000, Dave Chinner wrote:
> On Thu, Apr 16, 2015 at 10:34:13AM -0400, Johannes Weiner wrote:
> > On Thu, Apr 16, 2015 at 12:57:36PM +0900, Joonsoo Kim wrote:
> > > This causes following success rate regression of phase 1,2 on
>
On Thu, Apr 16, 2015 at 10:34:13AM -0400, Johannes Weiner wrote:
> Hi Joonsoo,
>
> On Thu, Apr 16, 2015 at 12:57:36PM +0900, Joonsoo Kim wrote:
> > Hello, Johannes.
> >
> > Ccing Vlastimil, because this patch causes some regression on
> > stress-highalloc test
". This patch adds it.
Signed-off-by: Joonsoo Kim
---
kernel/trace/trace_events.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index db54dda..ce5b194 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/t
Hello, Johannes.
Ccing Vlastimil, because this patch causes some regression on the
stress-highalloc test in mmtests, and he is an expert on compaction
who would be interested in it. :)
On Fri, Nov 28, 2014 at 07:06:37PM +0300, Vladimir Davydov wrote:
> Hi Johannes,
>
> The patch generally looks good t
Hello,
On Wed, Apr 01, 2015 at 04:31:43PM +0300, Stefan Strogin wrote:
> Add trace events for cma_alloc() and cma_release().
>
> The cma_alloc tracepoint is used both for successful and failed allocations,
> in case of allocation failure pfn=-1UL is stored and printed.
>
> Signed-off-by: Stefan
2015-03-24 9:18 GMT+09:00 Namhyung Kim :
> On Tue, Mar 24, 2015 at 02:32:17AM +0900, Joonsoo Kim wrote:
>> 2015-03-23 15:30 GMT+09:00 Namhyung Kim :
>> > The perf kmem command records and analyze kernel memory allocation
>> > only for SLAB objects. This patch implem
2015-03-23 15:30 GMT+09:00 Namhyung Kim :
> The perf kmem command records and analyze kernel memory allocation
> only for SLAB objects. This patch implement a simple page allocator
> analyzer using kmem:mm_page_alloc and kmem:mm_page_free events.
>
> It adds two new options of --slab and --page.
2015-03-23 15:30 GMT+09:00 Namhyung Kim :
> Add new sort keys for page: page, order, mtype, gfp - existing
> 'bytes', 'hit' and 'callsite' sort keys also work for page. Note that
> -s/--sort option should be preceded by either of --slab or --page
> option to determine where the sort keys applies.
Hello, Namhyung.
2015-03-23 15:30 GMT+09:00 Namhyung Kim :
> Hello,
>
> Currently perf kmem command only analyzes SLAB memory allocation. And
> I'd like to introduce page allocation analysis also. Users can use
> --slab and/or --page option to select it. If none of these options
> are used, it
On Wed, Mar 18, 2015 at 03:33:02PM +0530, Aneesh Kumar K.V wrote:
>
> >
> > #ifdef CONFIG_CMA
> > +static void __init adjust_present_page_count(struct page *page, long count)
> > +{
> > + struct zone *zone = page_zone(page);
> > +
> > + zone->present_pages += count;
> > +}
> > +
>
> May be a
2015-03-19 0:21 GMT+09:00 Mark Rutland :
> Hi,
>
>> > do {
>> > tid = this_cpu_read(s->cpu_slab->tid);
>> > c = raw_cpu_ptr(s->cpu_slab);
>> > - } while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid));
>> > + } while (IS_ENABLED(CONFIG_PRE
2015-03-17 18:46 GMT+09:00 Aneesh Kumar K.V :
> Joonsoo Kim writes:
>
>> I passed boot test on x86, ARM32 and ARM64. I did some stress tests
>> on x86 and there is no problem. Feel free to enjoy and please give me
>> a feedback. :)
>
> Tested
>tid with READ_ONCE.
> This ensures that the value is reloaded even when the compiler would
> otherwise assume it could cache the value, and also ensures that the
> load will not be torn.
>
> Signed-off-by: Mark Rutland
> Cc: Andrew Morton
> Cc: Catalin Marinas
> Cc: Ch
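The READ_ONCE semantics discussed above can be mimicked in userspace; this is a sketch of only the property described (a volatile access the compiler cannot cache or hoist out of a loop), not the kernel's full macro:

```c
#include <assert.h>

/*
 * Userspace sketch: the volatile cast forces a fresh load of the
 * value on every call instead of letting the compiler keep it in a
 * register, and a naturally aligned word-sized load is not torn.
 * This is a simplification of the kernel's READ_ONCE, shown only
 * to illustrate the point made in the thread above.
 */
#define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

static unsigned long tid;

static unsigned long read_tid(void)
{
        return READ_ONCE(tid);  /* reloaded on each call */
}
```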
here is no need to repeat the search
> sequence, the allocation job is done.
>
> Signed-off-by: Roman Pen
> Cc: Nick Piggin
> Cc: Andrew Morton
> Cc: Eric Dumazet
> Cc: Joonsoo Kim
> Cc: David Rientjes
> Cc: WANG Chao
> Cc: Fabian Frederick
> Cc: Christoph L
ned-off-by: Roman Pen
> Cc: Nick Piggin
> Cc: Zhang Yanfei
> Cc: Andrew Morton
> Cc: Eric Dumazet
> Cc: Joonsoo Kim
> Cc: David Rientjes
> Cc: WANG Chao
> Cc: Fabian Frederick
> Cc: Christoph Lameter
> Cc: Gioh Kim
> Cc: Rob Jones
> Cc: linux...@kvack.o
> get benefit and should iterate whole list to find suitable free block,
>> > because this free block is put to the tail of the list. Am I missing
>> > something?
>>
>> You are missing the fact that we occupy blocks in 2^n.
>> So in your example 4 page slots wi
2015-03-17 17:22 GMT+09:00 Roman Peniaev :
> On Tue, Mar 17, 2015 at 4:29 PM, Joonsoo Kim wrote:
>> On Tue, Mar 17, 2015 at 02:12:14PM +0900, Roman Peniaev wrote:
>>> On Tue, Mar 17, 2015 at 1:56 PM, Joonsoo Kim wrote:
>>> > On Fri, Mar 13, 2015 at 09:12:55PM +09
On Tue, Mar 17, 2015 at 02:12:14PM +0900, Roman Peniaev wrote:
> On Tue, Mar 17, 2015 at 1:56 PM, Joonsoo Kim wrote:
> > On Fri, Mar 13, 2015 at 09:12:55PM +0900, Roman Pen wrote:
> >> If suitable block can't be found, new block is allocated and put into a
> >>
On Fri, Mar 13, 2015 at 09:12:55PM +0900, Roman Pen wrote:
> If suitable block can't be found, new block is allocated and put into a head
> of a free list, so on next iteration this new block will be found first.
>
> That's bad, because old blocks in a free list will not get a chance to be
> full
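The complaint above, that head insertion lets the new block shadow older partially used blocks, can be sketched with a toy singly linked free list; the types and names are illustrative, not the kernel's vmap_block code:

```c
#include <stddef.h>

/*
 * Toy free list illustrating the point: pushing a brand-new block on
 * the head means searches always hit it first, and older, partially
 * used blocks never get a chance to fill up. Appending at the tail
 * lets old blocks be exhausted first.
 */
struct block {
        struct block *next;
        int free_slots;
};

/* Append at the tail instead of pushing on the head. */
static void add_tail(struct block **head, struct block *nb)
{
        struct block **pp = head;

        while (*pp)
                pp = &(*pp)->next;
        nb->next = NULL;
        *pp = nb;
}
```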
; Cc: Naoya Horiguchi
> Cc: Mel Gorman
> Cc: Rik van Riel
> Cc: Yasuaki Ishimatsu
> Cc: Zhang Yanfei
> Cc: Xishi Qiu
> Cc: Vladimir Davydov
> Cc: Joonsoo Kim
> Cc: Gioh Kim
> Cc: Michal Nazarewicz
> Cc: Marek Szyprowski
> Cc: Vlastimil Babka
> Signed-off-b
On Mon, Mar 16, 2015 at 09:54:18PM -0400, Sasha Levin wrote:
> On 03/16/2015 09:43 PM, Joonsoo Kim wrote:
> > On Mon, Mar 16, 2015 at 07:06:55PM +0300, Stefan Strogin wrote:
> >> > Hi all.
> >> >
> >> > Here is the fourth version of a patch set
On Mon, Mar 16, 2015 at 07:06:55PM +0300, Stefan Strogin wrote:
> Hi all.
>
> Here is the fourth version of a patch set that adds some debugging facility
> for
> CMA.
>
> This patch set is based on next-20150316.
> It is also available on git:
> git://github.com/stefanstrogin/linux -b cmainfo-v4
Hello,
On Fri, Mar 13, 2015 at 03:47:12PM +, Mark Rutland wrote:
> Commit 9aabf810a67cd97e ("mm/slub: optimize alloc/free fastpath by
> removing preemption on/off") introduced an occasional hang for kernels
> built with CONFIG_PREEMPT && !CONFIG_SMP.
>
> The problem is the following loop the
On Thu, Mar 05, 2015 at 06:48:50PM +0100, Vlastimil Babka wrote:
> On 03/05/2015 05:53 PM, Vlastimil Babka wrote:
> > On 02/12/2015 08:32 AM, Joonsoo Kim wrote:
> >>
> >> 1) Break non-overlapped zone assumption
> >> CMA regions could be spread to all m
On Tue, Mar 03, 2015 at 01:58:46PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim writes:
>
> > Until now, reserved pages for CMA are managed altogether with normal
> > page in the same zone. This approach has numerous problems and fixing
> > them isn't easy. To fi
On Tue, Feb 17, 2015 at 09:33:27AM +0100, Michal Hocko wrote:
> On Tue 17-02-15 14:24:59, Joonsoo Kim wrote:
> > It can be possible to return NULL in parent_mem_cgroup()
> > if use_hierarchy is 0.
>
> This alone is not sufficient because the low limit is present only in
>
On Wed, Feb 18, 2015 at 04:04:05PM -0800, Andrew Morton wrote:
> On Thu, 12 Feb 2015 16:15:05 +0900 Joonsoo Kim wrote:
>
> > Compaction has anti fragmentation algorithm. It is that freepage
> > should be more than pageblock order to finish the compaction if we don't
&
On Tue, Feb 17, 2015 at 10:46:04AM +0100, Vlastimil Babka wrote:
> On 02/12/2015 08:15 AM, Joonsoo Kim wrote:
> >Compaction has anti fragmentation algorithm. It is that freepage
> >should be more than pageblock order to finish the compaction if we don't
> >find
max_used_pages is defined as atomic_long_t, so we need to use
unsigned long to hold its temporary value rather than int,
which is smaller than unsigned long on 64-bit systems.
Signed-off-by: Joonsoo Kim
---
drivers/block/zram/zram_drv.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
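A minimal sketch of the truncation that patch avoids, assuming a 64-bit system where long is 64 bits and int is 32; the names are illustrative, not the zram driver's exact code:

```c
/*
 * Sketch of the bug described above: on 64-bit, copying an
 * atomic_long_t value (modeled here as plain long) into an int
 * silently drops the high bits, so the temporary must be
 * unsigned long.
 */
static unsigned long read_max_used(long max_used_pages)
{
        unsigned long cur_max = (unsigned long)max_used_pages; /* not int */

        return cur_max;
}
```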
00 00 72 30 48 39 f7 74 1a
[ 33.608008] RIP [] mem_cgroup_low+0x40/0x90
[ 33.608008] RSP
[ 33.608008] CR2: 00b0
[ 33.608008] BUG: unable to handle kernel [ 33.653499] ---[ end trace
e264a32717ffda51 ]---
Signed-off-by: Joonsoo Kim
---
mm/memcontrol.c | 2 ++
1 file chan
On Fri, Feb 13, 2015 at 09:49:24AM -0600, Christoph Lameter wrote:
> On Fri, 13 Feb 2015, Joonsoo Kim wrote:
>
> > > + *p++ = freelist;
> > > + freelist = get_freepointer(s, freelist);
> > > + allocated++;
>
On Sat, Feb 14, 2015 at 02:02:16PM +0900, Gioh Kim wrote:
>
>
> 2015-02-12 오후 4:32에 Joonsoo Kim 이(가) 쓴 글:
> > Until now, reserved pages for CMA are managed altogether with normal
> > page in the same zone. This approach has numerous problems and fixing
> > them isn
On Fri, Feb 13, 2015 at 03:40:08PM +0900, Gioh Kim wrote:
>
> > diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> > index c8778f7..883e78d 100644
> > --- a/mm/page_isolation.c
> > +++ b/mm/page_isolation.c
> > @@ -210,8 +210,8 @@ int undo_isolate_page_range(unsigned long start_pfn,
> > uns
uous pages but each page is refcounted.
Fixes: dbc8358c7237 ("mm/nommu: use alloc_pages_exact() rather than
its own implementation").
Reported-by: Maxime Coquelin
Tested-by: Maxime Coquelin
Signed-off-by: Joonsoo Kim
---
mm/nommu.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
On Fri, Feb 13, 2015 at 09:47:59AM -0600, Christoph Lameter wrote:
> On Fri, 13 Feb 2015, Joonsoo Kim wrote:
> >
> > I also think that this implementation is slub-specific. For example,
> > in slab case, it is always better to access local cpu cache first than
> > page a
igh:528kB
> active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB
> unevictable:492kB isolated(anon):0ks
> [ 39.09] lowmem_reserve[]: 0 0
> [ 39.09] Normal: 23*4kB (U) 22*8kB (U) 24*16kB (U) 23*32kB (U)
> 23*64kB (U) 23*128kB (U) 1*256kB (U) 0*512kB 0*1024kB 0*
On Fri, Feb 13, 2015 at 01:15:41AM +0300, Stefan Strogin wrote:
> static int cma_debugfs_get(void *data, u64 *val)
> {
> unsigned long *p = data;
> @@ -125,6 +221,52 @@ static int cma_alloc_write(void *data, u64 val)
>
> DEFINE_SIMPLE_ATTRIBUTE(cma_alloc_fops, NULL, cma_alloc_write, "%ll
On Fri, Feb 13, 2015 at 01:15:42AM +0300, Stefan Strogin wrote:
> From: Dmitry Safonov
>
> Here are two functions that provide interface to compute/get used size
> and size of biggest free chunk in cma region.
> Add that information to debugfs.
>
> Signed-off-by: Dmitry Safonov
> Signed-off-by:
On Fri, Feb 13, 2015 at 01:15:41AM +0300, Stefan Strogin wrote:
> /sys/kernel/debug/cma/cma-/buffers contains a list of currently allocated
> CMA buffers for CMA region N when CONFIG_CMA_DEBUGFS is enabled.
>
> Format is:
>
> - ( kB), allocated by ()
>
>
> Signed-off-by: Stefan Strogin
> -
On Fri, Feb 13, 2015 at 01:15:40AM +0300, Stefan Strogin wrote:
> Hi all.
>
> Sorry for the long delay. Here is the second attempt to add some facility
> for debugging CMA (the first one was "mm: cma: add /proc/cmainfo" [1]).
>
> This patch set is based on v3.19 and Sasha Levin's patch set
> "mm:
_lock);
> + hlist_add_head(&mem->node, &cma->mem_head);
> + spin_unlock(&cma->mem_head_lock);
> +}
> +
> +static int cma_alloc_mem(struct cma *cma, int count)
> +{
> + struct cma_mem *mem;
> + struct page *p;
> +
> + mem =
On Thu, Feb 12, 2015 at 05:26:48PM -0500, Sasha Levin wrote:
> Provides a userspace interface to trigger a CMA release.
>
> Usage:
>
> echo [pages] > free
>
> This would provide testing/fuzzing access to the CMA release paths.
>
> Signed-off-by: Sasha Levin
>
> Signed-off-by: Sasha Levin
Acked-by: Joonsoo Kim
Thanks.
On Tue, Feb 10, 2015 at 01:48:06PM -0600, Christoph Lameter wrote:
> The major portions are there but there is no support yet for
> directly allocating per cpu objects. There could also be more
> sophisticated code to exploit the batch freeing.
>
> Signed-off-by: Christoph Lameter
>
> Index: lin
On Wed, Feb 11, 2015 at 12:18:07PM -0800, David Rientjes wrote:
> On Wed, 11 Feb 2015, Christoph Lameter wrote:
>
> > > This patch is referencing functions that don't exist and can do so since
> > > it's not compiled, but I think this belongs in the next patch. I also
> > > think that this partic
this situation, this patch adds some code to consider zone
overlapping before adding ZONE_CMA.
pagetypeinfo_showblockcount_print() prints a zone's statistics, so it
should consider zone overlap.
Signed-off-by: Joonsoo Kim
---
mm/vmstat.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/vms
this situation, this patch adds some code to consider zone
overlapping before adding ZONE_CMA.
setup_zone_migrate_reserve() reserves some pages for a specific zone, so
it should consider zone overlap.
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
55
[3] https://lkml.org/lkml/2014/10/15/623
[4] https://lkml.org/lkml/2014/5/30/320
Joonsoo Kim (16):
mm/page_alloc: correct highmem memory statistics
mm/writeback: correct dirty page calculation for highmem
mm/highmem: make nr_free_highpages() handles all highmem zones by
itself
mm/