Hello,
At 2015/2/2 18:20, Vlastimil Babka wrote:
> On 02/02/2015 08:15 AM, Joonsoo Kim wrote:
>> Compaction has an anti-fragmentation algorithm: a free page should be
>> larger than pageblock order to finish compaction if we don't find any
>> free page on the requested migratetype's buddy list.
Hello Joonsoo,
At 2015/2/2 15:15, Joonsoo Kim wrote:
> This is a preparation step to use the page allocator's anti-fragmentation
> logic in compaction. This patch just separates the fallback freepage
> checking part from the fallback freepage management part, so there is no
> functional change.
>
>
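The finish criterion quoted above — compaction only completes once a suitably large free page exists for the request — can be sketched outside the kernel. Everything below (struct zone_model, the migratetype layout, compact_finished()) is an illustrative model, not the kernel's actual __compact_finished():

```c
#include <stdbool.h>

#define MAX_ORDER 11
#define PAGEBLOCK_ORDER 9
#define NR_MIGRATETYPES 3

/* free_count[order][mt]: free pages of this order and migratetype */
struct zone_model {
    unsigned long free_count[MAX_ORDER][NR_MIGRATETYPES];
};

/*
 * Compaction is considered finished for a request of 'order' pages of
 * 'mt' once a suitable free page exists: either a page of the requested
 * order (or larger) on the requested migratetype's list, or, failing
 * that, a page of at least pageblock order on any list, which can be
 * stolen without causing further fragmentation.
 */
static bool compact_finished(const struct zone_model *z, int order, int mt)
{
    for (int o = order; o < MAX_ORDER; o++) {
        if (z->free_count[o][mt] > 0)
            return true;              /* exact migratetype match */
        if (o >= PAGEBLOCK_ORDER) {   /* fallback: whole pageblock free */
            for (int other = 0; other < NR_MIGRATETYPES; other++)
                if (z->free_count[o][other] > 0)
                    return true;
        }
    }
    return false;
}
```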
At 2015/1/30 20:34, Joonsoo Kim wrote:
> From: Joonsoo
>
> Compaction has an anti-fragmentation algorithm: a free page should be
> larger than pageblock order to finish compaction if we don't find any
> free page on the requested migratetype's buddy list. This is for
> mitigating fragmentation,
At 2015/1/30 20:34, Joonsoo Kim wrote:
> From: Joonsoo
>
> This is a preparation step to use the page allocator's anti-fragmentation
> logic in compaction. This patch just separates the steal decision part
> from the actual steal behaviour part, so there is no functional change.
>
> Signed-off-by: Joonsoo Kim
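The decision/behaviour split described in the changelog can be illustrated with a toy buddy-list model. All names here (find_steal_candidate(), steal_page(), the fallbacks table) are hypothetical stand-ins for the kernel's helpers, kept deliberately small:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical fallback order: for each migratetype, which others to raid. */
static const int fallbacks[3][2] = {
    {1, 2},   /* e.g. MOVABLE falls back to types 1, 2 */
    {0, 2},
    {0, 1},
};

struct area_model {
    unsigned long nr_free[3];   /* free pages per migratetype */
};

/* Decision part: pick a fallback migratetype to steal from, or -1. */
static int find_steal_candidate(const struct area_model *a, int start_mt)
{
    for (size_t i = 0; i < 2; i++) {
        int mt = fallbacks[start_mt][i];
        if (a->nr_free[mt] > 0)
            return mt;
    }
    return -1;
}

/* Behaviour part: actually move one page between the free lists. */
static bool steal_page(struct area_model *a, int start_mt, int victim_mt)
{
    if (victim_mt < 0 || a->nr_free[victim_mt] == 0)
        return false;
    a->nr_free[victim_mt]--;
    a->nr_free[start_mt]++;
    return true;
}
```

Keeping the predicate side-effect free is what lets compaction reuse the same decision without performing the steal.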
At 2015/1/31 16:31, Vlastimil Babka wrote:
> On 01/31/2015 08:49 AM, Zhang Yanfei wrote:
>> Hello,
>>
>> At 2015/1/30 20:34, Joonsoo Kim wrote:
>>
>> Reviewed-by: Zhang Yanfei
>>
>> IMHO, the patch making the free scanner move slower makes both scann
uccess rate would decrease. To prevent this effect, I tested with adding
> pcp drain code on release_freepages(), but it has no good effect.
>
> Anyway, this patch reduces the time wasted isolating unneeded freepages,
> so it seems reasonable.
Reviewed-by: Zhang Yanfei
IMHO, the patch maki
: 28.94
>
> Cc:
> Acked-by: Vlastimil Babka
> Signed-off-by: Joonsoo Kim
Reviewed-by: Zhang Yanfei
> ---
> mm/compaction.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index b68736c..4954e19 100644
>
he memory of the program. The percentage did not increase
> over time.
>
> With this patch, after 5 minutes of waiting khugepaged had
> collapsed 50% of the program's memory back into THPs.
>
> Signed-off-by: Ebru Akagunduz
> Reviewed-by: Rik van Riel
> Acked-by: Vlastimil
Hello
On 2015/1/28 8:27, Andrea Arcangeli wrote:
> On Tue, Jan 27, 2015 at 07:39:13PM +0200, Ebru Akagunduz wrote:
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 817a875..17d6e59 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -2148,17 +2148,18 @@ static int
of page_count() below to trylock_page() (Andrea
Arcangeli)
Changes in v3:
- Add an at-least-one-writable-pte check (Zhang Yanfei)
- Debug page count (Vlastimil Babka, Andrea Arcangeli)
- Increase read-only pte counter if pte is none (Andrea Arcangeli)
I've written down test results:
With the patch
Hello
On 2015/1/25 17:25, Vlastimil Babka wrote:
> On 23.1.2015 20:18, Andrea Arcangeli wrote:
>>> >+if (!pte_write(pteval)) {
>>> >+if (++ro > khugepaged_max_ptes_none)
>>> >+goto out_unmap;
>>> >+}
>> It's true this is maxed out at 511, so there must be
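The check under discussion counts read-only (or absent) ptes while scanning the pmd and aborts the collapse once too many are found, so at least one writable pte must remain. A loose standalone sketch of that counting loop (struct pte_model and max_ptes_none are stand-ins for the kernel's pte helpers and khugepaged_max_ptes_none):

```c
#include <stdbool.h>

#define HPAGE_PMD_NR 512            /* ptes per huge page on x86-64 */

static const int max_ptes_none = 511;

struct pte_model {
    bool present;
    bool writable;
};

/*
 * Scan the 512 ptes that would form one huge page: count read-only and
 * none ptes, and give up when the count exceeds the threshold, so the
 * collapse only proceeds if at least one writable pte exists.
 */
static bool collapse_allowed(const struct pte_model ptes[HPAGE_PMD_NR])
{
    int ro = 0;

    for (int i = 0; i < HPAGE_PMD_NR; i++) {
        if (!ptes[i].present || !ptes[i].writable) {
            if (++ro > max_ptes_none)
                return false;       /* too few writable ptes */
        }
    }
    return true;
}
```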
Hello Minchan,
How are you?
On 2015/1/19 14:55, Minchan Kim wrote:
> Hello,
>
> On Sun, Jan 18, 2015 at 04:32:59PM +0800, Hui Zhu wrote:
>> From: Hui Zhu
>>
>> The original of this patch [1] is part of Joonsoo's CMA patch series.
>> I made a patch [2] to fix the issue of this patch. Joonsoo
r bisection of potential regressions, this patch always uses the
> first zone's pfn as the pivot. That means the free scanner immediately wraps
> to the last pageblock and the operation of scanners is thus unchanged. The
> actual pivot changing is done by the next patch.
>
> Signe
>
> Signed-off-by: Vlastimil Babka
Reviewed-by: Zhang Yanfei
Should the new function be inline?
Thanks.
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph Lameter
> Cc: Rik van Riel
> Cc: Dav
Hello,
On 2015/1/19 18:05, Vlastimil Babka wrote:
> Handling the position where compaction free scanner should restart (stored in
> cc->free_pfn) got more complex with commit e14c720efdd7 ("mm, compaction:
> remember position within pageblock in free pages scanner"). Currently the
> position is
_migratepages() introduced by 1d5bfe1ffb5b is
> removed.
>
> Suggested-by: Joonsoo Kim
> Signed-off-by: Vlastimil Babka
Reviewed-by: Zhang Yanfei
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph Lameter
pageblock and the operation of scanners is thus unchanged. The
actual pivot changing is done by the next patch.
Signed-off-by: Vlastimil Babka vba...@suse.cz
I read through the whole patch, and you can feel free to add:
Acked-by: Zhang Yanfei zhangyan...@cn.fujitsu.com
I agree with you
in
>> the body to majord...@kvack.org. For more info on Linux MM,
>> see: http://www.linux-mm.org/ .
>> Don't email: em...@kvack.org
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body
> +#endif
> +
> +#ifdef CONFIG_SLUB
> +#include
> +#endif
> +
> /*
> * State of the slab allocator.
> *
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index d319502..2088904 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -30,6 +30,1
kmem_cache_size(struct kmem_cache *s)
+{
+ return s->object_size;
+}
+
#ifdef CONFIG_DEBUG_VM
static int kmem_cache_sanity_check(const char *name, size_t size)
{
--
Thanks.
Zhang Yanfei
pageset_set_high_and_batch(zone,
> - per_cpu_ptr(zone->pageset, cpu));
> + pageset_get_values(zone, &high, &batch);
> + pageset_update(zone, high, batch);
> }
> out:
> mutex_unlock(&pcp_batch_high_lock);
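The hunk above splits computing the per-cpu pageset values from applying them. A sketch of that two-step shape (the sizing heuristic — batch derived from zone size with a cap, high = 6 * batch — follows the page allocator's long-standing convention, but struct pageset_model and the helper names mirror the patch under review, not mainline):

```c
struct pageset_model {
    unsigned long high;   /* when to drain back to the buddy lists */
    unsigned long batch;  /* how many pages to move per refill/drain */
};

/* Step 1: derive values from zone size (decision only, no side effects). */
static void pageset_get_values(unsigned long managed_pages,
                               unsigned long *high, unsigned long *batch)
{
    unsigned long b = managed_pages / 1024;

    if (b > 512)
        b = 512;          /* cap, roughly as the page allocator does */
    if (b < 1)
        b = 1;
    *batch = b;
    *high = 6 * b;
}

/* Step 2: publish the values (behaviour only). */
static void pageset_update(struct pageset_model *p,
                           unsigned long high, unsigned long batch)
{
    p->high = high;
    p->batch = batch;
}
```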
page. This may make code more understandable.
>
> One more thing I did in this patch is fixing the freepage accounting.
> If we clear a guard page and link it onto the isolate buddy list, we
> should not increase the freepage count.
>
> Acked-by: Vlastimil Babka
> Signed-off-by: Jo
on.h |2 +
> mm/internal.h | 5 +
> mm/page_alloc.c| 223 +-
> mm/page_isolation.c| 292
> +++-
> 4 files changed, 368 insertions(+), 154 deletions(-)
>
--
Thanks.
Zhang Yanfei
f it really makes sense to check the migratetype here. This
>> check
>> doesn't add any new information to the code and make false impression that
>> this
>> function can be called for other migratetypes than CMA or MOVABLE. Even if
>> so,
>> then invalidating bh_lrus unconditionally will make more sense, IMHO.
>
> I agree. I cannot understand why alloc_contig_range has a migratetype
> argument.
> Can alloc_contig_range be called for a migrate type other than CMA/MOVABLE?
>
> What do you think about removing the argument of migratetype and
> checking migratetype (if (migratetype == MIGRATE_CMA || migratetype ==
> MIGRATE_MOVABLE))?
>
Remove only the check, because gigantic page allocation for hugetlb uses
alloc_contig_range(.. MIGRATE_MOVABLE).
Thanks.
--
Thanks.
Zhang Yanfei
emory-hotplug: sh: suitable memory should go to ZONE_MOVABLE
> memory-hotplug: powerpc: suitable memory should go to ZONE_MOVABLE
>
> arch/ia64/mm/init.c | 7 +++
> arch/powerpc/mm/mem.c | 6 ++
> arch/sh/mm/init.c | 13 -
> arch/x86/mm/init_32.c |
++
arch/x86/mm/init_64.c | 10 --
5 files changed, 35 insertions(+), 7 deletions(-)
--
Thanks.
Zhang Yanfei
c: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: "H. Peter Anvin"
> Cc: x...@kernel.org
> Acked-by: Kirill A. Shutemov
> Signed-off-by: Minchan Kim
Acked-by: Zhang Yanfei
> ---
> arch/x86/include/asm/pgtable.h | 10 ++
> 1 file changed, 10 insertions(+)
&
max: 37266.00
> min: 22108.00    min: 34149.00
>
> In summary, MADV_FREE is about 2 times faster than MADV_DONTNEED.
>
> Cc: Michael Kerrisk
> Cc: Linux API
> Cc: Hugh Dickins
> Cc: Johannes Weiner
> Cc: KOSAKI Motohiro
> Cc: Mel Gorman
> Cc: J
Evans j...@fb.com
Cc: Zhang Yanfei zhangyan...@cn.fujitsu.com
Acked-by: Rik van Riel r...@redhat.com
Signed-off-by: Minchan Kim minc...@kernel.org
A quick respin, looks good to me now for this !THP part. And
looks neat with the Pagewalker.
Acked-by: Zhang Yanfei zhangyan...@cn.fujitsu.com
f
> the range.
This should be updated because the implementation has been changed.
It also removes the page from the swapcache if it is there.
Thank you for your effort!
--
Thanks.
Zhang Yanfei
LATE_ABORT
> return COMPACT_PARTIAL with *contended = cc.contended ==
> COMPACT_CONTENDED_LOCK (1)
> COMPACTFAIL
> if (contended_compaction && gfp_mask & __GFP_NO_KSWAPD)
> no goto nopage because contended_compaction was false by (1)
>
> __alloc_pages_direct_reclaim
> Signed-off-by: David Rientjes
> Signed-off-by: Vlastimil Babka
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph Lameter
> Cc: Rik van Riel
Reviewed-by: Zhang Yanfei
> ---
> mm/compa
d gracefully.
> + *
> + * ACCESS_ONCE is used so that if the caller assigns the result into a local
> + * variable and e.g. tests it for valid range before using, the compiler
> cannot
> + * decide to remove the variable and inline the page_private(page) multiple
> + * times, p
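The comment fragment above motivates ACCESS_ONCE: without it, if the caller validates a copied value and then uses it, the compiler may remove the local and re-read page_private(page) several times, so the validated and used values can differ under a concurrent writer. A minimal userspace illustration of the idiom (the macro is the classic pre-READ_ONCE kernel definition; the surrounding harness is illustrative):

```c
/* Classic kernel definition: force a single volatile access. */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

static unsigned long page_private_storage;  /* stands in for page_private(page) */

static unsigned long read_pfn_once(void)
{
    /*
     * Snapshot once; later uses of 'pfn' cannot be replaced by fresh
     * reads of the shared location, so the range check below and the
     * returned value are guaranteed to refer to the same read.
     */
    unsigned long pfn = ACCESS_ONCE(page_private_storage);

    if (pfn > 1000000)   /* illustrative "valid range" check */
        return 0;
    return pfn;
}
```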
per migrate
> page, to 2.25 free pages per migrate page, without affecting success rates.
>
> Signed-off-by: Vlastimil Babka
> Acked-by: David Rientjes
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Chri
> Cc: Rik van Riel
> Acked-by: David Rientjes
Reviewed-by: Zhang Yanfei
> ---
> mm/compaction.c | 53 +++--
> 1 file changed, 31 insertions(+), 22 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 40d
. The lock contention
> avoidance for async compaction is achieved by the periodical unlock by
> compact_unlock_should_abort() and by using trylock in
> compact_trylock_irqsave()
> and aborting when trylock fails. Sync compaction does not use trylock.
>
> Signed-off-by: Vlastimil Babk
;> - * need_resched() true during async
>> - * compaction
>> - */
>> +enum compact_contended contended; /* Signal need_sched() or lock
>> +
nc compaction.
>
> Signed-off-by: Vlastimil Babka
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph Lameter
> Cc: Rik van Riel
> Cc: David Rientjes
I think this is a good clean-up to make
e,
> and DMA32 zones on both nodes were thus not considered for compaction.
>
> Signed-off-by: Vlastimil Babka
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph Lameter
> Cc: Rik van Riel
>
--
Thanks.
Zhang Yanfei
: Joonsoo Kim iamjoonsoo@lge.com
Cc: Michal Nazarewicz min...@mina86.com
Cc: Naoya Horiguchi n-horigu...@ah.jp.nec.com
Cc: Christoph Lameter c...@linux.com
Cc: Rik van Riel r...@redhat.com
Cc: David Rientjes rient...@google.com
Really good.
Reviewed-by: Zhang Yanfei zhangyan
clean-up to make code more clear.
Reviewed-by: Zhang Yanfei zhangyan...@cn.fujitsu.com
Only a tiny nit-pick below.
---
mm/compaction.c | 112
+---
1 file changed, 59 insertions(+), 53 deletions(-)
diff --git a/mm/compaction.c b/mm
. How long could we increase latency for a temporary allocation
on a HUGEPAGE_ALWAYS system?
--
Thanks.
Zhang Yanfei
rient...@google.com
Reviewed-by: Zhang Yanfei zhangyan...@cn.fujitsu.com
---
mm/compaction.c | 114
1 file changed, 73 insertions(+), 41 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index e8cfac9..40da812 100644
Lameter c...@linux.com
Cc: Rik van Riel r...@redhat.com
Acked-by: David Rientjes rient...@google.com
Reviewed-by: Zhang Yanfei zhangyan...@cn.fujitsu.com
---
mm/compaction.c | 53 +++--
1 file changed, 31 insertions(+), 22 deletions(-)
diff
...@ah.jp.nec.com
Cc: Christoph Lameter c...@linux.com
Cc: Rik van Riel r...@redhat.com
Cc: Zhang Yanfei zhangyan...@cn.fujitsu.com
Reviewed-by: Zhang Yanfei zhangyan...@cn.fujitsu.com
---
mm/compaction.c | 40 +++-
1 file changed, 31 insertions(+), 9
...@linux.com
Cc: Rik van Riel r...@redhat.com
Cc: David Rientjes rient...@google.com
Fair enough.
Reviewed-by: Zhang Yanfei zhangyan...@cn.fujitsu.com
---
mm/compaction.c | 36 +++-
mm/internal.h | 16 +++-
2 files changed, 46 insertions(+), 6
Lameter c...@linux.com
Cc: Rik van Riel r...@redhat.com
Reviewed-by: Zhang Yanfei zhangyan...@cn.fujitsu.com
---
mm/compaction.c | 12 +++-
mm/internal.h | 2 +-
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 32c768b
On 06/23/2014 05:52 PM, Vlastimil Babka wrote:
On 06/23/2014 07:39 AM, Zhang Yanfei wrote:
Hello
On 06/21/2014 01:45 AM, Kirill A. Shutemov wrote:
On Fri, Jun 20, 2014 at 05:49:31PM +0200, Vlastimil Babka wrote:
When allocating huge page for collapsing, khugepaged currently holds
mmap_sem
Please, move up_read() outside khugepaged_alloc_page().
>
I might be wrong. If we up_read() in khugepaged_scan_pmd(), then we go
round the for loop again to get the next vma and handle it. Do we do this
without holding the mmap_sem in any mode?
And if the loop ends, we have another up_read() in breakouterloop. What if
we have already released the mmap_sem in collapse_huge_page()?
--
Thanks.
Zhang Yanfei
-
> 7 files changed, 248 insertions(+), 1 deletions(-)
>
>
On 06/12/2014 11:21 AM, Joonsoo Kim wrote:
> We can remove one call site for clear_cma_bitmap() if we call it
> before checking the error number.
>
> Signed-off-by: Joonsoo Kim
Reviewed-by: Zhang Yanfei
>
> diff --git a/mm/cma.c b/mm/cma.c
> index 1e1b017..01a0713 10
l change in DMA APIs.
>
> v2: There is no big change from v1 in mm/cma.c. Mostly renaming.
>
> Acked-by: Michal Nazarewicz
> Signed-off-by: Joonsoo Kim
Acked-by: Zhang Yanfei
>
> diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
> index 00e13ce..4eac559 100644
>
trary bitmap granularity for following generalization.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Zhang Yanfei
>
> diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
> index bc4c171..9bc9340 100644
> --- a/drivers/base/dma-contiguous.c
> +++ b/drivers/base/
aningful error message like what was successful zone and what is
>> new zone and failed pfn number?
>
> What I want to do in early phase of this patchset is to make cma code
> on DMA APIs similar to ppc kvm's cma code. ppc kvm's cma code already
> has this error hand
me consistently.
>
> Lastly, I add one more debug log on cma_activate_area().
>
> Signed-off-by: Joonsoo Kim
Reviewed-by: Zhang Yanfei
>
> diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
> index 83969f8..bd0bb81 100644
> --- a/drivers/base/dm
mutex_lock(&cma->lock);
>> -bitmap_clear(cma->bitmap, pfn - cma->base_pfn, count);
>> -mutex_unlock(&cma->lock);
>> -}
>> -
>> /**
>> * dma_alloc_from_contiguous() - allocate pages from contiguous area
>> * @dev: Pointer to device for
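The lines being removed above show the release path's shape: clear the allocation's bits under the area lock. A userspace sketch of the same bookkeeping (pthread mutex and a byte-per-bit map instead of the kernel's bitmap_clear(); struct cma_model and cma_release_pages() are illustrative names):

```c
#include <pthread.h>
#include <stdbool.h>

#define CMA_NR_PAGES 64

struct cma_model {
    unsigned long base_pfn;
    unsigned char bitmap[CMA_NR_PAGES];  /* 1 = page allocated */
    pthread_mutex_t lock;
};

/* Release 'count' pages starting at 'pfn', as cma_release() would. */
static bool cma_release_pages(struct cma_model *cma,
                              unsigned long pfn, unsigned long count)
{
    if (pfn < cma->base_pfn || pfn - cma->base_pfn + count > CMA_NR_PAGES)
        return false;                    /* range outside this CMA area */

    pthread_mutex_lock(&cma->lock);
    for (unsigned long i = 0; i < count; i++)
        cma->bitmap[pfn - cma->base_pfn + i] = 0;
    pthread_mutex_unlock(&cma->lock);
    return true;
}
```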
description in first patch
in this patchset. ;-)
Yeah, not only in this patchset, I saw Joonsoo trying to unify all
kinds of things in the MM. This is great for newbies, IMO.
--
Thanks.
Zhang Yanfei
this patchset.
Yeah, I also like the idea. After all, this patchset aims at general CMA
management; we could improve more after this patchset. So
Acked-by: Zhang Yanfei zhangyan...@cn.fujitsu.com
--
Thanks.
Zhang Yanfei
dd the
detailed function description to make it clear only.
Reviewed-by: Zhang Yanfei
>
> Acked-by: Minchan Kim
>
--
Thanks.
Zhang Yanfei
> it's simpler to just rely on the check done in isolate_freepages() without
> lock, and not pretend that the recheck under lock guarantees anything. It is
> just a heuristic after all.
>
> Signed-off-by: Vlastimil Babka
Reviewed-by: Zhang Yanfei
> Cc: Minchan Kim
> Cc: Mel Go
On 04/21/2014 12:02 PM, Jianyu Zhan wrote:
Hi, Yanfei,
On Mon, Apr 21, 2014 at 9:00 AM, Zhang Yanfei
zhangyan...@cn.fujitsu.com wrote:
What should be exported?
lru_cache_add()
lru_cache_add_anon()
lru_cache_add_file()
It seems you only export lru_cache_add_file() in the patch
__lru_cache_add(page);
> +}
> +EXPORT_SYMBOL(lru_cache_add_file);
>
> /**
> * lru_cache_add - add a page to a page list
> * @page: the page to be added to the LRU.
> + *
> + * Queue the page for addition to the LRU via pagevec. The decision on
> whether
> + *
of lru_cache_add()
+ * have the page added to the active list using mark_page_accessed().
*/
void lru_cache_add(struct page *page)
{
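The kernel-doc fragment above describes lru_cache_add() queueing pages in a pagevec and only touching the LRU when the vector fills. The batching shape, stripped of kernel details (struct names are illustrative; PAGEVEC_SIZE really is 14 in kernels of this era):

```c
#include <stddef.h>

#define PAGEVEC_SIZE 14          /* kernel uses a small fixed batch */

struct page_model { int id; };

/* Local batch that amortizes the cost of taking the LRU lock. */
struct pagevec_model {
    size_t nr;
    struct page_model *pages[PAGEVEC_SIZE];
};

static size_t lru_len;           /* stands in for the zone's LRU list */

/* Drain the batch to the shared list (this is where the lock would be). */
static void pagevec_drain(struct pagevec_model *pv)
{
    lru_len += pv->nr;
    pv->nr = 0;
}

/* Queue one page; only a full vector pays the shared-list cost. */
static void lru_cache_add_model(struct pagevec_model *pv, struct page_model *p)
{
    pv->pages[pv->nr++] = p;
    if (pv->nr == PAGEVEC_SIZE)
        pagevec_drain(pv);
}
```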
--
Thanks.
Zhang Yanfei
laim, dirty bit is set so VM can swap out the page instead of
> discarding.
>
> Firstly, heavy users would be general allocators(ex, jemalloc,
> tcmalloc and hope glibc supports it) and jemalloc/tcmalloc already
> have supported the feature for other OS(ex, FreeBSD)
Reviewed-by: Zhan
end memory block id, which should always be the same as phys_index.
> So it is removed here.
>
> Signed-off-by: Li Zhong
Reviewed-by: Zhang Yanfei
The nitpick is still there.
> ---
> Documentation/memory-hotplug.txt | 125
> +++---
> drivers/b
Clear explanation and implementation!
Reviewed-by: Zhang Yanfei
On 04/11/2014 01:58 AM, Luiz Capitulino wrote:
> [Full introduction right after the changelog]
>
> Changelog
> -
>
> v3
>
> - Dropped unnecessary WARN_ON() call [Kirill]
> - Always check i
would be general allocators(ex, jemalloc,
tcmalloc and hope glibc supports it) and jemalloc/tcmalloc already
have supported the feature for other OS(ex, FreeBSD)
Reviewed-by: Zhang Yanfei zhangyan...@cn.fujitsu.com
barrios@blaptop:~/benchmark/ebizzy$ lscpu
Architecture: x86_64
CPU