) prematurely
> without also considering the condition in isolate_freepages().
>
> Signed-off-by: Vlastimil Babka
Acked-by: Joonsoo Kim
Thanks.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
M
; explicitly
> where needed.
>
> Signed-off-by: Vlastimil Babka
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph Lameter
> Cc: Rik van Riel
> Cc: David Rientjes
> ---
> mm/compaction.
stat compact_migrate_scanned count decreased by 15%.
>
> Signed-off-by: Vlastimil Babka
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph Lameter
> Cc: Rik van Riel
> Cc: David Rientjes
> --
nt decreased by at least 15%.
>
> Signed-off-by: Vlastimil Babka
Acked-by: Joonsoo Kim
Thanks.
ifferences in compact_migrate_scanned and
> compact_free_scanned were lost in the noise.
>
> Signed-off-by: Vlastimil Babka
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph Lameter
> Cc: Rik va
2015-06-16 21:33 GMT+09:00 Vlastimil Babka :
> On 06/16/2015 08:10 AM, Joonsoo Kim wrote:
>> On Wed, Jun 10, 2015 at 11:32:34AM +0200, Vlastimil Babka wrote:
>>> The pageblock_skip bitmap and cached scanner pfn's are two mechanisms in
>>> compaction to prevent resc
rt at the beginning of a pageblock, so it is not appropriate
to set the skip bit. This patch fixes this situation by updating the
skip bit only when the whole pageblock has really been scanned.
Signed-off-by: Joonsoo Kim
---
mm/compaction.c | 32 ++--
1 file changed, 18 insertions(+
of
skipped pageblock, we don't need to do this check.
Signed-off-by: Joonsoo Kim
---
mm/compaction.c | 15 ++-
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 4397bf7..9c5d43c 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@
renamed and tracepoint outputs are changed due to
this removal.
Signed-off-by: Joonsoo Kim
---
include/linux/compaction.h| 14 +---
include/linux/mmzone.h| 3 +-
include/trace/events/compaction.h | 30 +++-
mm/compaction.c | 74
Rename check function and move one outer condition check to this function.
There is no functional change.
Signed-off-by: Joonsoo Kim
---
mm/compaction.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 2d8e211..dd2063b 100644
37856052177090
compact_stall 2195 2157
compact_success 247 225
pgmigrate_success 439739 182366
Success: 43 43
Success(N): 89 90
n scanner limit diminished
according to this depth. It effectively reduces compaction overhead in
this situation.
Signed-off-by: Joonsoo Kim
---
include/linux/mmzone.h | 1 +
mm/compaction.c| 61 --
mm/internal.h | 1 +
3 files changed
d and this threshold is also adjusted to that change.
In this patch, only the state definition is implemented. There is no
action for this new state, so there is no functional change. A following
patch will add handling for this new state.
Signed-off-by: Joonsoo Kim
---
include/linux/mmzone.h | 2 +
Signed-off-by: Joonsoo Kim
---
mm/compaction.c | 11 +++
1 file changed, 11 insertions(+)
diff --git a/mm/compaction.c b/mm/compaction.c
index 9c5d43c..2d8e211 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -510,6 +510,10 @@ isolate_fail:
if (locked
't need to worry.
Please see the result of "hogger-frag-movable with free memory variation".
It shows that the patched version solves the limitations of the current
compaction algorithm, and almost all possible order-3 candidates can be
allocated regardless of the amount of free memory.
This patchset is b
-threshold
Success: 44 44 42 37
Success(N): 94 92 91 80
Compaction gives us almost all possible high-order pages. Overhead is
much higher, but a further patch will reduce it greatly by adjusting
the depletion check with this new algorithm.
Sig
freepages on non-movable pageblock wouldn't diminish much and
wouldn't cause much fragmentation.
Signed-off-by: Joonsoo Kim
---
mm/compaction.c | 8 ++--
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index dd2063b..8d1b3b5 100644
neric implementation for the rest of the objects.
>
> Signed-off-by: Christoph Lameter
> Cc: Jesper Dangaard Brouer
> Cc: Christoph Lameter
> Cc: Pekka Enberg
> Cc: David Rientjes
> Cc: Joonsoo Kim
> Signed-off-by: Andrew Morton
> ---
>
> mm/slub.c | 27 +++
On Mon, Jun 08, 2015 at 01:55:32PM -0700, Andrew Morton wrote:
> On Fri, 5 Jun 2015 20:11:30 +0900 Sergey Senozhatsky
> wrote:
>
> > zs_destroy_pool()->destroy_handle_cache() invoked from
> > zs_create_pool() can pass a NULL ->handle_cachep pointer
> > to kmem_cache_destroy(), which will derefe
2018-05-23 9:02 GMT+09:00 Andrew Morton :
> On Mon, 21 May 2018 15:16:33 +0900 Joonsoo Kim wrote:
>
>> > (gdb) list *(dma_direct_alloc+0x22f)
>> > 0x573fbf is in dma_direct_alloc (../lib/dma-direct.c:104).
>> > 94
>> > 95 if (!page)
>
2018-05-23 9:07 GMT+09:00 :
>
> The patch titled
> Subject: Revert "mm/cma: manage the memory of the CMA area by using the
> ZONE_MOVABLE"
> has been added to the -mm tree. Its filename is
>
> revert-mm-cma-manage-the-memory-of-the-cma-area-by-using-the-zone_movable.patch
>
> This pat
Hello, Michal.
Sorry for a really long delay.
2017-09-14 22:24 GMT+09:00 Michal Hocko :
> [Sorry for a later reply]
>
> On Wed 06-09-17 13:35:25, Joonsoo Kim wrote:
>> From: Joonsoo Kim
>>
>> Freepage on ZONE_HIGHMEM doesn't work for kernel memory so it's no
On Fri, Mar 23, 2018 at 01:04:08PM -0700, Andrew Morton wrote:
> On Fri, 23 Mar 2018 10:33:27 +0100 Michal Hocko wrote:
>
> > On Fri 23-03-18 17:19:26, Zhaoyang Huang wrote:
> > > On Fri, Mar 23, 2018 at 4:38 PM, Michal Hocko wrote:
> > > > On Fri 23-03-18 15:57:32, Zhaoyang Huang wrote:
> > > >
ect: mm/slab.c: remove duplicated check of colour_next
>
> Remove the check that offset is greater than cachep->colour because
> this is already checked in previous lines.
>
> Link: http://lkml.kernel.org/r/877eqilr71@gmail.com
> Signed-off-by: Roman Lakeev
> Acked-by: Chr
On Wed, Apr 04, 2018 at 03:37:03PM -0700, Andrew Morton wrote:
> On Wed, 4 Apr 2018 09:31:10 +0900 Joonsoo Kim wrote:
>
> > On Fri, Mar 23, 2018 at 01:04:08PM -0700, Andrew Morton wrote:
> > > On Fri, 23 Mar 2018 10:33:27 +0100 Michal Hocko wrote:
> > >
>
Hello,
sorry for bothering you.
2018-01-09 16:16 GMT+09:00 Joonsoo Kim :
> On Sat, Jan 06, 2018 at 05:26:31PM +0800, Ye Xiaolong wrote:
>> Hi,
>>
>> On 01/03, Joonsoo Kim wrote:
>> >Hello!
>> >
>> >On Tue, Jan 02, 2018 at 02:35:28PM +080
On Thu, Apr 05, 2018 at 09:57:53AM +0200, Michal Hocko wrote:
> On Thu 05-04-18 16:27:16, Joonsoo Kim wrote:
> > From: Joonsoo Kim
> >
> > ZONE_MOVABLE only has movable pages so we don't need to keep enough
> > freepages to avoid or deal with fragmentation. S
On Thu, Apr 05, 2018 at 05:05:39PM +0900, Joonsoo Kim wrote:
> On Thu, Apr 05, 2018 at 09:57:53AM +0200, Michal Hocko wrote:
> > On Thu 05-04-18 16:27:16, Joonsoo Kim wrote:
> > > From: Joonsoo Kim
> > >
> > > ZONE_MOVABLE only has movable pages so we don'
Hello, Mikulas.
On Tue, Apr 24, 2018 at 02:41:47PM -0400, Mikulas Patocka wrote:
>
>
> On Tue, 24 Apr 2018, Matthew Wilcox wrote:
>
> > On Tue, Apr 24, 2018 at 08:29:14AM -0400, Mikulas Patocka wrote:
> > >
> > >
> > > On Mon, 23 Apr 2018, Matthew Wilcox wrote:
> > >
> > > > On Mon, Apr 23,
On Sat, May 30, 2020 at 12:12 AM, Johannes Weiner wrote:
>
> On Fri, May 29, 2020 at 03:48:00PM +0900, Joonsoo Kim wrote:
> > On Fri, May 29, 2020 at 2:02 AM, Johannes Weiner wrote:
> > > On Thu, May 28, 2020 at 04:16:50PM +0900, Joonsoo Kim wrote:
> > > > On Wed, May 27, 2020
On Fri, May 29, 2020 at 3:50 PM, Joonsoo Kim wrote:
>
> On Fri, May 29, 2020 at 4:25 AM, Vlastimil Babka wrote:
> >
> > On 5/27/20 8:44 AM, js1...@gmail.com wrote:
> > > From: Joonsoo Kim
> > >
> > > This patchset clean-up the migration target allocation functions.
On Tue, Jun 2, 2020 at 12:56 AM, Johannes Weiner wrote:
>
> On Mon, Jun 01, 2020 at 03:14:24PM +0900, Joonsoo Kim wrote:
> > On Sat, May 30, 2020 at 12:12 AM, Johannes Weiner wrote:
> > >
> > > On Fri, May 29, 2020 at 03:48:00PM +0900, Joonsoo Kim wrote:
> > > > 2020년
On Thu, May 21, 2020 at 9:37 AM, Roman Gushchin wrote:
>
> On Mon, May 18, 2020 at 10:20:47AM +0900, js1...@gmail.com wrote:
> > From: Joonsoo Kim
> >
> > For locality, it's better to migrate the page to the same node
> > rather than the node of the current caller
On Thu, May 21, 2020 at 9:43 AM, Roman Gushchin wrote:
>
> On Mon, May 18, 2020 at 10:20:49AM +0900, js1...@gmail.com wrote:
> > From: Joonsoo Kim
> >
> > Currently, page allocation functions for migration require some arguments.
> > Worse, in the following patch, m
On Thu, May 21, 2020 at 9:46 AM, Roman Gushchin wrote:
>
> On Mon, May 18, 2020 at 10:20:50AM +0900, js1...@gmail.com wrote:
> > From: Joonsoo Kim
> >
> > There is no difference between two migration callback functions,
> > alloc_huge_page_node() and alloc_huge_page_nodemas
doesn't care about those). While using the 'mapping' name would automagically
> keep the code correct if the unions in struct page changed, such changes
> should be done consciously and needed changes evaluated - the comment
> should help with that.
>
> Signed-off-by: Vlastimil Babka
Acked-by: Joonsoo Kim
will store erased objects, similarly
> to CONFIG_SLUB=y behavior.
>
> Signed-off-by: Alexander Popov
> Reviewed-by: Alexander Potapenko
Acked-by: Joonsoo Kim
ub, build with CONFIG_SLUB_DEBUG=y and
> boot with slub_debug=U, or pass SLAB_STORE_USER to kmem_cache_create()
> if more focused use is desired. Also for slub, use CONFIG_STACKTRACE
> to enable printing of the allocation-time stack trace.
>
> Cc: Christoph Lameter
> Cc: Pekk
's not
> complicate things with making this optional.
>
> Signed-off-by: Liam Mark
> Signed-off-by: Georgi Djakov
> Acked-by: Vlastimil Babka
> Cc: Jonathan Corbet
Acked-by: Joonsoo Kim
This is useful. Our company has had an in-house patch to store the pid
for a few years.
Thanks.
On Thu, Dec 10, 2020 at 07:33:59PM -0800, Paul E. McKenney wrote:
> On Fri, Dec 11, 2020 at 11:22:10AM +0900, Joonsoo Kim wrote:
> > On Thu, Dec 10, 2020 at 05:19:58PM -0800, paul...@kernel.org wrote:
> > > From: "Paul E. McKenney"
> > >
> > &g
On Thu, Dec 10, 2020 at 07:42:27PM -0800, Paul E. McKenney wrote:
> On Thu, Dec 10, 2020 at 07:33:59PM -0800, Paul E. McKenney wrote:
> > On Fri, Dec 11, 2020 at 11:22:10AM +0900, Joonsoo Kim wrote:
> > > On Thu, Dec 10, 2020 at 05:19:58PM -0800, paul...@kernel.org wrote:
>
Hello, Paul.
On Fri, Dec 04, 2020 at 04:40:52PM -0800, paul...@kernel.org wrote:
> From: "Paul E. McKenney"
>
> There are kernel facilities such as per-CPU reference counts that give
> error messages in generic handlers or callbacks, whose messages are
> unenlightening. In the case of per-CPU r
On Mon, Dec 07, 2020 at 09:25:54AM -0800, Paul E. McKenney wrote:
> On Mon, Dec 07, 2020 at 06:02:53PM +0900, Joonsoo Kim wrote:
> > Hello, Paul.
> >
> > On Fri, Dec 04, 2020 at 04:40:52PM -0800, paul...@kernel.org wrote:
> > > From: "Paul E. McKenney"
means that get_partial() fails and new_slab_objects() falls back to
> new_slab(), allocating new pages. This could lead to an unnecessary
> increase in memory fragmentation.
>
> Fixes: 7ced37197196 ("slub: Acquire_slab() avoid loop")
> Signed-off-by: Jann Horn
Acked-by: Joonsoo Kim
Thanks.
Hello,
On Wed, Dec 02, 2020 at 12:23:24AM -0500, Pavel Tatashin wrote:
> When a page is pinned it cannot be moved and its physical address stays
> the same until the page is unpinned.
>
> This is useful functionality that allows userland to implement DMA
> access. For example, it is used by vfio in
On Wed, Dec 02, 2020 at 12:23:30AM -0500, Pavel Tatashin wrote:
> We do not allocate pinned pages in ZONE_MOVABLE, but if pages were already
> allocated before pinning they need to be migrated to a different zone.
> Currently, we migrate movable CMA pages only. Generalize the function
> that migrates CMA
On Fri, Dec 04, 2020 at 12:50:56PM -0500, Pavel Tatashin wrote:
> > > Yes, this indeed could be a problem for some configurations. I will
> > > add your comment to the commit log of one of the patches.
> >
> > It sounds like there is some inherent tension here, breaking THP's
> > when doing pin_use
On Fri, Dec 04, 2020 at 12:43:29PM -0500, Pavel Tatashin wrote:
> On Thu, Dec 3, 2020 at 11:14 PM Joonsoo Kim wrote:
> >
> > On Wed, Dec 02, 2020 at 12:23:30AM -0500, Pavel Tatashin wrote:
> > > We do not allocate pin pages in ZONE_MOVABLE, but if pages were already
> &
On Wed, Dec 30, 2020 at 02:24:12PM +0800, kernel test robot wrote:
>
> Greeting,
>
> FYI, we noticed a -2.7% regression of vm-scalability.throughput due to commit:
>
>
> commit: aae466b0052e1888edd1d7f473d4310d64936196 ("mm/swap: implement
> workingset detection for anonymous LRU")
> https://g
On Tue, Feb 18, 2014 at 10:21:10AM -0600, Christoph Lameter wrote:
> On Mon, 17 Feb 2014, Joonsoo Kim wrote:
>
> > > Why change the BAD_ALIEN_MAGIC?
> >
> > Hello, Christoph.
> >
> > BAD_ALIEN_MAGIC is only checked by slab_set_lock_classes(). We rem
ed. All we have to do is handle this request.
This patch sets QUEUE_FLAG_DISCARD and handles these REQ_DISCARD
requests. With them, we can free memory used by zram when it is no
longer needed.
Signed-off-by: Joonsoo Kim
---
This patch is based on master branch of linux-next tree.
diff
2014-02-24 22:36 GMT+09:00 Jerome Marchand :
> On 02/24/2014 06:51 AM, Joonsoo Kim wrote:
>> zram is a ram-based block device and can be used as the backend of a
>> filesystem. When a filesystem deletes a file, it normally doesn't do
>> anything to the data blocks of that file. It just
2014-02-25 0:15 GMT+09:00 Jerome Marchand :
> On 02/24/2014 04:02 PM, Joonsoo Kim wrote:
>> 2014-02-24 22:36 GMT+09:00 Jerome Marchand :
>>> On 02/24/2014 06:51 AM, Joonsoo Kim wrote:
>>>> zram is ram based block device and can be used by backend of filesystem.
>&
2014-02-25 1:06 GMT+09:00 Jerome Marchand :
> On 02/24/2014 04:56 PM, Joonsoo Kim wrote:
>> 2014-02-25 0:15 GMT+09:00 Jerome Marchand :
>>> On 02/24/2014 04:02 PM, Joonsoo Kim wrote:
>>>> 2014-02-24 22:36 GMT+09:00 Jerome Marchand :
>>>>> On 02/24/201
ed. All we have to do is handle this request.
This patch sets QUEUE_FLAG_DISCARD and handles these REQ_DISCARD
requests. With them, we can free memory used by zram when it is no
longer needed.
v2: handle unaligned case commented by Jerome
Signed-off-by: Joonsoo Kim
diff --git a/dri
ta commented by Minchan
reuse index, offset in __zram_make_request() commented by Sergey.
Signed-off-by: Joonsoo Kim
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 7631ef0..8b468d6 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@
MALLOC_MIN_SIZE, instead of
KMALLOC_SHIFT_LOW. KMALLOC_SHIFT_LOW is passed to ilog2() on some
architectures, and this ilog2() uses __builtin_constant_p(), which
results in the problem. This problem disappears when using
KMALLOC_MIN_SIZE, since it is just a constant.
Tested-by: David Rientjes
Signed-off-
This patch fixes this situation by using the same allocation flags as
the original allocation.
Reported-by: Christian Casteyde
Signed-off-by: Joonsoo Kim
diff --git a/mm/slub.c b/mm/slub.c
index 3508ede..d43b063 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1348,11 +1348,12 @@ static struct page
On Wed, Mar 12, 2014 at 01:33:18PM -0700, Andrew Morton wrote:
> On Wed, 12 Mar 2014 17:01:09 +0900 Joonsoo Kim wrote:
>
> > zram is a ram-based block device and can be used as the backend of a
> > filesystem. When a filesystem deletes a file, it normally doesn't do
> > anything to the data
On Wed, Mar 12, 2014 at 08:03:03PM -0700, Andrew Morton wrote:
> On Thu, 13 Mar 2014 11:46:17 +0900 Joonsoo Kim wrote:
>
> > + while (n >= PAGE_SIZE) {
> > + /*
> > +	 * discard request can be too large so that the zram can
> > +
On Thu, Mar 13, 2014 at 01:40:35PM -0700, Andrew Morton wrote:
> On Thu, 13 Mar 2014 11:46:17 +0900 Joonsoo Kim wrote:
>
> > Hello, Andrew.
> >
> > I applied all your comments in below patch. :)
>
> OK, thanks. I'll grab this instead of v5 - I wasn't th
On Tue, Mar 04, 2014 at 01:16:56PM +0100, Vlastimil Babka wrote:
> On 03/04/2014 01:55 AM, Joonsoo Kim wrote:
> >On Mon, Mar 03, 2014 at 02:54:09PM +0100, Vlastimil Babka wrote:
> >>On 03/03/2014 09:22 AM, Joonsoo Kim wrote:
> >>>On Fri, Feb 28, 2014 at 03:15:00P
On Fri, Feb 28, 2014 at 03:15:00PM +0100, Vlastimil Babka wrote:
> In order to prevent race with set_pageblock_migratetype, most of calls to
> get_pageblock_migratetype have been moved under zone->lock. For the remaining
> call sites, the extra locking is undesirable, notably in free_hot_cold_page(
On Fri, Feb 28, 2014 at 03:15:01PM +0100, Vlastimil Babka wrote:
> This patch complements the addition of get_pageblock_migratetype_nolock() for
> the case where is_migrate_isolate_page() cannot be called with zone->lock
> held.
> A race with set_pageblock_migratetype() may be detected, in which c
2014-02-26 23:06 GMT+09:00 Jerome Marchand :
> On 02/26/2014 02:57 PM, Sergey Senozhatsky wrote:
>> On (02/26/14 14:44), Jerome Marchand wrote:
>>> On 02/26/2014 02:16 PM, Sergey Senozhatsky wrote:
>>>> Hello,
>>>>
>>>> On (02/26/14 14:23), J
2014-02-26 17:07 GMT+09:00 Minchan Kim :
> Hi Joonsoo,
>
> On Wed, Feb 26, 2014 at 02:23:15PM +0900, Joonsoo Kim wrote:
>> zram is ram based block device and can be used by backend of filesystem.
>> When filesystem deletes a file, it normally doesn't do anything on data
formance degradation.
>
> Using mmtests' stress-highalloc benchmark, little difference was found between
> the two solutions. The base is 3.13 with recent compaction series by myself
> and
> Joonsoo Kim applied.
>
> 3.13    3.13    3.13
>
On Mon, Mar 03, 2014 at 12:02:00PM +0100, Vlastimil Babka wrote:
> On 02/14/2014 07:53 AM, Joonsoo Kim wrote:
> > changes for v2
> > o include more experiment data in cover letter
> > o deal with vlastimil's comments mostly about commit description on 4/5
> >
>
On Mon, Mar 03, 2014 at 02:54:09PM +0100, Vlastimil Babka wrote:
> On 03/03/2014 09:22 AM, Joonsoo Kim wrote:
> >On Fri, Feb 28, 2014 at 03:15:00PM +0100, Vlastimil Babka wrote:
> >>In order to prevent race with set_pageblock_migratetype, most of calls to
> >>get_page
On Mon, Feb 10, 2014 at 01:26:34PM +0000, Mel Gorman wrote:
> On Fri, Feb 07, 2014 at 02:08:42PM +0900, Joonsoo Kim wrote:
> > Purpose of compaction is to get a high order page. Currently, if we find
> > high-order page while searching migration target page, we break it to
> &
On Mon, Feb 10, 2014 at 04:44:26PM -0500, Naoya Horiguchi wrote:
> This patch updates mm/pagewalk.c to make code less complex and more
> maintainable.
> The basic idea is unchanged and there's no userspace visible effect.
>
> Most of existing callback functions need access to vma to handle each en
On Wed, Jan 08, 2014 at 01:59:30PM -0800, Andrew Morton wrote:
> On Wed, 8 Jan 2014 23:37:49 +0200 Pekka Enberg wrote:
>
> > The patch looks good to me but it probably should go through Andrew's tree.
>
> yup.
>
> page_mapping() will be called quite frequently, and adding a new
> test-n-branch
e a lot of readers and few
of writers, so it fits this situation.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index bd791e4..feaa607 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -79,6 +79,7 @@ static inline int get_pageblock_m
e region can
be moved to another migratetype freelist. This makes CMA fail over and
over. To prevent it, the buddy allocator should consider the migratetype
if CMA/ISOLATE is enabled.
This patchset is aimed at fixing these problems and based on v3.13-rc7.
Thanks.
Joonsoo Kim (7):
mm/page_alloc: synchronize g
ce what we want to ensure is
that a page from CMA will not go to another migratetype freelist.
Signed-off-by: Joonsoo Kim
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1489c301..4913829 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -903,6 +903,7 @@ struct page *__rmqueue_smallest(s
nal change.
A following patch will take further steps on this issue.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3552717..2733e0b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -257,14 +257,31 @@ struct inode;
#define page_pr
CMA pages can be allocated not only by order-0 requests but also by
high-order requests, so we should account free CMA pages in both
places.
Signed-off-by: Joonsoo Kim
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b36aa5a..1489c301 100644
--- a/mm/page_alloc.c
+++ b/mm
this patch enables set/get_buddy_migratetype() only when it is really
needed, because it has some overhead.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2733e0b..046e09f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -258,6 +258,12
be allocated by other users even though we hold the zone
lock, so remove this check.
Signed-off-by: Joonsoo Kim
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index d1473b2..534fb3a 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -199,9 +199,6
by
try_to_steal_freepages(). After that, CMA on this region always fails.
To prevent this, we should not merge pages onto the MIGRATE_(CMA|ISOLATE)
freelist.
Signed-off-by: Joonsoo Kim
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2548b42..ea99cee 100644
--- a/mm/page_alloc.c
+++ b/mm
On Mon, Jan 06, 2014 at 10:34:09AM +0100, Ludovic Desroches wrote:
> On Mon, Jan 06, 2014 at 09:26:48AM +0900, Joonsoo Kim wrote:
> > On Fri, Jan 03, 2014 at 03:54:04PM +0100, Ludovic Desroches wrote:
> > > Hi,
> > >
> > > On Tue, Dec 24, 2013 at
2014/1/9 Michal Nazarewicz :
> On Thu, Jan 09 2014, Joonsoo Kim wrote:
>> Third, there is a problem in the buddy allocator. It doesn't consider
>> the migratetype when merging buddies, so pages from a CMA or isolated
>> region can be moved to another migratetype freelist. It make
On Thu, Jan 09, 2014 at 09:27:20AM +0000, Mel Gorman wrote:
> On Thu, Jan 09, 2014 at 04:04:40PM +0900, Joonsoo Kim wrote:
> > Hello,
> >
> > I found some weaknesses on handling migratetype during code review and
> > testing CMA.
> >
> > First, we don'
On Thu, Jan 09, 2014 at 01:10:29PM -0800, Laura Abbott wrote:
> On 1/8/2014 11:04 PM, Joonsoo Kim wrote:
> >CMA pages can be allocated not only by order-0 requests but also by
> >high-order requests, so we should account free CMA pages in both
> >places.
> >
use rmap_walk() in page_mkclean().
Signed-off-by: Joonsoo Kim
diff --git a/mm/rmap.c b/mm/rmap.c
index 5e78d5c..bbbc705 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -809,12 +809,13 @@ int page_referenced(struct page *page,
}
static int page_mkclean_one(struct page *page, struct vm_area_struct *
separate, because it clarifies the changes.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 45c9b6a..0eef8cb 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -76,8 +76,7 @@ struct page *ksm_might_need_to_copy(struct page *page,
int
().
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 58624b4..d641f6d 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -190,7 +190,7 @@ int page_referenced_one(struct page *, struct vm_area_struct *,
int try_to_unmap(struct page *, enum
non, try_to_unmap_file
2. mechanical change to use rmap_walk() in try_to_munlock().
3. copy and paste comments.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 0eef8cb..91b9719 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -75,7 +75,6 @@ str
mpute pgoff
for unmapping huge page").
Signed-off-by: Joonsoo Kim
diff --git a/mm/rmap.c b/mm/rmap.c
index 55c8b8d..1214703 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1714,6 +1714,10 @@ static int rmap_walk_file(struct page *page, int (*rmap_one)(struct page *,
if (!mapping)
16 100642750 mm/rmap.o
13823 7058288 228165920 mm/ksm.o
13199 7058288 2219256b0 mm/ksm.o
Thanks.
Joonsoo Kim (9):
mm/rmap: recompute pgoff for huge page
mm/rmap: factor nonlinear handling out of try_to_unmap_file()
mm/rmap: factor lock function o
this patch, I introduce 4 function pointers to
handle the above differences.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 0f65686..58624b4 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -239,6 +239,12 @@ struct rmap_walk_control
-by: Joonsoo Kim
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 91b9719..3be6bb1 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -73,8 +73,6 @@ static inline void set_page_stable_node(struct page *page,
struct page *ksm_might_need_to_copy(struct page *p
of it. Therefore it is better
to factor nonlinear handling out of try_to_unmap_file() in order to
merge all kinds of rmap traversal functions easily.
Signed-off-by: Joonsoo Kim
diff --git a/mm/rmap.c b/mm/rmap.c
index 1214703..e6d532c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1422,6 +1422,79 @@ s
oring lock function for anon_lock out
of rmap_walk_anon(). It will be used when removing migration entries
and by default in rmap_walk_anon().
Signed-off-by: Joonsoo Kim
diff --git a/mm/rmap.c b/mm/rmap.c
index e6d532c..916f2ed 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1683,6 +1683,24 @@
On Sat, Jan 11, 2014 at 06:55:39PM -0600, Christoph Lameter wrote:
> On Sat, 11 Jan 2014, Pekka Enberg wrote:
>
> > On Sat, Jan 11, 2014 at 1:42 AM, Dave Hansen wrote:
> > > On 01/10/2014 03:39 PM, Andrew Morton wrote:
> > >>> I tested 4 cases, all of these on the "cache-cold kfree()" case. The
On Fri, Jan 10, 2014 at 09:48:34AM +0000, Mel Gorman wrote:
> On Fri, Jan 10, 2014 at 05:48:55PM +0900, Joonsoo Kim wrote:
> > On Thu, Jan 09, 2014 at 09:27:20AM +0000, Mel Gorman wrote:
> > > On Thu, Jan 09, 2014 at 04:04:40PM +0900, Joonsoo Kim wrote:
> > > > Hello,
ust
return the 'struct page' of that object, not that of the first page,
since SLAB doesn't use __GFP_COMP when CONFIG_MMU. To get the 'struct
page' of the first page, we first get a slab and then get it via
virt_to_head_page(slab->s_mem).
Cc: Mel Gorman
Signed-off-by: Joonsoo Ki