On Tue, Aug 30, 2016 at 06:10:46PM +0530, Aneesh Kumar K.V wrote:
> "Aneesh Kumar K.V" writes:
>
> >
> >
> >> static inline void check_highest_zone(enum zone_type k)
> >> {
> >> - if (k > policy_zone && k != ZONE_MOVABLE)
> >> + if (k > policy_zone && k != ZONE_MOVABLE && !is_zone_cma_id
2016-08-29 18:27 GMT+09:00 Aneesh Kumar K.V :
> js1...@gmail.com writes:
>
>> From: Joonsoo Kim
>>
>> Hello,
>>
>> Changes from v4
>> o Rebase on next-20160825
>> o Add general fix patch for lowmem reserve
>> o Fix lowmem reserve ratio
>>
2016-08-24 16:04 GMT+09:00 Michal Hocko :
> On Wed 24-08-16 14:01:57, Joonsoo Kim wrote:
>> Looks like my mail client ate my reply, so I am resending.
>>
>> On Tue, Aug 23, 2016 at 09:33:18AM +0200, Michal Hocko wrote:
>> > On Tue 23-08-16 13:52:45, Joonsoo Kim wrote:
>
Looks like my mail client ate my reply, so I am resending.
On Tue, Aug 23, 2016 at 09:33:18AM +0200, Michal Hocko wrote:
> On Tue 23-08-16 13:52:45, Joonsoo Kim wrote:
> [...]
> > Hello, Michal.
> >
> > I agree with partial revert but revert should be a different form.
>
On Tue, Aug 23, 2016 at 05:38:08PM +0200, Michal Hocko wrote:
> On Tue 23-08-16 11:13:03, Joonsoo Kim wrote:
> > On Thu, Aug 18, 2016 at 01:52:19PM +0200, Michal Hocko wrote:
> [...]
> > > I am not opposing the patch (to be honest it is quite neat) but this
> > >
On Mon, Aug 22, 2016 at 11:32:49AM +0200, Michal Hocko wrote:
> Hi,
> there have been multiple reports [1][2][3][4][5] about pre-mature OOM
> killer invocations since 4.7 which contains oom detection rework. All of
> them were for order-2 (kernel stack) allocation requests failing because
> of a h
On Thu, Aug 18, 2016 at 01:52:19PM +0200, Michal Hocko wrote:
> On Wed 17-08-16 11:20:50, Aruna Ramakrishna wrote:
> > On large systems, when some slab caches grow to millions of objects (and
> > many gigabytes), running 'cat /proc/slabinfo' can take up to 1-2 seconds.
> > During this time, interru
On Fri, Aug 19, 2016 at 01:20:13PM +0200, Vlastimil Babka wrote:
> On 08/09/2016 08:39 AM, js1...@gmail.com wrote:
> >From: Joonsoo Kim
> >
> >Until now, reserved pages for CMA are managed in the ordinary zones
> >where the page's pfn belongs. This approach has
On Tue, Aug 16, 2016 at 08:36:12AM +0200, Vlastimil Babka wrote:
> On 08/16/2016 08:16 AM, Joonsoo Kim wrote:
> >On Wed, Aug 10, 2016 at 11:12:25AM +0200, Vlastimil Babka wrote:
> >>diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >>index 621e4211ce16..a5c0f914ec00 10064
On Tue, Aug 16, 2016 at 08:15:36AM +0200, Vlastimil Babka wrote:
> On 08/16/2016 08:15 AM, Joonsoo Kim wrote:
> >On Wed, Aug 10, 2016 at 11:12:23AM +0200, Vlastimil Babka wrote:
> >>--- a/include/linux/compaction.h
> >>+++ b/include/linux/compaction.h
> >>@@
On Wed, Aug 10, 2016 at 11:12:25AM +0200, Vlastimil Babka wrote:
> The __compaction_suitable() function checks the low watermark plus a
> compact_gap() gap to decide if there's enough free memory to perform
> compaction. Then __isolate_free_page uses low watermark check to decide if
> particular fr
On Wed, Aug 10, 2016 at 11:12:23AM +0200, Vlastimil Babka wrote:
> Compaction uses a watermark gap of (2UL << order) pages at various places and
> it's not immediately obvious why. Abstract it through a compact_gap() wrapper
> to create a single place with a thorough explanation.
>
> Signed-off-by
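The gap described in the patch above can be sketched in plain userspace C. This is a hypothetical simplification for illustration, not the kernel's implementation; `enough_free_for_compaction` is an invented name for the kind of check `__compaction_suitable()` performs:

```c
/*
 * Sketch of the compact_gap() idea: compaction wants roughly twice
 * the allocation size above the watermark -- pages for the migration
 * destinations plus pages so the allocation itself can succeed
 * afterwards. Hypothetical userspace simplification, not kernel code.
 */
unsigned long compact_gap(unsigned int order)
{
	return 2UL << order;
}

/* A __compaction_suitable()-style watermark check then becomes: */
int enough_free_for_compaction(unsigned long free_pages,
			       unsigned long low_wmark,
			       unsigned int order)
{
	return free_pages >= low_wmark + compact_gap(order);
}
```

Centralizing the `2UL << order` expression in one wrapper is what lets the patch attach a single thorough explanation to it.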
es() returns
!COMPACT_SUCCESS, the watermark check could return true.
__compact_finished() calls find_suitable_fallback(), which is slightly
different from the watermark check. Anyway, I don't think it is a big
problem.
Thanks.
>
> Also remove the stray "bool success" variable
On Wed, Aug 10, 2016 at 11:12:21AM +0200, Vlastimil Babka wrote:
> During reclaim/compaction loop, compaction priority can be increased by the
> should_compact_retry() function, but the current code is not optimal. Priority
> is only increased when compaction_failed() is true, which means that
> c
On Wed, Aug 10, 2016 at 11:12:20AM +0200, Vlastimil Babka wrote:
> During reclaim/compaction loop, it's desirable to get a final answer from
> unsuccessful compaction so we can either fail the allocation or invoke the OOM
> killer. However, heuristics such as deferred compaction or pageblock skip b
On Wed, Aug 10, 2016 at 04:59:39AM -0700, Andy Lutomirski wrote:
> On Sun, Jul 31, 2016 at 10:30 PM, Joonsoo Kim wrote:
> > On Fri, Jul 29, 2016 at 12:47:38PM -0700, Andy Lutomirski wrote:
> >> -- Forwarded message --
> >> From: "Joonsoo Kim"
On Fri, Aug 05, 2016 at 09:21:56AM -0500, Christoph Lameter wrote:
> On Fri, 5 Aug 2016, Joonsoo Kim wrote:
>
> > If above my comments are fixed, all counting would be done with
> > holding a lock. So, atomic definition isn't needed for the SLAB.
>
> Ditto for sl
On Fri, Aug 12, 2016 at 09:25:37PM +0900, Sergey Senozhatsky wrote:
> On (08/11/16 11:41), Vlastimil Babka wrote:
> > On 08/10/2016 10:14 AM, Sergey Senozhatsky wrote:
> > > > @@ -1650,18 +1655,15 @@ static inline void expand(struct zone *zone,
> > > > struct page *page,
> > > > si
> Cc: Mike Kravetz
> Cc: Christoph Lameter
> Cc: Pekka Enberg
> Cc: David Rientjes
> Cc: Joonsoo Kim
> Cc: Andrew Morton
> ---
> Note: this has been tested only on x86_64.
>
> mm/slab.c | 25 -
> mm/slab.h | 15 ++-
>
On Mon, Aug 01, 2016 at 06:43:00PM -0700, Aruna Ramakrishna wrote:
> Hi Joonsoo,
>
> On 08/01/2016 05:55 PM, Joonsoo Kim wrote:
> >Your patch updates these counters not only when slabs are created and
> >destroyed but also when an object is allocated/freed from the slab. This
Calculating both num_slabs_partial and num_slabs_free by iterating the
n->slabs_XXX lists would not take too much time.
How about this solution?
Thanks.
>
> Signed-off-by: Aruna Ramakrishna
> Cc: Mike Kravetz
> Cc: Christoph Lameter
> Cc: Pekka Enberg
> Cc: David Rientjes
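The on-demand counting suggested above amounts to a single linked-list walk. Below is a hypothetical userspace model (the kernel's per-node lists use `struct list_head` inside `struct kmem_cache_node`; the `struct slab` and `count_slabs` names here are invented for the sketch):

```c
#include <stddef.h>

/*
 * Hypothetical model of a slabs_partial/slabs_free style list: walk
 * it once, counting slabs and summing in-use objects. The cost is
 * O(number of slabs), which is why computing these values on demand
 * for /proc/slabinfo stays cheap. Not the actual kernel code.
 */
struct slab {
	struct slab *next;
	unsigned int inuse;	/* objects allocated from this slab */
};

unsigned long count_slabs(const struct slab *list, unsigned long *inuse_total)
{
	unsigned long n = 0;

	for (; list; list = list->next) {
		n++;
		*inuse_total += list->inuse;
	}
	return n;
}
```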
On Fri, Jul 29, 2016 at 12:47:38PM -0700, Andy Lutomirski wrote:
> -- Forwarded message --
> From: "Joonsoo Kim"
> Date: Jul 28, 2016 7:57 PM
> Subject: Re: [RFC] can we use vmalloc to alloc thread stack if compaction
> failed
> To: "Andy Lutomir
On Thu, Jul 28, 2016 at 08:07:51AM -0700, Andy Lutomirski wrote:
> On Thu, Jul 28, 2016 at 3:51 AM, Xishi Qiu wrote:
> > On 2016/7/28 17:43, Michal Hocko wrote:
> >
> >> On Thu 28-07-16 16:45:06, Xishi Qiu wrote:
> >>> On 2016/7/28 15:58, Michal Hocko wrote:
> >>>
> On Thu 28-07-16 15:41:53,
On Tue, Jul 26, 2016 at 01:50:50PM +0100, Mel Gorman wrote:
> On Tue, Jul 26, 2016 at 05:11:30PM +0900, Joonsoo Kim wrote:
> > > These patches did not OOM for me on a 2G 32-bit KVM instance while running
> > > a stress test for an hour. Preliminary tests on a 64-bit system u
On Tue, Jul 26, 2016 at 05:16:22PM +0900, Joonsoo Kim wrote:
> On Thu, Jul 21, 2016 at 03:11:01PM +0100, Mel Gorman wrote:
> > Page reclaim determines whether a pgdat is unreclaimable by examining how
> > many pages have been scanned since a page was freed and comparing that to
>
On Thu, Jul 21, 2016 at 03:11:01PM +0100, Mel Gorman wrote:
> Page reclaim determines whether a pgdat is unreclaimable by examining how
> many pages have been scanned since a page was freed and comparing that to
> the LRU sizes. Skipped pages are not reclaim candidates but contribute to
> scanned.
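The heuristic described above can be sketched as a simple ratio test. This is a hypothetical userspace simplification (the kernel compares `NR_PAGES_SCANNED` against the reclaimable LRU size; the factor of 6 is an assumption based on kernels of that era, and `node_reclaimable` is an invented name):

```c
/*
 * Sketch: a node is treated as unreclaimable once the pages scanned
 * since the last page was freed exceed a multiple of the reclaimable
 * LRU size. If skipped pages are counted as scanned, pages_scanned
 * is inflated and this check can flip prematurely -- the bug the
 * patch above addresses. Hypothetical simplification, not kernel code.
 */
int node_reclaimable(unsigned long pages_scanned,
		     unsigned long reclaimable_lru_pages)
{
	return pages_scanned < reclaimable_lru_pages * 6;
}
```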
On Thu, Jul 21, 2016 at 03:10:56PM +0100, Mel Gorman wrote:
> Both Joonsoo Kim and Minchan Kim have reported premature OOM kills.
> The common element is a zone-constrained allocation failings. Two factors
> appear to be at fault -- pgdat being considered unreclaimable prematur
On Wed, Jul 20, 2016 at 04:21:46PM +0100, Mel Gorman wrote:
> Both Joonsoo Kim and Minchan Kim have reported premature OOM kills on
> a 32-bit platform. The common element is a zone-constrained high-order
> allocation failing. Two factors appear to be at fault -- pgdat being
>
On Wed, Jul 20, 2016 at 04:21:51PM +0100, Mel Gorman wrote:
> From: Minchan Kim
>
> Minchan Kim reported that with per-zone lru state it was possible to
> identify that a normal zone with 8^M anonymous pages could trigger
> OOM with non-atomic order-0 allocations as all pages in the zone
> were i
On Wed, Jul 20, 2016 at 04:21:48PM +0100, Mel Gorman wrote:
> From: Minchan Kim
>
> While I did a stress test with hackbench, I got OOM messages frequently, which
> didn't ever happen in zone-lru.
>
> gfp_mask=0x26004c0(GFP_KERNEL|__GFP_REPEAT|__GFP_NOTRACK), order=0
> ..
> ..
> [] __alloc_pages_no
On Mon, Jul 18, 2016 at 03:27:14PM +0100, Mel Gorman wrote:
> On Mon, Jul 18, 2016 at 01:11:22PM +0100, Mel Gorman wrote:
> > The all_unreclaimable logic is related to the number of pages scanned
> > but currently pages skipped contributes to pages scanned. That is one
> > possibility. The other is
On Mon, Jul 18, 2016 at 04:31:11PM +0800, Xishi Qiu wrote:
> On 2016/7/18 16:05, Vlastimil Babka wrote:
>
> > On 07/18/2016 10:00 AM, Xishi Qiu wrote:
> >> On 2016/7/18 13:51, Joonsoo Kim wrote:
> >>
> >>> On Fri, Jul 15, 2016 at 10:47:06AM +0800, Xishi Q
On Mon, Jul 18, 2016 at 11:12:51AM +0200, Vlastimil Babka wrote:
> On 07/06/2016 07:09 AM, Joonsoo Kim wrote:
> >On Fri, Jun 24, 2016 at 11:54:29AM +0200, Vlastimil Babka wrote:
> >>A recent patch has added whole_zone flag that compaction sets when scanning
> >>starts
On Mon, Jul 18, 2016 at 02:21:02PM +0200, Vlastimil Babka wrote:
> On 07/18/2016 06:41 AM, Joonsoo Kim wrote:
> >On Fri, Jul 15, 2016 at 03:37:52PM +0200, Vlastimil Babka wrote:
> >>On 07/06/2016 07:39 AM, Joonsoo Kim wrote:
> >>>On Fri, Jun 24, 2016 at 11:54:32A
On Mon, Jul 18, 2016 at 08:51:16AM +0200, Vlastimil Babka wrote:
> On 07/18/2016 07:07 AM, Joonsoo Kim wrote:
> >On Thu, Jul 14, 2016 at 10:32:09AM +0200, Vlastimil Babka wrote:
> >>On 07/14/2016 07:23 AM, Joonsoo Kim wrote:
> >>
> >>I don't think there
On Fri, Jul 15, 2016 at 10:47:06AM +0800, Xishi Qiu wrote:
> alloc_migrate_target() is called from migrate_pages(), and the page
> is always from user space, so we can add __GFP_HIGHMEM directly.
No, not all migratable pages are from user space. For example, the
blockdev file cache has __GFP_MOVABLE a
On Mon, Jul 11, 2016 at 04:01:52PM -0700, David Rientjes wrote:
> On Thu, 30 Jun 2016, Joonsoo Kim wrote:
>
> > We need to find a root cause of this problem, first.
> >
> > I guess that this problem would happen when isolate_freepages_block()
> > early stop due to
On Thu, Jul 14, 2016 at 10:32:09AM +0200, Vlastimil Babka wrote:
> On 07/14/2016 07:23 AM, Joonsoo Kim wrote:
> >On Fri, Jul 08, 2016 at 11:11:47AM +0100, Mel Gorman wrote:
> >>On Fri, Jul 08, 2016 at 11:44:47AM +0900, Joonsoo Kim wrote:
> >>
> >>It doesn
On Thu, Jul 14, 2016 at 10:05:00AM +0100, Mel Gorman wrote:
> On Thu, Jul 14, 2016 at 02:23:32PM +0900, Joonsoo Kim wrote:
> > >
> > > > > > And, I'd like to know why max() is used for classzone_idx rather
> > > > > > than
> > > &
On Thu, Jul 14, 2016 at 09:48:41AM +0200, Vlastimil Babka wrote:
> On 07/14/2016 08:28 AM, Joonsoo Kim wrote:
> >On Fri, Jul 08, 2016 at 11:05:32AM +0100, Mel Gorman wrote:
> >>On Fri, Jul 08, 2016 at 11:28:52AM +0900, Joonsoo Kim wrote:
> >>>On Thu, Jul 07, 2016 a
On Fri, Jul 15, 2016 at 03:37:52PM +0200, Vlastimil Babka wrote:
> On 07/06/2016 07:39 AM, Joonsoo Kim wrote:
> > On Fri, Jun 24, 2016 at 11:54:32AM +0200, Vlastimil Babka wrote:
> >> During reclaim/compaction loop, compaction priority can be increased by the
> >> shou
On Tue, Jul 12, 2016 at 03:02:19PM +0200, Alexander Potapenko wrote:
> >> +
> >> /* Add alloc meta. */
> >> cache->kasan_info.alloc_meta_offset = *size;
> >> *size += sizeof(struct kasan_alloc_meta);
> >> @@ -392,17 +385,36 @@ void kasan_cache_create(struct kmem_cache *cache,
> >
On Fri, Jul 08, 2016 at 11:05:32AM +0100, Mel Gorman wrote:
> On Fri, Jul 08, 2016 at 11:28:52AM +0900, Joonsoo Kim wrote:
> > On Thu, Jul 07, 2016 at 10:48:08AM +0100, Mel Gorman wrote:
> > > On Thu, Jul 07, 2016 at 10:12:12AM +0900, Joonsoo Kim wrote:
> > > > &
On Fri, Jul 08, 2016 at 11:11:47AM +0100, Mel Gorman wrote:
> On Fri, Jul 08, 2016 at 11:44:47AM +0900, Joonsoo Kim wrote:
> > > > > @@ -3390,12 +3386,24 @@ static int kswapd(void *p)
> > > > >* We can speed up thawing tasks if we do
On Fri, Jul 08, 2016 at 09:11:32PM +0900, Sergey Senozhatsky wrote:
> Extend page_owner with free_pages() tracking functionality. This adds to the
> dump_page_owner() output an additional backtrace, that tells us what path has
> freed the page.
>
> Aa a trivial example, let's assume that do_some_f
On Fri, Jul 08, 2016 at 04:48:38PM -0400, Kees Cook wrote:
> On Fri, Jul 8, 2016 at 1:41 PM, Kees Cook wrote:
> > On Fri, Jul 8, 2016 at 12:20 PM, Christoph Lameter wrote:
> >> On Fri, 8 Jul 2016, Kees Cook wrote:
> >>
> >>> Is check_valid_pointer() making sure the pointer is within the usable
>
On Fri, Jul 08, 2016 at 12:36:50PM +0200, Alexander Potapenko wrote:
> For KASAN builds:
> - switch SLUB allocator to using stackdepot instead of storing the
>allocation/deallocation stacks in the objects;
> - change the freelist hook so that parts of the freelist can be put
>into the qua
On Thu, Jul 07, 2016 at 11:17:01AM +0100, Mel Gorman wrote:
> On Thu, Jul 07, 2016 at 10:20:39AM +0900, Joonsoo Kim wrote:
> > > @@ -3249,9 +3249,19 @@ static void kswapd_try_to_sleep(pg_data_t *pgdat,
> > > int order,
> > >
> > > prepare_to_wait(&pg
On Thu, Jul 07, 2016 at 10:48:08AM +0100, Mel Gorman wrote:
> On Thu, Jul 07, 2016 at 10:12:12AM +0900, Joonsoo Kim wrote:
> > > @@ -1402,6 +1406,11 @@ static unsigned long isolate_lru_pages(unsigned
> > > long nr_to_scan,
> > >
> > >
On Fri, Jul 01, 2016 at 09:01:17PM +0100, Mel Gorman wrote:
> Direct reclaim iterates over all zones in the zonelist and shrinking them
> but this is in conflict with node-based reclaim. In the default case,
> only shrink once per node.
>
> Signed-off-by: Mel Gorman
> Acked-by: Johannes Weiner
On Fri, Jul 01, 2016 at 09:01:28PM +0100, Mel Gorman wrote:
> kswapd is woken when zones are below the low watermark but the wakeup
> decision is not taking the classzone into account. Now that reclaim is
> node-based, it is only required to wake kswapd once per node and only if
> all zones are un
On Fri, Jul 01, 2016 at 09:01:16PM +0100, Mel Gorman wrote:
> kswapd goes through some complex steps trying to figure out if it should
> stay awake based on the classzone_idx and the requested order. It is
> unnecessarily complex and passes in an invalid classzone_idx to
> balance_pgdat(). What m
On Fri, Jul 01, 2016 at 09:01:12PM +0100, Mel Gorman wrote:
> This patch makes reclaim decisions on a per-node basis. A reclaimer knows
> what zone is required by the allocation request and skips pages from
> higher zones. In many cases this will be ok because it's a GFP_HIGHMEM
> request of some
On Fri, Jun 24, 2016 at 11:54:37AM +0200, Vlastimil Babka wrote:
> The compaction_ready() is used during direct reclaim for costly order
> allocations to skip reclaim for zones where compaction should be attempted
> instead. It's combining the standard compaction_suitable() check with its own
> wat
On Fri, Jun 24, 2016 at 11:54:33AM +0200, Vlastimil Babka wrote:
> The __compact_finished() function uses low watermark in a check that has to
> pass if the direct compaction is to finish and allocation should succeed. This
> is too pessimistic, as the allocation will typically use min watermark. I
On Fri, Jun 24, 2016 at 11:54:32AM +0200, Vlastimil Babka wrote:
> During reclaim/compaction loop, compaction priority can be increased by the
> should_compact_retry() function, but the current code is not optimal. Priority
> is only increased when compaction_failed() is true, which means that
> c
On Fri, Jun 24, 2016 at 11:54:29AM +0200, Vlastimil Babka wrote:
> A recent patch has added whole_zone flag that compaction sets when scanning
> starts from the zone boundary, in order to report that zone has been fully
> scanned in one attempt. For allocations that want to try really hard or canno
On Tue, Jul 05, 2016 at 02:01:29PM -0700, David Rientjes wrote:
> On Thu, 30 Jun 2016, Vlastimil Babka wrote:
>
> > > Note: I really dislike the low watermark check in split_free_page() and
> > > consider it poor software engineering. The function should split a free
> > > page, nothing more.
On Wed, Jun 29, 2016 at 02:47:20PM -0700, David Rientjes wrote:
> It's possible to isolate some freepages in a pageblock and then fail
> split_free_page() due to the low watermark check. In this case, we hit
> VM_BUG_ON() because the freeing scanner terminated early without a
> contended lock o
On Mon, Jul 04, 2016 at 12:49:08PM +0300, Andrey Ryabinin wrote:
>
>
> On 07/04/2016 07:31 AM, js1...@gmail.com wrote:
> > From: Joonsoo Kim
> >
> > There are two bugs on qlist_move_cache(). One is that qlist's tail
> > isn't set properly. curr-
On Mon, Jul 04, 2016 at 02:45:24PM +0900, Sergey Senozhatsky wrote:
> On (07/04/16 14:29), Joonsoo Kim wrote:
> > > > On Sun, Jul 03, 2016 at 01:16:56AM +0900, Sergey Senozhatsky wrote:
> > > > > Introduce PAGE_OWNER_TRACK_FREE config option to ext
On Mon, Jul 04, 2016 at 02:07:30PM +0900, Sergey Senozhatsky wrote:
> Hello,
>
> On (07/04/16 13:57), Joonsoo Kim wrote:
> > On Sun, Jul 03, 2016 at 01:16:56AM +0900, Sergey Senozhatsky wrote:
> > > Introduce PAGE_OWNER_TRACK_FREE config option to extend page own
On Sun, Jul 03, 2016 at 01:16:56AM +0900, Sergey Senozhatsky wrote:
> Introduce PAGE_OWNER_TRACK_FREE config option to extend page owner with
> free_pages() tracking functionality. This adds to the dump_page_owner()
> output an additional backtrace, that tells us what path has freed the
> page.
Hm
On Fri, Jul 01, 2016 at 07:38:18PM +0200, Dmitry Vyukov wrote:
> I've hit a GPF in depot_fetch_stack when it was given
> bogus stack handle. I think it was caused by a distant
> out-of-bounds that hit a different object, as the result
> we treated uninit garbage as stack handle. Maybe there is
> so
On Fri, Jul 01, 2016 at 05:17:10PM +0300, Andrey Ryabinin wrote:
>
>
> On 07/01/2016 05:02 PM, js1...@gmail.com wrote:
> > From: Joonsoo Kim
> >
> > There are two bugs on qlist_move_cache(). One is that qlist's tail
> > isn't set properly. curr-
2016-07-01 23:20 GMT+09:00 Dmitry Vyukov :
> On Fri, Jul 1, 2016 at 4:18 PM, Andrey Ryabinin
> wrote:
>>
>>
>> On 07/01/2016 05:15 PM, Dmitry Vyukov wrote:
>>> On Fri, Jul 1, 2016 at 4:09 PM, Joonsoo Kim wrote:
>>>
2016-07-01 23:03 GMT+09:00 Dmitry Vyukov :
> On Fri, Jul 1, 2016 at 4:02 PM, wrote:
>> From: Joonsoo Kim
>>
>> There are two bugs in qlist_move_cache(). One is that the qlist's tail
>> isn't set properly. curr->next can be NULL since it is a singly linked
>
2016-07-01 22:55 GMT+09:00 :
> From: Joonsoo Kim
>
> There are two bugs in qlist_move_cache(). One is that the qlist's tail
> isn't set properly. curr->next can be NULL since it is a singly linked
> list, and a NULL tail is invalid if there is one item on the qlist.
>
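The tail-update bug described above is a classic singly-linked-list pitfall. Below is a hypothetical userspace model, not the kernel's `qlist_move_cache()`; the `qnode`/`qlist`/`qlist_put` names are invented for the sketch:

```c
#include <stddef.h>

/*
 * Minimal model of a singly linked list tracked by head and tail.
 * When a node is moved onto a list, its stale next pointer must be
 * cleared and the destination's tail must be updated -- even when
 * the list was empty and will hold a single item, where a NULL tail
 * would be invalid. Hypothetical sketch, not kernel code.
 */
struct qnode {
	struct qnode *next;
};

struct qlist {
	struct qnode *head;
	struct qnode *tail;
};

void qlist_put(struct qlist *q, struct qnode *n)
{
	n->next = NULL;		/* n may carry a stale next from its old list */
	if (!q->head)
		q->head = n;
	else
		q->tail->next = n;
	q->tail = n;
}
```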
2016-07-01 17:11 GMT+09:00 Andrey Ryabinin :
>
>
> On 07/01/2016 10:53 AM, js1...@gmail.com wrote:
>> From: Joonsoo Kim
>>
>> If we move an item to the qlist's tail, we need to update the qlist's tail
>> properly. curr->next can be NULL since it is a singly lin
On Thu, Jun 30, 2016 at 09:42:36AM +0200, Vlastimil Babka wrote:
> On 06/30/2016 09:31 AM, Joonsoo Kim wrote:
> >On Wed, Jun 29, 2016 at 01:55:55PM -0700, David Rientjes wrote:
> >>On Wed, 29 Jun 2016, Vlastimil Babka wrote:
> >>
> >>>On 06/29/2016 03:
On Wed, Jun 29, 2016 at 01:55:55PM -0700, David Rientjes wrote:
> On Wed, 29 Jun 2016, Vlastimil Babka wrote:
>
> > On 06/29/2016 03:39 AM, David Rientjes wrote:
> > > It's possible that the freeing scanner can be consistently expensive if
> > > memory is well compacted toward the end of the zone
On Wed, Jun 29, 2016 at 11:12:08AM -0700, Paul E. McKenney wrote:
> On Wed, Jun 29, 2016 at 07:52:06PM +0200, Geert Uytterhoeven wrote:
> > Hi Paul,
> >
> > On Wed, Jun 29, 2016 at 6:44 PM, Paul E. McKenney
> > wrote:
> > > On Wed, Jun 29, 2016 at 04:54:44PM +0200, Geert Uytterhoeven wrote:
> > >
On Wed, Jun 29, 2016 at 12:05:09PM +0200, Vlastimil Babka wrote:
> On 06/29/2016 10:12 AM, Joonsoo Kim wrote:
> >>@@ -1035,8 +1034,12 @@ static void isolate_freepages(struct com
> >>continue;
> >>
> >>/* Found a block
lkml.kernel.org/r/alpine.deb.2.10.1606211820350.97...@chino.kir.corp.google.com
> Signed-off-by: David Rientjes
> Acked-by: Vlastimil Babka
> Cc: Minchan Kim
> Cc: Joonsoo Kim
> Cc: Mel Gorman
> Cc: Hugh Dickins
> Cc:
> Signed-off-by: Andrew Morton
> ---
>
>
On Tue, Jun 28, 2016 at 07:23:23PM +0800, Chen Feng wrote:
> Hello,
>
> On 2016/6/23 10:52, Joonsoo Kim wrote:
> > On Wed, Jun 22, 2016 at 05:23:06PM +0800, Chen Feng wrote:
> >> Hello,
> >>
> >> On 2016/5/26 14:22, js1...@gmail.com wrote:
> >>&
On Mon, Jun 27, 2016 at 09:25:45PM +1000, Balbir Singh wrote:
>
>
> On 26/05/16 16:22, js1...@gmail.com wrote:
> > From: Joonsoo Kim
> >
> > Hello,
> >
> > Changes from v2
> > o Rebase on next-20160525
> > o No other changes except followi
On Mon, Jun 27, 2016 at 05:12:43PM -0700, Paul E. McKenney wrote:
> On Wed, Jun 22, 2016 at 07:53:29PM -0700, Paul E. McKenney wrote:
> > On Wed, Jun 22, 2016 at 07:47:42PM -0700, Paul E. McKenney wrote:
> > > On Thu, Jun 23, 2016 at 11:37:56AM +0900, Joonsoo Kim wrote:
>
On Mon, Jun 27, 2016 at 10:24:05AM +0200, Vlastimil Babka wrote:
> On 05/26/2016 08:22 AM, js1...@gmail.com wrote:
> >From: Joonsoo Kim
> >
> >Until now, reserved pages for CMA are managed in the ordinary zones
> >where the page's pfn belongs. This approach has
On Mon, Jun 27, 2016 at 11:46:39AM +0200, Vlastimil Babka wrote:
> On 05/26/2016 08:22 AM, js1...@gmail.com wrote:
> >From: Joonsoo Kim
> >
> >Now, all reserved pages for the CMA region belong to ZONE_CMA
> >and there is no other type of page. Therefore, we don
On Mon, Jun 27, 2016 at 11:30:52AM +0200, Vlastimil Babka wrote:
> On 05/26/2016 08:22 AM, js1...@gmail.com wrote:
> >From: Joonsoo Kim
> >
> >Now, all reserved pages for the CMA region belong to ZONE_CMA
> >and it only serves GFP_HIGHUSER_MOVABLE. Therefore, w
On Fri, Jun 24, 2016 at 03:20:43PM +0200, Vlastimil Babka wrote:
> On 05/26/2016 08:22 AM, js1...@gmail.com wrote:
> >From: Joonsoo Kim
> >
> >Some zone thresholds depend on the number of managed pages in the zone.
> >When memory goes on/offline, they can be changed an
On Fri, Jun 24, 2016 at 04:19:35PM -0700, Andrew Morton wrote:
> On Fri, 17 Jun 2016 16:57:30 +0900 js1...@gmail.com wrote:
>
> > There was a bug reported by Sasha and minor fixes are needed,
> > so I am sending v3.
> >
> > o fix a bug reported by Sasha (mm/compaction: split freepages
> > without holding
On Wed, Jun 22, 2016 at 05:23:06PM +0800, Chen Feng wrote:
> Hello,
>
> On 2016/5/26 14:22, js1...@gmail.com wrote:
> > From: Joonsoo Kim
> >
> > Until now, reserved pages for CMA are managed in the ordinary zones
> > where the page's pfn belongs. This
On Wed, Jun 22, 2016 at 05:49:35PM -0700, Paul E. McKenney wrote:
> On Wed, Jun 22, 2016 at 12:08:59PM -0700, Paul E. McKenney wrote:
> > On Wed, Jun 22, 2016 at 05:01:35PM +0200, Geert Uytterhoeven wrote:
> > > On Wed, Jun 22, 2016 at 2:52 AM, Joonsoo Kim
> > > w
On Tue, Jun 21, 2016 at 05:53:22PM -0700, David Rientjes wrote:
> On Wed, 22 Jun 2016, Joonsoo Kim wrote:
>
> > On Tue, Jun 21, 2016 at 04:14:28PM -0700, a...@linux-foundation.org wrote:
> > >
> > > The patch titled
> > > Subject: mm/compaction: s
On Tue, Jun 21, 2016 at 05:54:06AM -0700, Paul E. McKenney wrote:
> On Tue, Jun 21, 2016 at 03:43:02PM +0900, Joonsoo Kim wrote:
> > On Mon, Jun 20, 2016 at 06:12:54AM -0700, Paul E. McKenney wrote:
> > > On Mon, Jun 20, 2016 at 03:39:43PM +0900, Joonsoo Kim wrote:
> > &g
On Tue, Jun 21, 2016 at 04:14:28PM -0700, a...@linux-foundation.org wrote:
>
> The patch titled
> Subject: mm/compaction: split freepages without holding the zone lock fix
> has been added to the -mm tree. Its filename is
> mm-compaction-split-freepages-without-holding-the-zone-lock-fix
On Mon, Jun 20, 2016 at 06:12:54AM -0700, Paul E. McKenney wrote:
> On Mon, Jun 20, 2016 at 03:39:43PM +0900, Joonsoo Kim wrote:
> > CCing Paul to ask some question.
> >
> > On Wed, Jun 15, 2016 at 10:39:47AM +0200, Geert Uytterhoeven wrote:
> > > Hi Joonsoo,
> &
On Tue, Jun 21, 2016 at 10:08:24AM +0800, Chen Feng wrote:
>
>
> On 2016/6/20 14:48, Joonsoo Kim wrote:
> > On Fri, Jun 17, 2016 at 03:38:49PM +0800, Chen Feng wrote:
> >> Hi Kim & feng,
> >>
> >> Thanks for sharing. Our platform also has the
On Fri, Jun 17, 2016 at 03:38:49PM +0800, Chen Feng wrote:
> Hi Kim & feng,
>
> Thanks for sharing. Our platform also has the same use case.
>
> We only let allocations with GFP_HIGHUSER_MOVABLE in memory.c use CMA memory.
>
> If we add zone_cma, it seems it can resolve the CMA migration issue.
On Fri, Jun 17, 2016 at 11:55:59AM +0200, Michal Hocko wrote:
> On Fri 17-06-16 16:25:26, Joonsoo Kim wrote:
> > On Mon, Jun 06, 2016 at 03:56:04PM +0200, Michal Hocko wrote:
> [...]
> > > I still have troubles to understand your numbers
> > >
> > > >
CCing Paul to ask some question.
On Wed, Jun 15, 2016 at 10:39:47AM +0200, Geert Uytterhoeven wrote:
> Hi Joonsoo,
>
> On Wed, Jun 15, 2016 at 4:23 AM, Joonsoo Kim wrote:
> > On Tue, Jun 14, 2016 at 12:45:14PM +0200, Geert Uytterhoeven wrote:
> >> On Tue, Jun 14, 2016
On Mon, Jun 06, 2016 at 05:21:45PM +0200, Vlastimil Babka wrote:
> On 05/26/2016 04:37 AM, js1...@gmail.com wrote:
> >From: Joonsoo Kim
> >
> >This patch is motivated from Hugh and Vlastimil's concern [1].
> >
> >There are two ways to get freepage from the al
On Thu, Jun 16, 2016 at 07:09:32PM +0900, Minchan Kim wrote:
> On Thu, Jun 16, 2016 at 05:42:11PM +0900, Sergey Senozhatsky wrote:
> > On (06/16/16 15:47), Minchan Kim wrote:
> > > > [..]
> > > > > > this is what I'm getting with the [zsmalloc: keep first object
> > > > > > offset in struct page]
On Wed, Jun 15, 2016 at 11:27:31AM +0900, Joonsoo Kim wrote:
> On Tue, Jun 14, 2016 at 03:10:21PM -0400, Sasha Levin wrote:
> > On 06/14/2016 01:52 AM, Joonsoo Kim wrote:
> > > On Mon, Jun 13, 2016 at 04:31:15PM -0400, Sasha Levin wrote:
> > >> > On 05/25/2016
On Mon, Jun 06, 2016 at 03:56:04PM +0200, Michal Hocko wrote:
> On Thu 26-05-16 11:37:54, Joonsoo Kim wrote:
> > From: Joonsoo Kim
> >
> > Currently, we store each page's allocation stacktrace on corresponding
> > page_ext structure and it requires a lot of
On Tue, Jun 14, 2016 at 03:10:21PM -0400, Sasha Levin wrote:
> On 06/14/2016 01:52 AM, Joonsoo Kim wrote:
> > On Mon, Jun 13, 2016 at 04:31:15PM -0400, Sasha Levin wrote:
> >> > On 05/25/2016 10:37 PM, js1...@gmail.com wrote:
> >>> > > From: Joonsoo Kim
>
On Tue, Jun 14, 2016 at 12:45:14PM +0200, Geert Uytterhoeven wrote:
> Hi Joonsoo,
>
> On Tue, Jun 14, 2016 at 10:11 AM, Joonsoo Kim wrote:
> > On Tue, Jun 14, 2016 at 09:31:23AM +0200, Geert Uytterhoeven wrote:
> >> On Tue, Jun 14, 2016 at 8:24 AM, Joonsoo Kim
> >
On Tue, Jun 14, 2016 at 09:31:23AM +0200, Geert Uytterhoeven wrote:
> Hi Joonsoo,
>
> On Tue, Jun 14, 2016 at 8:24 AM, Joonsoo Kim wrote:
> > On Mon, Jun 13, 2016 at 09:43:13PM +0200, Geert Uytterhoeven wrote:
> >> On Tue, Apr 12, 2016 at 6:51 AM, wrote:
> >&g
On Mon, Jun 13, 2016 at 09:43:13PM +0200, Geert Uytterhoeven wrote:
> Hi Joonsoo,
Hello,
>
> On Tue, Apr 12, 2016 at 6:51 AM, wrote:
> > From: Joonsoo Kim
> >
> > To check whether free objects exist or not precisely, we need to grab a
> > lock. But, accuracy