On Mon, Jun 13, 2016 at 04:31:15PM -0400, Sasha Levin wrote:
> On 05/25/2016 10:37 PM, js1...@gmail.com wrote:
> > From: Joonsoo Kim
> >
> > We don't need to split freepages while holding the zone lock. It will cause
> > more contention on the zone lock, so it is not desirable.
2016-06-03 19:23 GMT+09:00 Vlastimil Babka :
> On 05/26/2016 04:37 AM, js1...@gmail.com wrote:
>>
>> From: Joonsoo Kim
>>
>> It's not necessary to initialize page_owner while holding the zone lock.
>> It would cause more contention on the zone lock although
2016-06-03 19:10 GMT+09:00 Vlastimil Babka :
> On 05/26/2016 04:37 AM, js1...@gmail.com wrote:
>>
>> From: Joonsoo Kim
>>
>> We don't need to split freepages while holding the zone lock. It will cause
>> more contention on the zone lock, so it is not desirable.
>>
On Tue, May 31, 2016 at 02:29:24PM +0200, Vlastimil Babka wrote:
> On 05/31/2016 02:07 PM, Vlastimil Babka wrote:
> >On 05/31/2016 08:37 AM, Joonsoo Kim wrote:
> >>>@@ -3695,22 +3695,22 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned
On Tue, May 31, 2016 at 09:59:36AM +0200, Vlastimil Babka wrote:
> On 05/31/2016 08:20 AM, Joonsoo Kim wrote:
> >>>From 68f09f1d4381c7451238b4575557580380d8bf30 Mon Sep 17 00:00:00 2001
> >>From: Vlastimil Babka
> >>Date: Fri, 29 Apr 2016 11:51:17 +0200
On Tue, May 10, 2016 at 09:36:02AM +0200, Vlastimil Babka wrote:
> During reclaim/compaction loop, compaction priority can be increased by the
> should_compact_retry() function, but the current code is not optimal for
> several reasons:
>
> - priority is only increased when compaction_failed() is
On Tue, May 10, 2016 at 09:35:53AM +0200, Vlastimil Babka wrote:
> After __alloc_pages_slowpath() sets up new alloc_flags and wakes up kswapd, it
> first tries get_page_from_freelist() with the new alloc_flags, as it may
> succeed e.g. due to using min watermark instead of low watermark. This attem
On Tue, May 10, 2016 at 02:30:11PM +0200, Vlastimil Babka wrote:
> On 05/10/2016 01:28 PM, Tetsuo Handa wrote:
> > Vlastimil Babka wrote:
> >> In __alloc_pages_slowpath(), alloc_flags doesn't change after it's
> >> initialized,
> >> so move the initialization above the retry: label. Also make the
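The refactoring being discussed, hoisting loop-invariant setup above a `retry:` label, can be sketched in a standalone form. This is a hypothetical illustration of the pattern, not `__alloc_pages_slowpath()` itself; `try_alloc()` and `compute_flags()` are stand-ins:

```c
#include <assert.h>
#include <stdbool.h>

static int attempts;

/* Stand-in for get_page_from_freelist(): pretend the third try works. */
static bool try_alloc(unsigned int flags)
{
    (void)flags;
    attempts++;
    return attempts >= 3;
}

/* Stand-in for gfp_to_alloc_flags(): the result never changes. */
static unsigned int compute_flags(void)
{
    return 0x1;
}

static bool slowpath(void)
{
    /* Loop-invariant setup hoisted above the retry label, so it is
     * computed once instead of on every retry iteration. */
    unsigned int flags = compute_flags();
    int tries = 0;

retry:
    if (try_alloc(flags))
        return true;
    if (++tries < 5)
        goto retry;
    return false;
}
```

The point of the patch is purely structural: since `alloc_flags` never changes after initialization, recomputing it inside the retry loop is wasted work.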
On Fri, May 27, 2016 at 03:27:02PM +0800, Feng Tang wrote:
> On Fri, May 27, 2016 at 02:42:18PM +0800, Joonsoo Kim wrote:
> > On Fri, May 27, 2016 at 02:25:27PM +0800, Feng Tang wrote:
> > > On Fri, May 27, 2016 at 01:28:20PM +0800, Joonsoo Kim wrote:
> > > > On T
On Fri, May 27, 2016 at 05:11:08PM +0900, Minchan Kim wrote:
> On Fri, May 27, 2016 at 03:08:39PM +0900, Joonsoo Kim wrote:
> > On Fri, May 27, 2016 at 02:14:32PM +0900, Minchan Kim wrote:
> > > On Thu, May 26, 2016 at 04:15:28PM -0700, Shi, Yang wrote:
> > > > On
On Fri, May 27, 2016 at 02:25:27PM +0800, Feng Tang wrote:
> On Fri, May 27, 2016 at 01:28:20PM +0800, Joonsoo Kim wrote:
> > On Thu, May 26, 2016 at 04:04:54PM +0800, Feng Tang wrote:
> > > On Thu, May 26, 2016 at 02:22:22PM +0800, js1...@gmail.com wrote:
:16:08AM -0700, Yang Shi wrote:
> > >>>Per the discussion with Joonsoo Kim [1], we need check the return value
> > >>>of
> > >>>lookup_page_ext() for all call sites since it might return NULL in some
> > >>>cases,
On Fri, May 27, 2016 at 09:42:24AM +0800, Chen Feng wrote:
> Hi Joonsoo,
> > -/* Free whole pageblock and set its migration type to MIGRATE_CMA. */
> > +/* Free whole pageblock and set its migration type to MIGRATE_MOVABLE. */
> > void __init init_cma_reserved_pageblock(struct page *page)
> > {
>
On Thu, May 26, 2016 at 04:04:54PM +0800, Feng Tang wrote:
> On Thu, May 26, 2016 at 02:22:22PM +0800, js1...@gmail.com wrote:
> > From: Joonsoo Kim
>
> Hi Joonsoo,
>
> Nice work!
Thanks!
> > FYI, there is another attempt [3] trying to solve this problem in lkml
On Thu, May 26, 2016 at 10:12:16AM +0900, Sergey Senozhatsky wrote:
> On (05/26/16 09:43), Joonsoo Kim wrote:
> [..]
> > Hello, Sergey.
> >
> > I don't look at each patch deeply but nice work! I didn't notice that
> > recent zram changes make things simpler
2016-05-25 6:15 GMT+09:00 Thomas Garnier :
> Implements Freelist randomization for the SLUB allocator. It was
> previous implemented for the SLAB allocator. Both use the same
> configuration option (CONFIG_SLAB_FREELIST_RANDOM).
>
> The list is randomized during initialization of a new set of pages
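The randomization described in this thread can be illustrated with a minimal sketch. This is a hypothetical stand-in for what CONFIG_SLAB_FREELIST_RANDOM does conceptually, not the kernel implementation; the kernel precomputes these permutations at boot and derives entropy from its own RNG rather than rand():

```c
#include <assert.h>
#include <stdlib.h>

/* Precompute a randomized hand-out order for the 'count' objects of a
 * new slab page: a random permutation of the object indices. */
static void init_random_freelist(unsigned int *list, unsigned int count,
                                 unsigned int seed)
{
    unsigned int i;

    for (i = 0; i < count; i++)
        list[i] = i;

    /* Fisher-Yates shuffle; every index still appears exactly once,
     * so allocation correctness is unaffected, only the order. */
    srand(seed);
    for (i = count - 1; i > 0; i--) {
        unsigned int j = (unsigned int)rand() % (i + 1);
        unsigned int tmp = list[i];
        list[i] = list[j];
        list[j] = tmp;
    }
}
```

Because the result is a permutation, the hardening changes which object an attacker's next allocation lands on without changing how many objects fit in the slab.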
On Tue, May 24, 2016 at 02:15:22PM -0700, Thomas Garnier wrote:
> This commit reorganizes the previous SLAB freelist randomization to
> prepare for the SLUB implementation. It moves functions that will be
> shared to slab_common. It also move the definition of freelist_idx_t in
> the slab_def heade
On Wed, May 25, 2016 at 11:29:59PM +0900, Sergey Senozhatsky wrote:
> Hello,
>
> This has started as a 'add zlib support' work, but after some
> thinking I saw no blockers for a bigger change -- a switch to
> crypto API.
>
> We don't have an idle zstreams list anymore and our write path
> now
Ccing Mel.
On Wed, May 25, 2016 at 03:36:48PM -0700, Shi, Yang wrote:
> On 5/25/2016 3:23 PM, Andrew Morton wrote:
> >On Wed, 25 May 2016 14:00:07 -0700 Yang Shi wrote:
> >
> >>register_page_bootmem_info_node() is invoked in mem_init(), so it will be
> >>called before page_alloc_init_late() if CO
On Fri, May 20, 2016 at 10:00:06AM -0700, Shi, Yang wrote:
> On 5/19/2016 7:40 PM, Joonsoo Kim wrote:
> >2016-05-20 2:18 GMT+09:00 Shi, Yang :
> >>On 5/18/2016 5:28 PM, Joonsoo Kim wrote:
> >>>
> >>>Vlastiml, thanks for ccing me on original bug report.
>
On Fri, May 20, 2016 at 09:24:35AM -0700, Thomas Garnier wrote:
> On Thu, May 19, 2016 at 7:15 PM, Joonsoo Kim wrote:
> > 2016-05-20 5:20 GMT+09:00 Thomas Garnier :
> >> I ran the test given by Joonsoo and it gave me these minimum cycles
> >> per size across 20 usage:
2016-05-20 2:18 GMT+09:00 Shi, Yang :
> On 5/18/2016 5:28 PM, Joonsoo Kim wrote:
>>
>> Vlastiml, thanks for ccing me on original bug report.
>>
>> On Wed, May 18, 2016 at 03:23:45PM -0700, Yang Shi wrote:
>>>
>>> When enabling the below kernel conf
2016-05-20 5:20 GMT+09:00 Thomas Garnier :
> I ran the test given by Joonsoo and it gave me these minimum cycles
> per size across 20 usage:
I can't understand what you did here. Maybe it's due to my poor English.
Please explain more. Did you run a single-thread test? Why minimum cycles
rather than average?
On Wed, May 18, 2016 at 12:12:13PM -0700, Thomas Garnier wrote:
> I thought the mix of slab_test & kernbench would show a diverse
> picture on perf data. Is there another test that you think would be
> useful?
Single-thread testing on slab_test would be meaningful because it also
touches the slowpath
On Wed, May 18, 2016 at 09:15:13PM +0800, lunar12 lunartwix wrote:
> 2016-05-18 16:48 GMT+08:00 Michal Hocko :
> > [CC linux-mm and some usual suspects]
Michal, Thanks.
> >
> > On Tue 17-05-16 23:37:55, lunar12 lunartwix wrote:
> >> A 4MB dma_alloc_coherent in kernel after malloc(2*1024) 40 time
Vlastiml, thanks for ccing me on original bug report.
On Wed, May 18, 2016 at 03:23:45PM -0700, Yang Shi wrote:
> When enabling the below kernel configs:
>
> CONFIG_DEFERRED_STRUCT_PAGE_INIT
> CONFIG_DEBUG_PAGEALLOC
> CONFIG_PAGE_EXTENSION
> CONFIG_DEBUG_VM
>
> kernel bootup may fail due to the
On Thu, May 12, 2016 at 11:23:34AM +0900, Joonsoo Kim wrote:
> On Tue, May 10, 2016 at 11:43:48AM +0200, Michal Hocko wrote:
> > On Tue 10-05-16 15:41:04, Joonsoo Kim wrote:
> > > 2016-05-05 3:16 GMT+09:00 Michal Hocko :
> > > > On Wed 04-05-16 23:32:31, Joonsoo Kim
On Tue, May 10, 2016 at 05:13:12PM +0200, Vlastimil Babka wrote:
> On 05/03/2016 07:23 AM, js1...@gmail.com wrote:
> >From: Joonsoo Kim
> >
> >Currently, copy_page_owner() doesn't copy all the owner information.
> >It skips last_migrate_reason because copy_page_
On Tue, May 10, 2016 at 04:56:45PM +0200, Vlastimil Babka wrote:
> On 05/03/2016 07:22 AM, js1...@gmail.com wrote:
> > From: Joonsoo Kim
> >
> > We don't need to split freepages while holding the zone lock. It will cause
> > more contention on the zone lock, so it is not desirable.
On Tue, May 10, 2016 at 11:43:48AM +0200, Michal Hocko wrote:
> On Tue 10-05-16 15:41:04, Joonsoo Kim wrote:
> > 2016-05-05 3:16 GMT+09:00 Michal Hocko :
> > > On Wed 04-05-16 23:32:31, Joonsoo Kim wrote:
> > >> 2016-05-04 17:47 GMT+09:00 Michal Hocko :
> [...]
>
2016-05-10 16:09 GMT+09:00 Vlastimil Babka :
> On 05/10/2016 08:41 AM, Joonsoo Kim wrote:
>>
>> You applied a band-aid for CONFIG_COMPACTION and fixed some reported
>> problems, but it is also fragile. Assume almost all pageblocks' skip bits
>> are set. In this
2016-05-05 4:40 GMT+09:00 Michal Hocko :
> On Thu 05-05-16 00:30:35, Joonsoo Kim wrote:
>> 2016-05-04 18:21 GMT+09:00 Michal Hocko :
> [...]
>> > Do we really consume 512B of stack during reclaim? That sounds more than
>> > worrying to me.
>>
>> Hmm...I ch
2016-05-05 3:16 GMT+09:00 Michal Hocko :
> On Wed 04-05-16 23:32:31, Joonsoo Kim wrote:
>> 2016-05-04 17:47 GMT+09:00 Michal Hocko :
>> > On Wed 04-05-16 14:45:02, Joonsoo Kim wrote:
>> >> On Wed, Apr 20, 2016 at 03:47:13PM -0400, Michal Hocko wrote:
>> >&g
2016-05-05 0:30 GMT+09:00 Joonsoo Kim :
> 2016-05-04 18:21 GMT+09:00 Michal Hocko :
>> On Wed 04-05-16 11:14:50, Joonsoo Kim wrote:
>>> On Tue, May 03, 2016 at 10:53:56AM +0200, Michal Hocko wrote:
>>> > On Tue 03-05-16 14:23:04, Joonsoo Kim wrote:
>> [...]
>
2016-05-04 18:23 GMT+09:00 Michal Hocko :
> On Wed 04-05-16 11:35:00, Joonsoo Kim wrote:
> [...]
>> Oops... I thought about it more deeply and changed my mind. In the recursion
>> case, more than 1KB of stack is consumed and it would be a problem. I think
>> the best approach is using preall
2016-05-04 18:21 GMT+09:00 Michal Hocko :
> On Wed 04-05-16 11:14:50, Joonsoo Kim wrote:
>> On Tue, May 03, 2016 at 10:53:56AM +0200, Michal Hocko wrote:
>> > On Tue 03-05-16 14:23:04, Joonsoo Kim wrote:
> [...]
>> > > Memory saving looks as follow
2016-05-04 18:04 GMT+09:00 Michal Hocko :
> On Wed 04-05-16 15:27:48, Joonsoo Kim wrote:
>> On Wed, Apr 20, 2016 at 03:47:27PM -0400, Michal Hocko wrote:
> [...]
>> > +bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
>> > + int alloc_
2016-05-04 17:34 GMT+09:00 Alexander Potapenko :
> On Tue, May 3, 2016 at 7:13 AM, wrote:
>> From: Joonsoo Kim
>>
>> Recently, we allowed saving a stacktrace whose hashed value is 0.
>> This causes the problem that stackdepot can return 0 even on success.
2016-05-04 17:56 GMT+09:00 Michal Hocko :
> On Wed 04-05-16 15:31:12, Joonsoo Kim wrote:
>> On Wed, May 04, 2016 at 03:01:24PM +0900, Joonsoo Kim wrote:
>> > On Wed, Apr 20, 2016 at 03:47:25PM -0400, Michal Hocko wrote:
> [...]
>> > > @@ -3408,6 +3456,17 @@ __al
2016-05-04 17:53 GMT+09:00 Michal Hocko :
> On Wed 04-05-16 15:01:24, Joonsoo Kim wrote:
>> On Wed, Apr 20, 2016 at 03:47:25PM -0400, Michal Hocko wrote:
> [...]
>
> Please try to trim your responses it makes it much easier to follow the
> discussion
Okay.
>
2016-05-04 17:47 GMT+09:00 Michal Hocko :
> On Wed 04-05-16 14:45:02, Joonsoo Kim wrote:
>> On Wed, Apr 20, 2016 at 03:47:13PM -0400, Michal Hocko wrote:
>> > Hi,
>> >
>> > This is v6 of the series. The previous version was posted [1]. The
>> > code has
On Wed, May 04, 2016 at 10:12:43AM +0200, Vlastimil Babka wrote:
> On 05/04/2016 07:45 AM, Joonsoo Kim wrote:
> >I still don't agree with some parts of this patchset that deal with
> >!costly orders. As you know, there were two regression reports from Hugh
> >and Aaron and
On Wed, May 04, 2016 at 03:01:24PM +0900, Joonsoo Kim wrote:
> On Wed, Apr 20, 2016 at 03:47:25PM -0400, Michal Hocko wrote:
> > From: Michal Hocko
> >
> > should_reclaim_retry will give up retries for higher order allocations
> > if none of the eligible zones has an
On Wed, Apr 20, 2016 at 03:47:27PM -0400, Michal Hocko wrote:
> From: Michal Hocko
>
> "mm: consider compaction feedback also for costly allocation" has
> removed the upper bound for the reclaim/compaction retries based on the
> number of reclaimed pages for costly orders. While this is desirable
On Wed, Apr 20, 2016 at 03:47:25PM -0400, Michal Hocko wrote:
> From: Michal Hocko
>
> should_reclaim_retry will give up retries for higher order allocations
> if none of the eligible zones has any requested or higher order pages
> available even if we pass the watermark check for order-0. This is
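The check being described, giving up the retry when no eligible zone has a free page of the requested order or higher, can be sketched as follows. This is an illustrative model of the idea, not the kernel's should_reclaim_retry(); `nr_free` stands in for a zone's per-order free_area counts:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_ORDER 11

/* Even if the order-0 watermark check passes, retrying a high-order
 * allocation is pointless unless some page of the requested order or
 * above actually exists in the zone's free lists. */
static bool zone_has_requested_order(const unsigned long nr_free[MAX_ORDER],
                                     unsigned int order)
{
    unsigned int o;

    for (o = order; o < MAX_ORDER; o++)
        if (nr_free[o] > 0)
            return true;
    return false;
}
```

This captures why plenty of order-0 memory is not sufficient grounds to keep retrying an order-3 allocation.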
On Wed, Apr 20, 2016 at 03:47:13PM -0400, Michal Hocko wrote:
> Hi,
>
> This is v6 of the series. The previous version was posted [1]. The
> code hasn't changed much since then. I have found one old standing
> bug (patch 1) which just got much more severe and visible with this
> series. Other than
On Wed, May 04, 2016 at 11:14:50AM +0900, Joonsoo Kim wrote:
> On Tue, May 03, 2016 at 10:53:56AM +0200, Michal Hocko wrote:
> > On Tue 03-05-16 14:23:04, Joonsoo Kim wrote:
> > > From: Joonsoo Kim
> > >
> > > Currently, we store each page's allocation s
On Tue, May 03, 2016 at 10:53:56AM +0200, Michal Hocko wrote:
> On Tue 03-05-16 14:23:04, Joonsoo Kim wrote:
> > From: Joonsoo Kim
> >
> > Currently, we store each page's allocation stacktrace on corresponding
> > page_ext structure and it requires a lot of
On Mon, May 02, 2016 at 09:49:47AM +0200, Vlastimil Babka wrote:
> On 05/02/2016 08:14 AM, Joonsoo Kim wrote:
> >>>> >Although it's separate issue, I should mentioned one thing. Related to
> >>>> >I/O pinning issue, ZONE_CMA don't get blockdev all
seems to be enough merit to go this way. Anyway, I will answer your
comments inline.
On Fri, Apr 29, 2016 at 10:29:02AM +0100, Mel Gorman wrote:
> On Fri, Apr 29, 2016 at 03:51:45PM +0900, Joonsoo Kim wrote:
> > Hello, Mel.
> >
> > IIUC, you may miss that alloc_contig_range(
On Wed, Apr 27, 2016 at 10:39:29AM -0500, Christoph Lameter wrote:
> On Tue, 26 Apr 2016, Andrew Morton wrote:
>
> > : CONFIG_FREELIST_RANDOM bugs me a bit - "freelist" is so vague.
> > : CONFIG_SLAB_FREELIST_RANDOM would be better. I mean, what Kconfig
> > : identifier could be used for implemen
On Tue, Apr 26, 2016 at 05:38:18PM +0800, Rui Teng wrote:
> On 4/25/16 1:21 PM, js1...@gmail.com wrote:
> >From: Joonsoo Kim
> >
> >Attached cover-letter:
> >
> >This series try to solve problems of current CMA implementation.
> >
> >CMA is introdu
On Thu, Apr 28, 2016 at 03:46:33PM +0800, Rui Teng wrote:
> On 4/25/16 1:21 PM, js1...@gmail.com wrote:
> >From: Joonsoo Kim
> >
> >Some zone thresholds depend on the number of managed pages in the zone.
> >When memory goes online/offline, it can be changed an
02:36:54PM +0900, Joonsoo Kim wrote:
> > > Hello,
> > >
> > > Changes from v1
> > > o Separate some patches which deserve to submit independently
> > > o Modify description to reflect current kernel state
> > > (e.g. high-order watermark
On Tue, Apr 26, 2016 at 09:40:45PM +0200, Vlastimil Babka wrote:
> On 04/26/2016 02:55 AM, Joonsoo Kim wrote:
> >On Mon, Apr 25, 2016 at 03:35:50PM +0200, Vlastimil Babka wrote:
> >>@@ -846,9 +845,11 @@ isolate_migratepages_block(struct compact_control *cc,
> [vba...@suse.cz: expanded the changelog]
> Fixes: edc2ca612496 ("mm, compaction: move pageblock checks up from
> isolate_migratepages_range()")
> Cc: sta...@vger.kernel.org
> Cc: Joonsoo Kim
> Signed-off-by: Hugh Dickins
> Signed-off-by: Vlastimil Babka
Acked-by: Joonsoo Kim
Thanks.
On Tue, Apr 26, 2016 at 04:17:43PM -0700, Andrew Morton wrote:
> On Tue, 26 Apr 2016 09:21:10 -0700 Thomas Garnier wrote:
>
> > Provides an optional config (CONFIG_FREELIST_RANDOM) to randomize the
> > SLAB freelist. The list is randomized during initialization of a new set
> > of pages. The orde
On Mon, Apr 25, 2016 at 03:35:50PM +0200, Vlastimil Babka wrote:
> From: Hugh Dickins
>
> /proc/sys/vm/stat_refresh warns nr_isolated_anon and nr_isolated_file
> go increasingly negative under compaction: which would add delay when
> should be none, or no delay when should delay. putback_movable
On Tue, Apr 12, 2016 at 01:50:59PM +0900, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> It can be reused in other places, so factor it out. The following patch
> will use it.
>
> Signed-off-by: Joonsoo Kim
>
On Mon, Apr 25, 2016 at 01:39:23PM -0700, Thomas Garnier wrote:
> Provides an optional config (CONFIG_FREELIST_RANDOM) to randomize the
> SLAB freelist. The list is randomized during initialization of a new set
> of pages. The order on different freelist sizes is pre-computed at boot
> for performa
On Mon, Apr 25, 2016 at 02:21:04PM +0900, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> Hello,
>
> Changes from v1
> o Separate some patches which deserve to submit independently
> o Modify description to reflect current kernel state
> (e.g. high-order watermark prob
On Fri, Apr 15, 2016 at 10:10:33AM -0400, valdis.kletni...@vt.edu wrote:
> On Thu, 14 Apr 2016 10:35:47 +0900, Joonsoo Kim said:
> > On Wed, Apr 13, 2016 at 08:29:46PM -0400, Valdis Kletnieks wrote:
> > > I'm seeing my laptop crash/wedge up/something during very early
>
On Tue, Apr 19, 2016 at 09:44:54AM -0700, Thomas Garnier wrote:
> On Tue, Apr 19, 2016 at 12:15 AM, Joonsoo Kim wrote:
> > On Mon, Apr 18, 2016 at 10:14:39AM -0700, Thomas Garnier wrote:
> >> Provides an optional config (CONFIG_FREELIST_RANDOM) to randomize the
> >>
Ccing Stephen.
On Wed, Apr 20, 2016 at 08:11:41AM +0100, Jon Hunter wrote:
> Hi Joonsoo,
>
> On 11/04/16 12:44, Jon Hunter wrote:
> > On 11/04/16 03:02, Joonsoo Kim wrote:
> >> On Fri, Apr 08, 2016 at 03:39:20PM -0500, Nishanth Menon wrote:
> >>> Hi,
> >
On Mon, Apr 18, 2016 at 10:14:39AM -0700, Thomas Garnier wrote:
> Provides an optional config (CONFIG_FREELIST_RANDOM) to randomize the
> SLAB freelist. The list is randomized during initialization of a new set
> of pages. The order on different freelist sizes is pre-computed at boot
> for performa
2016-04-15 4:22 GMT+09:00 :
> On Thu, 14 Apr 2016 10:35:47 +0900, Joonsoo Kim said:
>
>> My fault. It should be assigned every time. Please test the patch below.
>> I will send it with proper SOB after you confirm the problem disappear.
>> Thanks for report and analysis!
On Tue, Apr 12, 2016 at 11:38:39AM -0500, Christoph Lameter wrote:
> On Tue, 12 Apr 2016, js1...@gmail.com wrote:
>
> > @@ -,6 +2241,7 @@ static void drain_cpu_caches(struct kmem_cache
> > *cachep)
> > {
> > struct kmem_cache_node *n;
> > int node;
> > + LIST_HEAD(list);
> >
> >
git bisect points at:
>
> commit 7a6bacb133752beacb76775797fd550417e9d3a2
> Author: Joonsoo Kim
> Date: Thu Apr 7 13:59:39 2016 +1000
>
> mm/slab: factor out kmem_cache_node initialization code
>
> It can be reused in other places, so factor it out. The following patch
On Tue, Apr 12, 2016 at 09:24:34AM +0200, Jesper Dangaard Brouer wrote:
> On Tue, 12 Apr 2016 13:51:06 +0900
> js1...@gmail.com wrote:
>
> > From: Joonsoo Kim
> >
> > To check whther free objects exist or not precisely, we need to grab a
>^^
>
On Mon, Apr 11, 2016 at 04:51:47PM +0200, Alexander Potapenko wrote:
> On Mon, Apr 11, 2016 at 4:39 PM, Alexander Potapenko
> wrote:
> > On Mon, Apr 11, 2016 at 9:44 AM, Joonsoo Kim wrote:
> >> On Mon, Mar 14, 2016 at 11:43:43AM +0100, Alexander Potapenko wrote:
On Mon, Apr 11, 2016 at 10:17:13AM +0200, Vlastimil Babka wrote:
> On 04/11/2016 09:05 AM, Joonsoo Kim wrote:
> >On Thu, Mar 31, 2016 at 10:50:32AM +0200, Vlastimil Babka wrote:
> >>The goal here is to reduce latency (and increase success) of direct async
> >>compaction
On Mon, Mar 14, 2016 at 11:43:43AM +0100, Alexander Potapenko wrote:
> +depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
> + gfp_t alloc_flags)
> +{
> + u32 hash;
> + depot_stack_handle_t retval = 0;
> + struct stack_record *found = NULL,
On Thu, Mar 31, 2016 at 10:50:36AM +0200, Vlastimil Babka wrote:
> The goal of direct compaction is to quickly make a high-order page available
> for the pending allocation. The free page scanner can add significant latency
> when searching for migration targets, although to succeed the compaction,
On Thu, Mar 31, 2016 at 10:50:32AM +0200, Vlastimil Babka wrote:
> The goal here is to reduce latency (and increase success) of direct async
> compaction by making it focus more on the goal of creating a high-order page,
> at some expense of thoroughness.
>
> This is based on an older attempt [1]
first bad commit: [2b629704a2b6a5b239f23750e5517a9d8c3a4e8c]
> mm/slab: clean-up kmem_cache_node setup
>
Hello,
I made a mistake in that patch. Could you try the one below on
top of it?
Thanks.
->8
>From d3af3cc409527e9be6beb62ea395cde67f3c5029 Mon Sep 17 00:00:00 2001
On Fri, Apr 01, 2016 at 11:10:07AM +0900, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> ZONE_MOVABLE could be treated as highmem so we need to consider it for
> accurate calculation of dirty pages. And, in following patches, ZONE_CMA
> will be introduced and it can be treated
On Mon, Mar 28, 2016 at 03:53:01PM -0700, Laura Abbott wrote:
> The per-cpu slab is designed to be the primary path for allocation in SLUB
> since it assumed allocations will go through the fast path if possible.
> When debugging is enabled, the fast path is disabled and per-cpu
> allocations are n
On Thu, Mar 31, 2016 at 01:53:14PM +0300, Nikolay Borisov wrote:
>
>
> On 03/28/2016 08:26 AM, js1...@gmail.com wrote:
> > From: Joonsoo Kim
> >
> > Major kmem_cache metadata in slab subsystem is synchronized with
> > the slab_mutex. In SLAB, if some of them is
--
> From: Andrew Morton
> Subject: mm-rename-_count-field-of-the-struct-page-to-_refcount-fix-fix
>
> Documentation/vm/transhuge.txt too
Hello, Andrew.
There is one more site to fix.
Thanks.
----->8---
>From 046a1e1934e1fa490cc4e36bc8d556b28a8707ea Mon Sep 17 00:00:00 2001
On Mon, Mar 28, 2016 at 08:05:41PM -0500, Christoph Lameter wrote:
> On Mon, 28 Mar 2016, js1...@gmail.com wrote:
>
> > From: Joonsoo Kim
> >
> > Slab color doesn't strictly need to be changed, because locking
> > for changing slab color could cause
On Tue, Mar 29, 2016 at 12:23:13PM -0700, Andrew Morton wrote:
> On Tue, 29 Mar 2016 11:27:47 +0200 Vlastimil Babka wrote:
>
> > > v2: change more _count usages to _refcount
> >
> > There's also
> > Documentation/vm/transhuge.txt talking about ->_count
> > include/linux/mm.h: * requires to
On Mon, Mar 28, 2016 at 08:03:16PM -0500, Christoph Lameter wrote:
> On Mon, 28 Mar 2016, js1...@gmail.com wrote:
>
> > From: Joonsoo Kim
> >
> > Currently, the determination to free a slab is made whenever a free object
> > is put into the slab. This has a problem
On Mon, Mar 28, 2016 at 07:58:29PM -0500, Christoph Lameter wrote:
> On Mon, 28 Mar 2016, js1...@gmail.com wrote:
>
> > * This initializes kmem_cache_node or resizes various caches for all
> > nodes.
> > */
> > -static int alloc_kmem_cache_node(struct kmem_cache *cachep, gfp_t gfp)
> > +stati
On Mon, Mar 28, 2016 at 07:56:15PM -0500, Christoph Lameter wrote:
> On Mon, 28 Mar 2016, js1...@gmail.com wrote:
>
> > From: Joonsoo Kim
> > - spin_lock_irq(&n->list_lock);
> > - n->free_limit =
> > -
On Mon, Mar 28, 2016 at 07:50:36PM -0500, Christoph Lameter wrote:
> On Mon, 28 Mar 2016, js1...@gmail.com wrote:
>
> > Major kmem_cache metadata in slab subsystem is synchronized with
> > the slab_mutex. In SLAB, if some of them is changed, node's shared
> > array cache would be freed and re-popu
On Mon, Mar 28, 2016 at 10:58:38AM +0200, Geert Uytterhoeven wrote:
> Hi Jonsoo,
>
> On Mon, Mar 28, 2016 at 7:26 AM, wrote:
> > From: Joonsoo Kim
> >
> > Initial attempt to remove BAD_ALIEN_MAGIC was once reverted by
> > 'commit edcad2509550
2016-03-28 15:07 GMT+09:00 kbuild test robot :
> Hi Joonsoo,
>
> [auto build test ERROR on net/master]
> [also build test ERROR on v4.6-rc1 next-20160327]
> [if your patch is applied to the wrong git tree, please drop us a note to
> help improving the system]
Hello, bot.
Is there any way to stop
2016-03-28 14:59 GMT+09:00 :
> From: Joonsoo Kim
>
> Many developers already know that the field for the reference count of
> the struct page is _count, an atomic type. They would try to handle it
> directly and this could break the purpose of the page reference count
> tracepoint. To pre
On Wed, Mar 23, 2016 at 03:47:46PM +0100, Vlastimil Babka wrote:
> On 03/14/2016 08:31 AM, js1...@gmail.com wrote:
> >From: Joonsoo Kim
> >
> >There is a system whose nodes' pfn ranges overlap, as follows.
> >
> >-pfn>
> >N0 N1 N2
2016-03-23 17:26 GMT+09:00 Vlastimil Babka :
> On 03/23/2016 05:44 AM, Joonsoo Kim wrote:
>>>
>>>
>>> Fixes: 3c605096d315 ("mm/page_alloc: restrict max order of merging on
>>> isolated pageblock")
>>> Link: https://lkml.org/lkml/2016/3
On Tue, Mar 22, 2016 at 11:55:45PM +0900, Minchan Kim wrote:
> On Tue, Mar 22, 2016 at 02:50:37PM +0900, Joonsoo Kim wrote:
> > On Mon, Mar 21, 2016 at 03:31:02PM +0900, Minchan Kim wrote:
> > > We have allowed migration for only LRU pages until now and it was
> > > enou
On Tue, Mar 22, 2016 at 11:06:29PM +0900, Minchan Kim wrote:
> On Tue, Mar 22, 2016 at 05:20:08PM +0900, Joonsoo Kim wrote:
> > 2016-03-22 17:00 GMT+09:00 Minchan Kim :
> > > On Tue, Mar 22, 2016 at 02:08:59PM +0900, Joonsoo Kim wrote:
> > >> On Fri, Mar 18, 2016 at
On Fri, Mar 18, 2016 at 03:10:09PM +0100, Vlastimil Babka wrote:
> On 03/17/2016 04:52 PM, Joonsoo Kim wrote:
> > 2016-03-18 0:43 GMT+09:00 Vlastimil Babka :
> >>>>>>
> >>>>>> Okay. I used following slightly optimized version and I need to
>
On Tue, Mar 22, 2016 at 03:56:46PM +0100, Lucas Stach wrote:
> Am Montag, den 21.03.2016, 13:42 +0900 schrieb Joonsoo Kim:
> > On Fri, Mar 18, 2016 at 02:32:35PM +0100, Lucas Stach wrote:
> > > Hi Vlastimil, Joonsoo,
> > >
> > > Am Freitag, den 18.03.2
2016-03-22 17:00 GMT+09:00 Minchan Kim :
> On Tue, Mar 22, 2016 at 02:08:59PM +0900, Joonsoo Kim wrote:
>> On Fri, Mar 18, 2016 at 04:58:31PM +0900, Minchan Kim wrote:
>> > "remove compressed copy from zram in-memory"
>> > applied swap_slot_free_notify call
On Mon, Mar 21, 2016 at 03:31:02PM +0900, Minchan Kim wrote:
> We have allowed migration for only LRU pages until now and it was
> enough to make high-order pages. But recently, embedded system(e.g.,
> webOS, android) uses lots of non-movable pages(e.g., zram, GPU memory)
> so we have seen several
On Fri, Mar 18, 2016 at 04:58:31PM +0900, Minchan Kim wrote:
> "remove compressed copy from zram in-memory"
> applied swap_slot_free_notify call in *end_swap_bio_read* to
> remove duplicated memory between zram and memory.
>
> However, with introducing rw_page in zram <8c7f01025f7b>
> "zram: impl
On Mon, Mar 21, 2016 at 11:37:19AM +, Mel Gorman wrote:
> On Mon, Mar 14, 2016 at 04:31:32PM +0900, js1...@gmail.com wrote:
> > From: Joonsoo Kim
> >
> > There is a system whose nodes' pfn ranges overlap, as follows.
> >
> > -pfn>
On Fri, Mar 18, 2016 at 02:32:35PM +0100, Lucas Stach wrote:
> Hi Vlastimil, Joonsoo,
>
> Am Freitag, den 18.03.2016, 00:52 +0900 schrieb Joonsoo Kim:
> > 2016-03-18 0:43 GMT+09:00 Vlastimil Babka :
> > > On 03/17/2016 10:24 AM, Hanjun Guo wrote:
> > >>
2016-03-17 18:24 GMT+09:00 Hanjun Guo :
> On 2016/3/17 14:54, Joonsoo Kim wrote:
>> On Wed, Mar 16, 2016 at 05:44:28PM +0800, Hanjun Guo wrote:
>>> On 2016/3/14 15:18, Joonsoo Kim wrote:
>>>> On Mon, Mar 14, 2016 at 08:06:16AM +0100, Vlastimil Babka wrote:
>>>