On Thu 06-12-18 15:43:26, David Rientjes wrote:
> On Wed, 5 Dec 2018, Linus Torvalds wrote:
>
> > > Ok, I've applied David's latest patch.
> > >
> > > I'm not at all objecting to tweaking this further, I just didn't want
> > > to have this regression stand.
> >
> > Hmm. Can somebody (David?)
On Fri, Dec 21, 2018 at 02:18:45PM -0800, David Rientjes wrote:
> On Fri, 14 Dec 2018, Vlastimil Babka wrote:
>
> > > It would be interesting to know if anybody has tried using the per-zone
> > > free_area's to determine migration targets and set a bit if it should be
> > > considered a
On Fri, 14 Dec 2018, Vlastimil Babka wrote:
> > It would be interesting to know if anybody has tried using the per-zone
> > free_area's to determine migration targets and set a bit if it should be
> > considered a migration source or a migration target. If all pages for a
> > pageblock are
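As a rough sketch of the idea Vlastimil floats above, assuming a hypothetical
pageblock flag and a hypothetical helper set_pageblock_migrate_target(); this
is not existing kernel code:

    /*
     * Walk a zone's free lists and flag every pageblock that already
     * holds a large free movable page as a preferred migration target,
     * so the free scanner can find targets without linearly rescanning
     * the zone. Caller must hold zone->lock.
     */
    static void mark_free_pageblocks(struct zone *zone)
    {
            unsigned int order;
            struct page *page;

            for (order = pageblock_order; order < MAX_ORDER; order++) {
                    list_for_each_entry(page,
                            &zone->free_area[order].free_list[MIGRATE_MOVABLE],
                            lru)
                            set_pageblock_migrate_target(page); /* hypothetical */
            }
    }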
On Fri, 14 Dec 2018, Mel Gorman wrote:
> > In other words, I think there is a lot of potential stranding that occurs
> > for both scanners that could otherwise result in completely free
> > pageblocks. If there a single movable page present near the end of the
> > zone in an otherwise fully
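For readers new to the thread, the two compaction scanners work roughly like
this; a schematic only, with hypothetical helper names rather than the real
isolate_migratepages()/isolate_freepages() signatures:

    /*
     * The migration scanner advances from the start of the zone
     * isolating used movable pages; the free scanner retreats from the
     * end isolating free pages to migrate into. The zone is
     * COMPACT_COMPLETE when the two meet, which is where the stranding
     * Mel describes can occur.
     */
    unsigned long migrate_pfn = zone->zone_start_pfn;
    unsigned long free_pfn = zone_end_pfn(zone) - 1;

    while (migrate_pfn < free_pfn) {
            migrate_pfn = scan_movable_pages(migrate_pfn);   /* hypothetical */
            free_pfn = scan_free_pages(free_pfn);            /* hypothetical */
            migrate_used_to_free(migrate_pfn, free_pfn);     /* hypothetical */
    }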
On Fri, Dec 14, 2018 at 01:04:11PM -0800, David Rientjes wrote:
> On Wed, 12 Dec 2018, Vlastimil Babka wrote:
>
> > > Regarding the role of direct reclaim in the allocator, I think we need
> > > work on the feedback from compaction to determine whether it's
> > > worthwhile.
> > > That's
On 12/14/18 10:04 PM, David Rientjes wrote:
> On Wed, 12 Dec 2018, Vlastimil Babka wrote:
...
> Reclaim likely could be deterministically useful if we consider a redesign
> of how migration sources and targets are determined in compaction.
>
> Has anybody tried a migration scanner that isn't
On Wed, 12 Dec 2018, Vlastimil Babka wrote:
> > Regarding the role of direct reclaim in the allocator, I think we need
> > work on the feedback from compaction to determine whether it's worthwhile.
> > That's difficult because of the point I continue to bring up:
> > isolate_freepages() is
On Wed 12-12-18 12:00:16, Andrea Arcangeli wrote:
[...]
> Adding MADV_THISNODE/MADV_NODE_RECLAIM will guarantee his proprietary
> software binary will run at maximum performance without cache
> interference, and he's happy to accept the risk of massive slowdown in
> case the local node is truly
On Wed, Dec 12, 2018 at 10:50:51AM +0100, Michal Hocko wrote:
> I can be convinced that larger pages really require a different behavior
> than base pages but you had better show _real_ numbers on a wider
> variety workloads to back your claims. I have only heard hand waving and
I agree with
Hello,
I now found a two-socket EPYC (is this Naples?) to try to confirm the
intra-socket THP effect.
CPU(s):              128
On-line CPU(s) list: 0-127
Thread(s) per core:  2
Core(s) per socket:  32
Socket(s):           2
NUMA node(s):        8
NUMA node0 CPU(s):
(2 sockets x 32 cores x 2 threads = 128 CPUs; 8 NUMA nodes across 2
sockets means 4 nodes per socket, consistent with Naples' four-die
package.)
On 12/12/18 1:37 AM, David Rientjes wrote:
>
> Regarding the role of direct reclaim in the allocator, I think we need
> work on the feedback from compaction to determine whether it's worthwhile.
> That's difficult because of the point I continue to bring up:
> isolate_freepages() is not
On Tue 11-12-18 16:37:22, David Rientjes wrote:
[...]
> Since it depends on the workload, specifically workloads that fit within a
> single node, I think the reasonable approach would be to have a sane
> default regardless of the use of MADV_HUGEPAGE or thp defrag settings and
> then optimize
On Sun, 9 Dec 2018, Andrea Arcangeli wrote:
> You didn't release the proprietary software that depends on
> __GFP_THISNODE behavior and that you're afraid is getting a
> regression.
>
> Could you at least release with an open source license the benchmark
> software that you must have used to do
Hello,
On Sun, Dec 09, 2018 at 04:29:13PM -0800, David Rientjes wrote:
> [..] on this platform, at least, hugepages are
> preferred on the same socket but there isn't a significant benefit from
> getting a cross-socket hugepage over a small page. [..]
You didn't release the proprietary software
On Thu, 6 Dec 2018, Linus Torvalds wrote:
> > On Broadwell, the access latency to local small pages was +5.6%, remote
> > hugepages +16.4%, and remote small pages +19.9%.
> >
> > On Naples, the access latency to local small pages was +4.9%, intrasocket
> > hugepages +10.5%, intrasocket small
On Fri, 7 Dec 2018, Vlastimil Babka wrote:
> >> But *that* in turn makes for other possible questions:
> >>
> >> - if the reason we couldn't get a local hugepage is that we're simply
> >> out of local memory (huge *or* small), then maybe a remote hugepage is
> >> better.
> >>
> >> Note that
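One way to read Linus' suggestion as code, as a hypothetical policy sketch;
node_has_free_memory() is an assumed helper, not a kernel API:

    struct page *thp_fault_policy(int nid, gfp_t gfp)
    {
            struct page *page;

            /* 1. Try a local hugepage, compaction only, no reclaim. */
            page = alloc_pages_node(nid, gfp | __GFP_THISNODE | __GFP_NORETRY,
                                    HPAGE_PMD_ORDER);
            if (page)
                    return page;

            /* 2. Local memory still available: prefer a local base page. */
            if (node_has_free_memory(nid))          /* hypothetical */
                    return alloc_pages_node(nid, GFP_HIGHUSER_MOVABLE, 0);

            /* 3. Local node truly full: a remote hugepage may now be
             *    better than a remote base page. */
            return alloc_pages(gfp, HPAGE_PMD_ORDER);
    }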
On 12/7/18 8:49 AM, Michal Hocko wrote:
>> But *that* in turn makes for other possible questions:
>>
>> - if the reason we couldn't get a local hugepage is that we're simply
>> out of local memory (huge *or* small), then maybe a remote hugepage is
>> better.
>>
>> Note that this now implies
On Thu 06-12-18 20:31:46, Linus Torvalds wrote:
> [ Oops. different thread for me due to edited subject, so I saw this
> after replying to the earlier email by David ]
Sorry about that but I really wanted to make the actual discussion about
the semantics clearly distinguished because the thread just
On Thu 06-12-18 15:49:04, David Rientjes wrote:
> On Thu, 6 Dec 2018, Michal Hocko wrote:
>
> > MADV_HUGEPAGE changes the picture because the caller expressed a need
> > for THP and is willing to go the extra mile to get it. That involves
> > allocation latency and as of now also a potential remote
[ Oops. different thread for me due to edited subject, so I saw this
after replying to the earlier email by David ]
On Thu, Dec 6, 2018 at 1:14 AM Michal Hocko wrote:
>
> MADV_HUGEPAGE changes the picture because the caller expressed a need
> for THP and is willing to go the extra mile to get it.
On Thu, Dec 6, 2018 at 3:43 PM David Rientjes wrote:
>
> On Broadwell, the access latency to local small pages was +5.6%, remote
> hugepages +16.4%, and remote small pages +19.9%.
>
> On Naples, the access latency to local small pages was +4.9%, intrasocket
> hugepages +10.5%, intrasocket small
On Thu, 6 Dec 2018, Michal Hocko wrote:
> MADV_HUGEPAGE changes the picture because the caller expressed a need
> for THP and is willing to go the extra mile to get it. That involves
> allocation latency and as of now also a potential remote access. We do
> not have complete agreement on the latter
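For reference, the caller-side opt-in under discussion is plain madvise(2);
a minimal userspace example, error handling elided:

    #include <sys/mman.h>
    #include <stddef.h>

    int main(void)
    {
            size_t len = 1UL << 30; /* 1 GiB */
            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            /* Express willingness to pay extra allocation latency to
             * get THP for this range. */
            madvise(p, len, MADV_HUGEPAGE);

            /* First touch triggers the (hopefully huge) page faults. */
            for (size_t i = 0; i < len; i += 4096)
                    p[i] = 1;
            return 0;
    }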
On Wed, 5 Dec 2018, Linus Torvalds wrote:
> > Ok, I've applied David's latest patch.
> >
> > I'm not at all objecting to tweaking this further, I just didn't want
> > to have this regression stand.
>
> Hmm. Can somebody (David?) also perhaps try to state what the
> different latency impacts end
On 12/6/18 1:54 AM, Andrea Arcangeli wrote:
> On Wed, Dec 05, 2018 at 04:18:14PM -0800, David Rientjes wrote:
>> On Wed, 5 Dec 2018, Andrea Arcangeli wrote:
>>
>> Note that in addition to COMPACT_SKIPPED that you mention, compaction can
>> fail with COMPACT_COMPLETE, meaning the full scan has
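The distinction matters because the allocator keys its retry decisions off
enum compact_result; schematically (simplified, not the exact
__alloc_pages_slowpath() logic):

    switch (compact_result) {
    case COMPACT_SKIPPED:
            /* Too few free base pages to even run the scanners:
             * only reclaim can make compaction possible. */
            break;
    case COMPACT_COMPLETE:
            /* Both scanners met: the zone was fully scanned and
             * no huge page could be assembled. */
            break;
    default:
            break;
    }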
On Wed 05-12-18 16:58:02, Linus Torvalds wrote:
[...]
> I realize that we probably do want to just have explicit policies that
> do not exist right now, but what are (a) sane defaults, and (b) sane
> policies?
I would focus on the current default first (which is defrag=madvise).
This means that
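The default Michal refers to is visible in sysfs; a small reader as a sketch
(the kernel brackets the active mode, e.g. "always defer defer+madvise
[madvise] never"):

    #include <stdio.h>

    int main(void)
    {
            char buf[128];
            FILE *f = fopen("/sys/kernel/mm/transparent_hugepage/defrag", "r");

            if (f && fgets(buf, sizeof(buf), f))
                    printf("defrag policy: %s", buf);
            if (f)
                    fclose(f);
            return 0;
    }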
On Wed, Dec 5, 2018 at 3:51 PM Linus Torvalds wrote:
>
> Ok, I've applied David's latest patch.
>
> I'm not at all objecting to tweaking this further, I just didn't want
> to have this regression stand.
Hmm. Can somebody (David?) also perhaps try to state what the
different latency impacts end
On Wed, Dec 05, 2018 at 04:18:14PM -0800, David Rientjes wrote:
> On Wed, 5 Dec 2018, Andrea Arcangeli wrote:
>
> > __GFP_COMPACT_ONLY gave hope it could give some middle ground but
> > it shows awful compaction results, it basically destroys compaction
> > effectiveness and we know why
On Wed, 5 Dec 2018, Andrea Arcangeli wrote:
> __GFP_COMPACT_ONLY gave hope it could give some middle ground but
> it shows awful compaction results, it basically destroys compaction
> effectiveness and we know why (COMPACT_SKIPPED must call reclaim or
> compaction can't succeed because there's
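A schematic of the dependency Andrea describes, with assumed helper names
(run_compaction(), reclaim_pages_for() are hypothetical); COMPACT_SKIPPED
means compaction lacked the free base pages it needs as migration targets:

    for (;;) {
            enum compact_result result = run_compaction(zone, order); /* hypothetical */

            if (result != COMPACT_SKIPPED)
                    break;  /* compaction actually ran (or completed) */
            if (!reclaim_pages_for(zone, order))    /* hypothetical */
                    break;  /* reclaim made no progress: give up */
            /* else: retry compaction now that free pages exist */
    }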
Hello,
On Wed, Dec 05, 2018 at 01:59:32PM -0800, David Rientjes wrote:
> [..] and the kernel test robot has reported, [..]
Just for completeness you may have missed one email:
https://lkml.kernel.org/r/87tvk1yjkp@yhuang-dev.intel.com
'So I think the report should have been a "performance
On Wed, Dec 5, 2018 at 3:36 PM Andrea Arcangeli wrote:
>
> Like said earlier still better to apply __GFP_COMPACT_ONLY or David's
> patch than to return to v4.18 though.
Ok, I've applied David's latest patch.
I'm not at all objecting to tweaking this further, I just didn't want
to have this
On Wed, Dec 05, 2018 at 02:03:10PM -0800, Linus Torvalds wrote:
> On Wed, Dec 5, 2018 at 12:40 PM Andrea Arcangeli wrote:
> >
> > So ultimately we decided that the saner behavior that gives the least
> > risk of regression for the short term, until we can do something
> > better, was the one that
On Wed, 5 Dec 2018, Linus Torvalds wrote:
> > So ultimately we decided that the saner behavior that gives the least
> > risk of regression for the short term, until we can do something
> > better, was the one that is already applied upstream.
>
> You're ignoring the fact that people *did* report
On Wed, Dec 5, 2018 at 12:40 PM Andrea Arcangeli wrote:
>
> So ultimately we decided that the saner behavior that gives the least
> risk of regression for the short term, until we can do something
> better, was the one that is already applied upstream.
You're ignoring the fact that people *did*
On Wed, 5 Dec 2018, Andrea Arcangeli wrote:
> > thpscale Percentage Faults Huge
> >                                4.20.0-rc4             4.20.0-rc4
> >                            mmots-20181130       gfpthisnode-v1r1
> > Percentage huge-3        95.14 (   0.00%)        7.94 ( -91.65%)
> >
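The parenthesized figure is the relative change: (7.94 - 95.14) / 95.14 =
-91.65%, i.e. the fraction of faults served by huge pages collapsed to under
a tenth of the baseline.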
Hello,
Sorry, it has been challenging to keep up with all the fast replies, so
I'll start by answering the critical result below:
On Tue, Dec 04, 2018 at 10:45:58AM +0000, Mel Gorman wrote:
> thpscale Percentage Faults Huge
>                                4.20.0-rc4             4.20.0-rc4
>
On Wed, 5 Dec 2018, Michal Hocko wrote:
> > It isn't specific to MADV_HUGEPAGE, it is the policy for all transparent
> > hugepage allocations, including defrag=always. We agree that
> > MADV_HUGEPAGE is not exactly defined: does it mean try harder to allocate
> > a hugepage locally, try
On Wed 05-12-18 10:43:43, Mel Gorman wrote:
> On Wed, Dec 05, 2018 at 10:08:56AM +0100, Michal Hocko wrote:
> > On Tue 04-12-18 16:47:23, David Rientjes wrote:
> > > On Tue, 4 Dec 2018, Mel Gorman wrote:
> > >
> > > > What should also be kept in mind is that we should avoid conflating
> > > >
On Wed, Dec 05, 2018 at 10:08:56AM +0100, Michal Hocko wrote:
> On Tue 04-12-18 16:47:23, David Rientjes wrote:
> > On Tue, 4 Dec 2018, Mel Gorman wrote:
> >
> > > What should also be kept in mind is that we should avoid conflating
> > > locality preferences with THP preferences which is separate
On Tue 04-12-18 16:07:27, David Rientjes wrote:
> On Tue, 4 Dec 2018, Michal Hocko wrote:
>
> > The thing I am really up to here is that reintroduction of
> > __GFP_THISNODE, which you are pushing for, will conflate madvise mode
> > resp. defrag=always with a numa placement policy because the
On Tue, Dec 04, 2018 at 10:45:58AM +0000, Mel Gorman wrote:
> I have *one* result of the series on a 1-socket machine running
> "thpscale". It creates a file, punches holes in it to create a
> very light form of fragmentation and then tries THP allocations
> using madvise measuring latency and
On Tue 04-12-18 16:47:23, David Rientjes wrote:
> On Tue, 4 Dec 2018, Mel Gorman wrote:
>
> > What should also be kept in mind is that we should avoid conflating
> > locality preferences with THP preferences which is separate from THP
> > allocation latencies. The whole __GFP_THISNODE approach is
On Tue, 4 Dec 2018, Mel Gorman wrote:
> What should also be kept in mind is that we should avoid conflating
> locality preferences with THP preferences which is separate from THP
> allocation latencies. The whole __GFP_THISNODE approach is pushing too
> hard on locality versus huge pages when
On Tue, 4 Dec 2018, Michal Hocko wrote:
> The thing I am really up to here is that reintroduction of
> __GFP_THISNODE, which you are pushing for, will conflate madvise mode
> resp. defrag=always with a numa placement policy because the allocation
> doesn't fallback to a remote node.
>
It isn't
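Schematically, the conflation Michal objects to looks like this (a simplified
sketch, not the exact alloc_hugepage_direct_gfpmask() logic):

    /* With __GFP_THISNODE the hugepage allocation is confined to one
     * node and fails rather than falling back remotely: */
    page = alloc_pages_node(nid, GFP_TRANSHUGE | __GFP_THISNODE,
                            HPAGE_PMD_ORDER);

    /* Without it, the allocator may satisfy the hugepage from a remote
     * node, which is the NUMA-placement side effect being debated: */
    page = alloc_pages_node(nid, GFP_TRANSHUGE, HPAGE_PMD_ORDER);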
Much of this thread is a rehash of previous discussions, so I glossed
over parts of it and there will be a degree of error. Very
preliminary results from David's approach are below and the bottom line
is that it might fix some latency issues and locality issues at the cost
of a high
On 12/3/18 11:27 PM, Linus Torvalds wrote:
> On Mon, Dec 3, 2018 at 2:04 PM Linus Torvalds wrote:
>>
>> so I think all of David's patch is somewhat sensible, even if that
>> specific "order == pageblock_order" test really looks like it might
>> want to be clarified.
>
> Side note: I think
On Mon 03-12-18 13:53:21, David Rientjes wrote:
> On Mon, 3 Dec 2018, Michal Hocko wrote:
>
> > > I think extending functionality so thp can be allocated remotely if truly
> > > desired is worthwhile
> >
> > This is a complete NUMA policy antipattern that we have for all other
> > user memory
On Mon, 3 Dec 2018, Linus Torvalds wrote:
> Side note: I think maybe people should just look at that whole
> compaction logic for that block, because it doesn't make much sense to
> me:
>
> /*
> * Checks for costly allocations with __GFP_NORETRY, which
>
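The block Linus is quoting, roughly as it read in the v4.20-rc page
allocator (reproduced from memory, so details may differ slightly from the
exact revision):

    /*
     * Checks for costly allocations with __GFP_NORETRY, which
     * includes THP page fault allocations
     */
    if (costly_order && (gfp_mask & __GFP_NORETRY)) {
            /*
             * If compaction is deferred for high-order allocations,
             * it is because sync compaction recently failed. If this
             * is the case and the caller requested a THP allocation,
             * we do not want to heavily disrupt the system, so we
             * fail the allocation instead of entering direct reclaim.
             */
            if (compact_result == COMPACT_DEFERRED)
                    goto nopage;

            /*
             * Looks like reclaim/compaction is worth trying, but
             * sync compaction could be very expensive, so keep
             * using async compaction.
             */
            compact_priority = INIT_COMPACT_PRIORITY;
    }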
On Mon, Dec 3, 2018 at 2:04 PM Linus Torvalds wrote:
>
> so I think all of David's patch is somewhat sensible, even if that
> specific "order == pageblock_order" test really looks like it might
> want to be clarified.
Side note: I think maybe people should just look at that whole
compaction
On Mon, Dec 3, 2018 at 12:12 PM Andrea Arcangeli wrote:
>
> On Mon, Dec 03, 2018 at 11:28:07AM -0800, Linus Torvalds wrote:
> >
> > One is the patch posted by Andrea earlier in this thread, which seems
> > to target just this known regression.
>
> For the short term the important thing is to fix
On Mon, 3 Dec 2018, Michal Hocko wrote:
> > I think extending functionality so thp can be allocated remotely if truly
> > desired is worthwhile
>
> This is a complete NUMA policy antipattern that we have for all other
> user memory allocations. So far you have to be explicit for your numa
>
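The explicit NUMA interface Michal means is the mempolicy API; a minimal
example pinning a range to node 0 (link with -lnuma; error handling elided):

    #include <numaif.h>
    #include <stddef.h>

    void bind_to_node0(void *addr, size_t len)
    {
            unsigned long nodemask = 1UL << 0;      /* node 0 only */

            /* Hard-bind: the kernel will not silently place these
             * pages on another node. */
            mbind(addr, len, MPOL_BIND, &nodemask,
                  sizeof(nodemask) * 8, 0);
    }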
On Mon 03-12-18 12:39:34, David Rientjes wrote:
> On Mon, 3 Dec 2018, Michal Hocko wrote:
>
> > I have merely said that a better THP locality needs more work and during
> > the review discussion I have even volunteered to work on that. There
> > are other reclaim related fixes under work right
On Mon, 3 Dec 2018, Michal Hocko wrote:
> I have merely said that a better THP locality needs more work and during
> the review discussion I have even volunteered to work on that. There
> are other reclaim related fixes under work right now. All I am saying
> is that MADV_TRANSHUGE having numa
On Mon, 3 Dec 2018, Andrea Arcangeli wrote:
> In my earlier review of David's patch, it looked runtime equivalent to
> the __GFP_COMPACT_ONLY solution. It has the only advantage of not adding a
> new gfpflag until we're sure we need it but it's the worst solution
> available for the long term in my
On Mon, 3 Dec 2018, Andrea Arcangeli wrote:
> It's trivial to reproduce the badness by running a memhog process that
> allocates more than the RAM of 1 NUMA node, under the defrag=always
> setting (or by changing memhog to use MADV_HUGEPAGE), and it'll create
> swap storms even though 75% of the RAM is
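A minimal reproducer along those lines, assuming SIZE is tuned above one
node's RAM on a two-node box (error handling elided):

    #include <sys/mman.h>
    #include <string.h>

    #define SIZE    (64UL << 30)    /* choose > RAM of one NUMA node */

    int main(void)
    {
            char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            /* Stand-in for memhog with MADV_HUGEPAGE (or for running
             * plain memhog under defrag=always). */
            madvise(p, SIZE, MADV_HUGEPAGE);

            /* Touching everything forces THP faults; with the old
             * __GFP_THISNODE behavior this swapped out the local node
             * even while the other node sat largely free. */
            memset(p, 1, SIZE);
            return 0;
    }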
On Mon, Dec 03, 2018 at 11:28:07AM -0800, Linus Torvalds wrote:
> On Mon, Dec 3, 2018 at 10:59 AM Michal Hocko wrote:
> >
> > You are misinterpreting my words. I haven't dismissed anything. I do
> > recognize both usecases under discussion.
> >
> > I have merely said that a better THP locality
On Mon, Dec 3, 2018 at 10:59 AM Michal Hocko wrote:
>
> You are misinterpreting my words. I haven't dismissed anything. I do
> recognize both usecases under discussion.
>
> I have merely said that a better THP locality needs more work and during
> the review discussion I have even volunteered to
On Mon, Dec 03, 2018 at 07:59:54PM +0100, Michal Hocko wrote:
> I have merely said that a better THP locality needs more work and during
> the review discussion I have even volunteered to work on that. There
> are other reclaim related fixes under work right now. All I am saying
> is that
On Mon 03-12-18 10:45:35, Linus Torvalds wrote:
> On Mon, Dec 3, 2018 at 10:30 AM Michal Hocko wrote:
> >
> > I do not get it. 5265047ac301 which this patch effectively reverts has
> > regressed kvm workloads. People started to notice only later because
> > they were not running on kernels with