On Wed, 12 Sep 2018, Michal Hocko wrote:
> > Saying that we really want THP isn't an all-or-nothing decision. We
> > certainly want to try hard to fault hugepages locally especially at task
> > startup when remapping our .text segment to thp, and MADV_HUGEPAGE works
> > very well for that. Re
On Wed 12-09-18 16:21:26, Michal Hocko wrote:
> On Wed 12-09-18 09:54:17, Andrea Arcangeli wrote:
[...]
> > I wasn't particularly happy about your patch because it still swaps
> > with certain defrag settings which is still allowing things that
> > shouldn't happen without some kind of privileged c
On Wed 12-09-18 09:54:17, Andrea Arcangeli wrote:
> Hello,
>
> On Tue, Sep 11, 2018 at 01:56:13PM +0200, Michal Hocko wrote:
> > Well, it seems that expectations differ for users. It seems that kvm
> > users do not really agree with your interpretation.
>
> Like David also mentioned here:
>
> lk
Hello,
On Tue, Sep 11, 2018 at 01:56:13PM +0200, Michal Hocko wrote:
> Well, it seems that expectations differ for users. It seems that kvm
> users do not really agree with your interpretation.
Like David also mentioned here:
lkml.kernel.org/r/alpine.deb.2.21.1808211021110.258...@chino.kir.corp.
On Tue 11-09-18 13:30:20, David Rientjes wrote:
> On Tue, 11 Sep 2018, Michal Hocko wrote:
[...]
> > hugepage specific MPOL flags sounds like yet another step into even more
> > cluttered API and semantic, I am afraid. Why should this be any
> > different from regular page allocations? You are gett
On Tue, 11 Sep 2018, Michal Hocko wrote:
> > That's not entirely true, the remote access latency for remote thp on all
> > of our platforms is greater than local small pages, this is especially
> > true for remote thp that is allocated intersocket and must be accessed
> > through the interconne
On Mon 10-09-18 13:08:34, David Rientjes wrote:
> On Fri, 7 Sep 2018, Michal Hocko wrote:
[...]
> > Fix this by removing __GFP_THISNODE handling from alloc_pages_vma where
> > it doesn't belong and move it to alloc_hugepage_direct_gfpmask where we
> > juggle gfp flags for different allocation modes
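[The direction the quoted changelog describes can be sketched roughly as follows. This is pseudocode, not the actual kernel diff; `vma_madvised()` and `vma_has_mempolicy()` are illustrative helper names, though GFP_TRANSHUGE_LIGHT, __GFP_DIRECT_RECLAIM, and __GFP_THISNODE are real gfp flags:]

```c
/* Sketch: decide __GFP_THISNODE together with the other defrag-mode
 * flags in alloc_hugepage_direct_gfpmask(), rather than bolting it on
 * later inside alloc_pages_vma(). */
static gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
{
    gfp_t gfp = GFP_TRANSHUGE_LIGHT;

    if (vma_madvised(vma))            /* MADV_HUGEPAGE: try harder */
        gfp |= __GFP_DIRECT_RECLAIM;

    /* Only insist on a local-node allocation when no explicit NUMA
     * policy applies to this VMA, so THP faults cannot override a
     * user-requested mempolicy. */
    if (!vma_has_mempolicy(vma))
        gfp |= __GFP_THISNODE;

    return gfp;
}
```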
On 09/08/2018 08:52 PM, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> while using this patch I got another stall - which I never saw under
> kernel 4.4. Here is the trace:
> [305111.932698] INFO: task ksmtuned:1399 blocked for more than 120 seconds.
> [305111.933612] Tainted: G
On 09/10/2018 10:08 PM, David Rientjes wrote:
> When Andrea brought this up, I suggested that the full solution would be a
> MPOL_F_HUGEPAGE flag that could define thp allocation policy -- the added
Can you elaborate on the semantics of this? You mean that a given vma
could now have two mempolic
On 10.09.2018 at 22:08, David Rientjes wrote:
> On Fri, 7 Sep 2018, Michal Hocko wrote:
>
>> From: Michal Hocko
>>
>> Andrea has noticed [1] that a THP allocation might be really disruptive
>> when allocated on NUMA system with the local node full or hard to
>> reclaim. Stefan has posted an allo
On Fri, 7 Sep 2018, Michal Hocko wrote:
> From: Michal Hocko
>
> Andrea has noticed [1] that a THP allocation might be really disruptive
> when allocated on NUMA system with the local node full or hard to
> reclaim. Stefan has posted an allocation stall report on 4.12 based
> SLES kernel which s
[Cc Vlastimil. The full report is
http://lkml.kernel.org/r/f7ed71c1-d599-5257-fd8f-041eb24d9...@profihost.ag]
On Sat 08-09-18 20:52:35, Stefan Priebe - Profihost AG wrote:
> [305146.987742] khugepaged: page allocation stalls for 224236ms, order:9,
> mode:0x4740ca(__GFP_HIGHMEM|__GFP_IO|__GFP_FS|__
Hello,
while using this patch I got another stall - which I never saw under
kernel 4.4. Here is the trace:
[305111.932698] INFO: task ksmtuned:1399 blocked for more than 120 seconds.
[305111.933612] Tainted: G 4.12.0+105-ph #1
[305111.934456] "echo 0 > /proc/sys/kernel/hung_
From: Michal Hocko
Andrea has noticed [1] that a THP allocation might be really disruptive
when allocated on NUMA system with the local node full or hard to
reclaim. Stefan has posted an allocation stall report on 4.12 based
SLES kernel which suggests the same issue:
[245513.362669] kvm: page all