Hey Mel
FYI, the following report may be related to
Kswapd 100% CPU since 3.8 on Sandybridge
http://marc.info/?l=linux-mm&m=141244232304682&w=2
Hillf
Date: Tue, 28 Oct 2014 09:53:54 +0100
From: Ortwin Glück o...@odi.ch
To: linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 7/8] x86, perf: Only allow rdpmc if a perf_event is
mapped
CPU D                           CPU A
switch_mm
  load_mm_cr4
                                x86_pmu_event_unmapped
I wonder if the X86_CR4_PCE set on CPU D is
cleared by CPU A by broadcasting IPI.
It should
Hey Peter
Date: Mon, 20 Oct 2014 23:56:38 +0200
From: Peter Zijlstra pet...@infradead.org
To: torva...@linux-foundation.org, paul...@linux.vnet.ibm.com,
t...@linutronix.de, a...@linux-foundation.org, r...@redhat.com,
mgor...@suse.de, o...@redhat.com, mi...@redhat.com, minc...@kernel.org,
Hey Kees
From: Kees Cook keesc...@chromium.org
To: linux-kernel@vger.kernel.org
Cc: Kees Cook keesc...@chromium.org, Will Deacon will.dea...@arm.com,
Rabin Vincent ra...@rab.in, Laura Abbott lau...@codeaurora.org, Rob
Herring r...@kernel.org, Leif Lindholm leif.lindh...@linaro.org, Mark
Salter, Liu hua,
Subject: [PATCH v6 8/8] ARM: mm: allow text and rodata sections to be
read-only
Date: Thu, 18 Sep
Hey Andy
>
> Context switches and TLB flushes can change individual bits of CR4.
> CR4 reads take several cycles, so store a shadow copy of CR4 in a
> per-cpu variable.
>
> To avoid wasting a cache line, I added the CR4 shadow to
> cpu_tlbstate, which is already touched during context switches.
>
Hi Jonathan
On Tue, Jul 22, 2014 at 12:56 AM, Jonathan Davies
jonathan.dav...@citrix.com wrote:
On 18/07/14 15:08, Peter Zijlstra wrote:
On Fri, Jul 18, 2014 at 01:59:06PM +0100, Jonathan Davies wrote:
The current implementation of idle_cpu only considers tasks that might be
in the
CPU's
On Sat, Jul 19, 2014 at 9:31 AM, Steven Rostedt wrote:
> On Fri, Jul 18, 2014 at 06:22:15PM -0400, Theodore Ts'o wrote:
>>
>> And then think very hard about which patches people need to see in
>> order to be able to evaluate a patch. For example, if you have patch
>> 1 out of a series which adds
Commit-ID: 5d5e2b1bcbdc996e72815c03fdc5ea82c4642397
Gitweb: http://git.kernel.org/tip/5d5e2b1bcbdc996e72815c03fdc5ea82c4642397
Author: Hillf Danton
AuthorDate: Tue, 10 Jun 2014 10:58:43 +0200
Committer: Ingo Molnar
CommitDate: Wed, 18 Jun 2014 18:29:59 +0200
sched: Fix CACHE_HOT_BUDDY
On Thu, May 15, 2014 at 11:38 AM, Andrew Morton
wrote:
>
> We could easily change the interface so that pages==NULL means "no
> pages" but that isn't the way it works at present.
>
Yeah, thanks /Hillf
Hi Andy,
On Thu, May 15, 2014 at 7:46 AM, Andy Lutomirski l...@amacapital.net wrote:
The oops can be triggered in qemu using -no-hpet (but not nohpet) by
reading a couple of pages past the end of the vdso text. This
should send SIGBUS instead of OOPSing.
The bug was introduced by:
commit
On Thu, May 8, 2014 at 5:05 PM, Julian Andres Klode wrote:
> On Thu, May 08, 2014 at 11:44:12AM +0300, Dan Carpenter wrote:
>> +as raw text including all the headers. Run `cat raw_email.txt | git am`
>
> `cat raw_email.txt | git am` seems a bit pointless. Why not simply
> `git am raw_email.txt`?
hi all
On Wed, Apr 30, 2014 at 11:42 PM, Kirill A. Shutemov
wrote:
> On Tue, Apr 15, 2014 at 10:06:56PM -0400, Sasha Levin wrote:
>> Hi all,
>>
>> I often see hung task triggering in khugepaged within collapse_huge_page().
>>
>> I've initially assumed the case may be that the guests are too
On Tue, Apr 22, 2014 at 1:30 PM, Jianyu Zhan wrote:
> For a cgroup subsystem who should init early, then it should carefully
> take care of the implementation of css_alloc, because it will be called
> before mm_init() setup the world.
>
> Luckily we don't, and we better explicitly assign the
On Tue, Oct 29, 2013 at 6:16 AM, Dave Hansen d...@sr71.net wrote:
> +
> +void copy_high_order_page(struct page *newpage,
> +			  struct page *oldpage,
> +			  int order)
> +{
> +	int i;
> +
> +	might_sleep();
> +	for (i = 0; i < (1 << order); i++) {
> +
Hello Rik
On Sat, Sep 14, 2013 at 11:55 PM, Rik van Riel r...@redhat.com wrote:
On 09/14/2013 07:53 AM, Hillf Danton wrote:
After page A on source node is migrated to page B on target node, hinting
fault is recorded on the target node for B. On the source node there is
another record
Hello Mel
On Tue, Sep 10, 2013 at 5:32 PM, Mel Gorman wrote:
>
> +void task_numa_free(struct task_struct *p)
> +{
> + struct numa_group *grp = p->numa_group;
> + int i;
> +
> + kfree(p->numa_faults);
> +
> + if (grp) {
> + for (i = 0; i < 2*nr_node_ids; i++)
Hello Mel
On Tue, Sep 10, 2013 at 5:32 PM, Mel Gorman wrote:
> Currently automatic NUMA balancing is unable to distinguish between false
> shared versus private pages except by ignoring pages with an elevated
> page_mapcount entirely. This avoids shared pages bouncing between the
> nodes whose
On Tue, Sep 10, 2013 at 5:31 PM, Mel Gorman wrote:
> @@ -5045,15 +5038,50 @@ static int need_active_balance(struct lb_env *env)
>
> static int active_load_balance_cpu_stop(void *data);
>
> +static int should_we_balance(struct lb_env *env)
> +{
> + struct sched_group *sg = env->sd->groups;
On Sun, Sep 8, 2013 at 10:46 PM, Sasha Levin wrote:
> Hi all,
>
> While fuzzing with trinity inside a KVM tools guest, running latest -next
> kernel, I've
> stumbled on the following:
>
> [ 998.281867] BUG: unable to handle kernel NULL pointer dereference at
> 0274
> [ 998.28]
On Fri, Aug 30, 2013 at 8:18 PM, Cong Wang wrote:
> Cc'ing netdev
>
> On Fri, Aug 30, 2013 at 4:20 PM, Baoquan He wrote:
>> Hi,
>>
>> I tried the 3.11.0-rc7+ on x86_64, and after bootup, the soft lockup bug
>> happened.
>>
>> [ 48.895000] BUG: soft lockup - CPU#1 stuck for 22s! [ebtables:444]
On Fri, Aug 23, 2013 at 11:53 AM, Dave Jones wrote:
>
> It actually seems worse, seems I can trigger it even easier now, as if
> there's a leak.
>
Can you please try the new fix for TLB flush?
commit 2b047252d087be7f2ba
Fix TLB gather virtual address range invalidation corner cases
On Thu, Aug 22, 2013 at 4:49 AM, Dave Jones wrote:
>
> didn't hit the bug_on, but got a bunch of
>
> [ 424.077993] swap_free: Unused swap offset entry 000187d5
> [ 439.377194] swap_free: Unused swap offset entry 000187e7
> [ 441.998411] swap_free: Unused swap offset entry 000187ee
> [
On Tue, Aug 20, 2013 at 7:18 AM, Dave Jones wrote:
>
> btw, anyone have thoughts on a patch something like below ?
And another (sorry if the message is reformatted by the mail agent;
I spent an hour trying to get the agent back to the correct format but failed,
and thanks a lot for any howto send
If the allocation order is not high, direct compaction does nothing.
Can we skip compaction here if order drops to zero?
--- a/mm/vmscan.c Thu Aug 15 17:47:26 2013
+++ b/mm/vmscan.c Thu Aug 15 17:48:58 2013
@@ -3034,7 +3034,7 @@ static unsigned long balance_pgdat(pg_da
* Compact if necessary
On Wed, Aug 7, 2013 at 11:30 PM, Dave Jones wrote:
> printk didn't trigger.
>
Is a corrupted page table entry encountered, according to the
comment of swap_duplicate()?
--- a/mm/swapfile.c Wed Aug 7 17:27:22 2013
+++ b/mm/swapfile.c Thu Aug 8 23:12:30 2013
@@ -770,6 +770,7 @@ int
Hello Dave
On Wed, Aug 7, 2013 at 1:51 PM, Dave Jones wrote:
> Seen while fuzzing with lots of child processes.
>
> swap_free: Unused swap offset entry 001263f5
> BUG: Bad page map in process trinity-child29 pte:24c7ea00 pmd:09fec067
> addr:7f9db958d000 vm_flags:00100073
On Fri, Aug 2, 2013 at 12:17 AM, Aneesh Kumar K.V
aneesh.ku...@linux.vnet.ibm.com wrote:
Hillf Danton dhi...@gmail.com writes:
On Wed, Jul 31, 2013 at 2:37 PM, Joonsoo Kim iamjoonsoo@lge.com wrote:
On Wed, Jul 31, 2013 at 02:21:38PM +0800, Hillf Danton wrote:
On Wed, Jul 31, 2013 at 12:41
On Wed, Jul 31, 2013 at 2:37 PM, Joonsoo Kim wrote:
> On Wed, Jul 31, 2013 at 02:21:38PM +0800, Hillf Danton wrote:
>> On Wed, Jul 31, 2013 at 12:41 PM, Joonsoo Kim wrote:
>> > On Wed, Jul 31, 2013 at 10:49:24AM +0800, Hillf Danton wrote:
>> >> On Wed, Jul 31,
On Wed, Jul 31, 2013 at 12:41 PM, Joonsoo Kim wrote:
> On Wed, Jul 31, 2013 at 10:49:24AM +0800, Hillf Danton wrote:
>> On Wed, Jul 31, 2013 at 10:27 AM, Joonsoo Kim wrote:
>> > On Mon, Jul 29, 2013 at 03:24:46PM +0800, Hillf Danton wrote:
>> >> On Mon, Jul 29
On Wed, Jul 31, 2013 at 10:27 AM, Joonsoo Kim wrote:
> On Mon, Jul 29, 2013 at 03:24:46PM +0800, Hillf Danton wrote:
>> On Mon, Jul 29, 2013 at 1:31 PM, Joonsoo Kim wrote:
>> > alloc_huge_page_node() use dequeue_huge_page_node() without
>> > any validation check, so
mapping is VM_NORESERVE, VM_MAYSHARE and chg is 0, this imply
> that current allocated page will go into page cache which is already
> reserved region when mapping is created. In this case, we should decrease
> reserve count. As implementing above, this patch solve the problem.
>
> Reviewed-b
we can remove the function and embed it into
> dequeue_huge_page_vma() directly. This patch implement it.
>
> Reviewed-by: Wanpeng Li
> Reviewed-by: Aneesh Kumar K.V
> Signed-off-by: Joonsoo Kim
>
Acked-by: Hillf Danton
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index ca15854.
above test generates a SIGBUS which is correct,
> because all free pages are reserved and non reserved shared mapping
> can't get a free page.
>
> Reviewed-by: Wanpeng Li
> Reviewed-by: Aneesh Kumar K.V
> Signed-off-by: Joonsoo Kim
>
Acked-by: Hillf Danton
> diff --git a/mm
er condition of optimization. If this page is not
> AnonPage, we don't do optimization. This makes this optimization turning
> off for a page cache.
>
> Acked-by: Michal Hocko
> Reviewed-by: Wanpeng Li
> Reviewed-by: Naoya Horiguchi
> Signed-off-by: Joonsoo Kim
>
Acked-b
On Mon, Jul 29, 2013 at 1:28 PM, Joonsoo Kim wrote:
> If list is empty, list_for_each_entry_safe() doesn't do anything.
> So, this check is redundant. Remove it.
>
> Acked-by: Michal Hocko
> Reviewed-by: Wanpeng Li
> Reviewed-by: Aneesh Kumar K.V
> Signed-off-by: Joonsoo K
Joonsoo Kim
>
Acked-by: Hillf Danton
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 51564a8..31d78c5 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1149,12 +1149,7 @@ static struct page *alloc_huge_page(struct
> vm_area_struct *vma,
>
cko
> Reviewed-by: Wanpeng Li
> Reviewed-by: Aneesh Kumar K.V
> Signed-off-by: Joonsoo Kim
>
Acked-by: Hillf Danton
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index e2bfbf7..fc4988c 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -539,10 +539,6 @@ static
On Mon, Jul 29, 2013 at 1:31 PM, Joonsoo Kim wrote:
> There is a race condition if we map a same file on different processes.
> Region tracking is protected by mmap_sem and hugetlb_instantiation_mutex.
> When we do mmap, we don't grab a hugetlb_instantiation_mutex, but,
> grab a mmap_sem. This
On Mon, Jul 29, 2013 at 1:31 PM, Joonsoo Kim wrote:
> alloc_huge_page_node() use dequeue_huge_page_node() without
> any validation check, so it can steal reserved page unconditionally.
Well, why is it illegal to use reserved page here?
> To fix it, check the number of free_huge_page in
Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
Acked-by: Hillf Danton dhi...@gmail.com
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 87d7637..2e52afea 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1020,11 +1020,8 @@ free:
spin_unlock
...@linux.vnet.ibm.com
Reviewed-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
Acked-by: Hillf Danton dhi...@gmail.com
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1f6b3a6..ca15854 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -464,6 +464,8 @@ void
Acked-by: Hillf Danton dhi...@gmail.com
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4b1b043..b3b8252 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -443,10 +443,23 @@ void reset_vma_resv_huge_pages(struct vm_area_struct
*vma)
}
/* Returns true if the VMA has associated reserve pages
On Fri, Jul 26, 2013 at 10:27 PM, Davidlohr Bueso
wrote:
> From: David Gibson
>
> At present, the page fault path for hugepages is serialized by a
> single mutex. This is used to avoid spurious out-of-memory conditions
> when the hugepage pool is fully utilized (two processes or threads can
>
revert introducing migrate_movable_pages
> - added alloc_huge_page_noerr free from ERR_VALUE
>
> ChangeLog v2:
> - updated description and renamed patch title
>
> Signed-off-by: Naoya Horiguchi
> Acked-by: Andi Kleen
> Reviewed-by: Wanpeng Li
> ---
Acked-by: Hillf Danto
_page
>
> ChangeLog v2:
> - remove unnecessary extern
> - fix page table lock in check_hugetlb_pmd_range
> - updated description and renamed patch title
>
> Signed-off-by: Naoya Horiguchi
> Acked-by: Andi Kleen
> Reviewed-by: Wanpeng Li
> ---
Acked-by: Hillf
ft_offline_huge_page() switches to use migrate_pages(),
> and migrate_huge_page() is not used any more. So let's remove it.
>
> ChangeLog v3:
> - Merged with another cleanup patch (4/10 in previous version)
>
> Signed-off-by: Naoya Horiguchi
> Acked-by: Andi Kleen
> Reviewed-by: Wanpeng Li
code removing VM_HUGETLB from vma_migratable check into a
>separate patch
> - hold hugetlb_lock in putback_active_hugepage
> - update comment near the definition of hugetlb_lock
>
> Signed-off-by: Naoya Horiguchi
> Acked-by: Andi Kleen
> Reviewed-by: Wanpeng Li
> ---
---
Acked-by: Hillf Danton dhi...@gmail.com
include/linux/hugetlb.h | 6 ++
mm/hugetlb.c| 32 +++-
mm/migrate.c| 10 +-
3 files changed, 46 insertions(+), 2 deletions(-)
diff --git v3.11-rc1.orig/include/linux/hugetlb.h
v3.11
---
Acked-by: Hillf Danton dhi...@gmail.com
include/linux/migrate.h | 5 -
mm/memory-failure.c | 15 ---
mm/migrate.c| 28 ++--
3 files changed, 14 insertions(+), 34 deletions(-)
diff --git v3.11-rc1.orig/include/linux/migrate.h
v3.11-rc1
...@linux.vnet.ibm.com
---
Acked-by: Hillf Danton dhi...@gmail.com
include/linux/hugetlb.h | 3 +++
mm/hugetlb.c| 14 ++
mm/mempolicy.c | 4 +++-
3 files changed, 20 insertions(+), 1 deletion(-)
diff --git v3.11-rc1.orig/include/linux/hugetlb.h
v3.11-rc1/include/linux
On Fri, Jul 19, 2013 at 10:39 PM, Naoya Horiguchi
wrote:
> On Fri, Jul 19, 2013 at 01:40:38PM +0800, Hillf Danton wrote:
>> On Fri, Jul 19, 2013 at 5:34 AM, Naoya Horiguchi
>> wrote:
>> > @@ -518,9 +519,11 @@ static struct page *dequeue_huge_page_node(struct
On Fri, Jul 19, 2013 at 5:34 AM, Naoya Horiguchi
wrote:
> @@ -518,9 +519,11 @@ static struct page *dequeue_huge_page_node(struct hstate
> *h, int nid)
> {
> struct page *page;
>
> - if (list_empty(&h->hugepage_freelists[nid]))
> + list_for_each_entry(page,
On Fri, Jul 19, 2013 at 5:34 AM, Naoya Horiguchi
wrote:
> This patch enables hugepage migration from migrate_pages(2),
> move_pages(2), and mbind(2).
>
> Signed-off-by: Naoya Horiguchi
> ---
Acked-by: Hillf Danton
> include/linux/mempolicy.h | 2 +-
> 1 file chang
On Fri, Jul 19, 2013 at 11:18 AM, Naoya Horiguchi
wrote:
>> > +bool isolate_huge_page(struct page *page, struct list_head *l)
>>
>> Can we replace the page parameter with p?
>
> Yes. Maybe it's strange to use the full name "page" for one parameter
> and an extremely shortened one "l" for another
On Fri, Jul 19, 2013 at 5:34 AM, Naoya Horiguchi
wrote:
> This patch extends move_pages() to handle vma with VM_HUGETLB set.
> We will be able to migrate hugepage with move_pages(2) after
> applying the enablement patch which comes later in this series.
>
> We avoid getting refcount on tail pages
On Fri, Jul 19, 2013 at 5:34 AM, Naoya Horiguchi
wrote:
> This patch extends check_range() to handle vma with VM_HUGETLB set.
> We will be able to migrate hugepage with migrate_pages(2) after
> applying the enablement patch which comes later in this series.
>
> Note that for larger hugepages