On 6/29/18 4:39 AM, Michal Hocko wrote:
On Thu 28-06-18 17:59:25, Yang Shi wrote:
On 6/28/18 12:10 PM, Yang Shi wrote:
On 6/28/18 4:51 AM, Michal Hocko wrote:
On Wed 27-06-18 10:23:39, Yang Shi wrote:
On 6/27/18 12:24 AM, Michal Hocko wrote:
On Tue 26-06-18 18:03:34, Yang Shi wrote:
On 6/29/18 4:34 AM, Michal Hocko wrote:
On Thu 28-06-18 12:10:10, Yang Shi wrote:
On 6/28/18 4:51 AM, Michal Hocko wrote:
On Wed 27-06-18 10:23:39, Yang Shi wrote:
On 6/27/18 12:24 AM, Michal Hocko wrote:
On Tue 26-06-18 18:03:34, Yang Shi wrote:
On 6/26/18 12:43 AM, Peter Zijlstra
On Thu 28-06-18 17:59:25, Yang Shi wrote:
>
>
> On 6/28/18 12:10 PM, Yang Shi wrote:
> >
> >
> > On 6/28/18 4:51 AM, Michal Hocko wrote:
> > > On Wed 27-06-18 10:23:39, Yang Shi wrote:
> > > >
> > > > On 6/27/18 12:24 AM, Michal Hocko wrote:
> > > > > On Tue 26-06-18 18:03:34, Yang Shi wrote:
On Thu 28-06-18 12:10:10, Yang Shi wrote:
>
>
> On 6/28/18 4:51 AM, Michal Hocko wrote:
> > On Wed 27-06-18 10:23:39, Yang Shi wrote:
> > >
> > > On 6/27/18 12:24 AM, Michal Hocko wrote:
> > > > On Tue 26-06-18 18:03:34, Yang Shi wrote:
> > > > > On 6/26/18 12:43 AM, Peter Zijlstra wrote:
> > >
On 6/28/18 12:10 PM, Yang Shi wrote:
On 6/28/18 4:51 AM, Michal Hocko wrote:
On Wed 27-06-18 10:23:39, Yang Shi wrote:
On 6/27/18 12:24 AM, Michal Hocko wrote:
On Tue 26-06-18 18:03:34, Yang Shi wrote:
On 6/26/18 12:43 AM, Peter Zijlstra wrote:
On Mon, Jun 25, 2018 at 05:06:23PM
On 6/28/18 4:51 AM, Michal Hocko wrote:
On Wed 27-06-18 10:23:39, Yang Shi wrote:
On 6/27/18 12:24 AM, Michal Hocko wrote:
On Tue 26-06-18 18:03:34, Yang Shi wrote:
On 6/26/18 12:43 AM, Peter Zijlstra wrote:
On Mon, Jun 25, 2018 at 05:06:23PM -0700, Yang Shi wrote:
Looking at this
On Wed 27-06-18 10:23:39, Yang Shi wrote:
>
>
> On 6/27/18 12:24 AM, Michal Hocko wrote:
> > On Tue 26-06-18 18:03:34, Yang Shi wrote:
> > >
> > > On 6/26/18 12:43 AM, Peter Zijlstra wrote:
> > > > On Mon, Jun 25, 2018 at 05:06:23PM -0700, Yang Shi wrote:
> > > > > Looking at this more deeply, we
On 6/27/18 12:24 AM, Michal Hocko wrote:
On Tue 26-06-18 18:03:34, Yang Shi wrote:
On 6/26/18 12:43 AM, Peter Zijlstra wrote:
On Mon, Jun 25, 2018 at 05:06:23PM -0700, Yang Shi wrote:
Looking at this more deeply, we may not be able to cover all of the unmapping
range for VM_DEAD, for example, if
On Tue 26-06-18 18:03:34, Yang Shi wrote:
>
>
> On 6/26/18 12:43 AM, Peter Zijlstra wrote:
> > On Mon, Jun 25, 2018 at 05:06:23PM -0700, Yang Shi wrote:
> > > Looking at this more deeply, we may not be able to cover all of the
> > > unmapping range for VM_DEAD, for example, if the start addr is
On 6/26/18 12:43 AM, Peter Zijlstra wrote:
On Mon, Jun 25, 2018 at 05:06:23PM -0700, Yang Shi wrote:
Looking at this more deeply, we may not be able to cover all of the unmapping
range for VM_DEAD, for example, if the start addr is in the middle of a vma. We
can't set VM_DEAD on that vma since that
On Mon, Jun 25, 2018 at 05:06:23PM -0700, Yang Shi wrote:
> Looking at this more deeply, we may not be able to cover all of the unmapping
> range for VM_DEAD, for example, if the start addr is in the middle of a vma. We
> can't set VM_DEAD on that vma since that would trigger SIGSEGV for still
> mapped
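
The partial-vma case being described above is easiest to see from userspace: when the unmap start address falls in the middle of a vma, the head of that vma stays mapped and has to keep working, so a whole-vma VM_DEAD mark is only possible after a split. A minimal sketch of that layout (the sizes and the 0xaa fill pattern are made up for the demo, not taken from the patch):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4 * 4096;
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (p == MAP_FAILED)
        return 1;
    memset(p, 0xaa, len);

    /* Unmap only the second half: the start address sits in the middle
     * of the vma, so the kernel has to split it rather than mark the
     * whole thing dead. */
    if (munmap(p + len / 2, len / 2))
        return 1;

    /* The head of the original vma is still mapped and must not fault;
     * a whole-vma "dead" flag would break this perfectly legal access. */
    printf("first half still readable: 0x%x\n", p[0] & 0xff);
    return 0;
}
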
On 6/20/18 12:18 AM, Michal Hocko wrote:
On Tue 19-06-18 17:31:27, Nadav Amit wrote:
at 4:08 PM, Yang Shi wrote:
On 6/19/18 3:17 PM, Nadav Amit wrote:
at 4:34 PM, Yang Shi wrote:
When running some mmap/munmap scalability tests with large memory
(i.e. > 300GB), the below hung task
On Fri 22-06-18 18:01:08, Yang Shi wrote:
> > > Yes, this is true but I guess what Yang Shi meant was that a userspace
> > > access racing with munmap is not well defined. You never know whether
> > > you get your data, #PF or SEGV because it depends on timing. The user
> > > visible change might be
Yes, this is true but I guess what Yang Shi meant was that a userspace
access racing with munmap is not well defined. You never know whether
you get your data, #PF or SEGV because it depends on timing. The user
visible change might be that you lose content and get zero page instead
if you hit
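
A hedged userspace sketch of that timing dependence: one thread munmap()s a region while the main thread keeps reading it, so depending on who wins the reads either complete normally or take SIGSEGV. The zero-page outcome mentioned above would come from the proposed zap-under-read-mmap_sem behaviour, which a plain userspace demo cannot show. Build with -pthread; all sizes here are arbitrary.

#include <pthread.h>
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static char *map;
static const size_t len = 1 << 20;
static sigjmp_buf env;

static void on_segv(int sig)
{
    (void)sig;
    siglongjmp(env, 1);
}

static void *unmapper(void *arg)
{
    (void)arg;
    usleep(1000);           /* let the reader get going first */
    munmap(map, len);       /* races with the reads in main() */
    return NULL;
}

int main(void)
{
    pthread_t t;
    unsigned long sum = 0;

    signal(SIGSEGV, on_segv);

    map = mmap(NULL, len, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (map == MAP_FAILED)
        return 1;
    memset(map, 0x5a, len);

    pthread_create(&t, NULL, unmapper, NULL);

    if (sigsetjmp(env, 1) == 0) {
        /* Keep reading while the unmap happens underneath us. */
        for (size_t i = 0; i < (1UL << 26); i++)
            sum += map[i % len];
        printf("reads won the race, sum=%lu\n", sum);
    } else {
        printf("read hit the unmapped range: SIGSEGV\n");
    }

    pthread_join(t, NULL);
    return 0;
}
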
at 12:18 AM, Michal Hocko wrote:
> On Tue 19-06-18 17:31:27, Nadav Amit wrote:
>> at 4:08 PM, Yang Shi wrote:
>>
>>> On 6/19/18 3:17 PM, Nadav Amit wrote:
at 4:34 PM, Yang Shi wrote:
> When running some mmap/munmap scalability tests with large memory (i.e.
>
On 6/20/18 12:17 AM, Michal Hocko wrote:
On Tue 19-06-18 14:13:05, Yang Shi wrote:
On 6/19/18 3:02 AM, Peter Zijlstra wrote:
[...]
Hold up, two things: you having to copy most of do_munmap() didn't seem
to suggest a helper function? And second, since when are we allowed to
Yes, they will
On Tue 19-06-18 17:31:27, Nadav Amit wrote:
> at 4:08 PM, Yang Shi wrote:
>
> >
> >
> > On 6/19/18 3:17 PM, Nadav Amit wrote:
> >> at 4:34 PM, Yang Shi wrote:
> >>
> >>
> >>> When running some mmap/munmap scalability tests with large memory
> >>> (i.e. > 300GB), the below hung
On Tue 19-06-18 14:13:05, Yang Shi wrote:
>
>
> On 6/19/18 3:02 AM, Peter Zijlstra wrote:
[...]
> > Hold up, two things: you having to copy most of do_munmap() didn't seem
> > to suggest a helper function? And second, since when are we allowed to
>
> Yes, they will be extracted into a helper
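
The factoring being agreed on here is the usual shape: the existing do_munmap() path and the new zap-with-read-mmap_sem path both call one shared helper, and only the new caller asks for the lock downgrade. A rough, userspace-modelled sketch of that shape only; every name below is hypothetical (hence the fake_ prefix) and none of it is the kernel implementation:

#include <stdbool.h>
#include <stdio.h>

struct fake_mm {
    int map_count;
};

/* Shared helper: the part both unmap paths would otherwise duplicate. */
static int fake_detach_and_zap(struct fake_mm *mm, unsigned long start,
                               unsigned long len, bool downgrade)
{
    printf("split/detach vmas covering [%#lx, %#lx)\n", start, start + len);
    if (downgrade)
        printf("downgrade mmap_sem to read before zapping pages\n");
    printf("zap page tables and free the detached vmas\n");
    mm->map_count--;
    return 0;
}

/* Existing behaviour: everything under the write lock. */
static int fake_do_munmap(struct fake_mm *mm, unsigned long start,
                          unsigned long len)
{
    return fake_detach_and_zap(mm, start, len, false);
}

/* New path discussed in the thread: zap pages with only the read lock. */
static int fake_do_munmap_zap_rlock(struct fake_mm *mm, unsigned long start,
                                    unsigned long len)
{
    return fake_detach_and_zap(mm, start, len, true);
}

int main(void)
{
    struct fake_mm mm = { .map_count = 2 };

    fake_do_munmap(&mm, 0x1000, 0x1000);
    fake_do_munmap_zap_rlock(&mm, 0x100000000UL, 0x4b00000000UL);
    return 0;
}
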
at 4:08 PM, Yang Shi wrote:
>
>
> On 6/19/18 3:17 PM, Nadav Amit wrote:
>> at 4:34 PM, Yang Shi wrote:
>>
>>
>>> When running some mmap/munmap scalability tests with large memory
>>> (i.e. > 300GB), the below hung task issue may happen occasionally.
>>> INFO: task ps:14018
at 4:34 PM, Yang Shi wrote:
> When running some mmap/munmap scalability tests with large memory
> (i.e. > 300GB), the below hung task issue may happen occasionally.
>
> INFO: task ps:14018 blocked for more than 120 seconds.
> Tainted: GE 4.9.79-009.ali3000.alios7.x86_64 #1
>
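
The test pattern behind that report is simply a very large mapping being torn down while something else (ps, top, anything walking /proc/<pid>) needs mmap_sem for read. A scaled-down, hedged sketch of that pattern, using 1GB instead of the >300GB case above so it runs anywhere; the iteration counts are arbitrary and it is not the original test. Build with -pthread.

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_SIZE (1UL << 30)    /* stand-in for the ~300GB mappings */

static void *proc_reader(void *arg)
{
    char buf[4096];
    (void)arg;

    for (int i = 0; i < 1000; i++) {
        int fd = open("/proc/self/maps", O_RDONLY);

        if (fd < 0)
            break;
        /* Walking the maps file takes mmap_sem for read in the kernel,
         * so it stalls while the munmap below holds it for write. */
        while (read(fd, buf, sizeof(buf)) > 0)
            ;
        close(fd);
    }
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, proc_reader, NULL);

    for (int i = 0; i < 10; i++) {
        char *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
            return 1;
        /* Touch a page every 2MB so there is something to tear down. */
        for (size_t off = 0; off < MAP_SIZE; off += 1UL << 21)
            p[off] = 1;
        munmap(p, MAP_SIZE);    /* holds mmap_sem for write while zapping */
    }

    pthread_join(t, NULL);
    return 0;
}
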
On 6/19/18 3:02 AM, Peter Zijlstra wrote:
On Tue, Jun 19, 2018 at 07:34:16AM +0800, Yang Shi wrote:
diff --git a/mm/mmap.c b/mm/mmap.c
index fc41c05..e84f80c 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2686,6 +2686,141 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
On Tue, Jun 19, 2018 at 07:34:16AM +0800, Yang Shi wrote:
> diff --git a/mm/mmap.c b/mm/mmap.c
> index fc41c05..e84f80c 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2686,6 +2686,141 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
> return __split_vma(mm, vma, addr,
When running some mmap/munmap scalability tests with large memory
(i.e. > 300GB), the below hung task issue may happen occasionally.
INFO: task ps:14018 blocked for more than 120 seconds.
Tainted: GE 4.9.79-009.ali3000.alios7.x86_64 #1
"echo 0 >