On Tue 28-11-17 21:14:23, John Hubbard wrote:
> On 11/28/2017 12:12 AM, Michal Hocko wrote:
> > On Mon 27-11-17 15:26:27, John Hubbard wrote:
> > [...]
> >> Let me add a belated report, then: we ran into this limit while
> >> implementing
> >> an early version of Unified Memory[1], back in 2013.
On 11/28/2017 12:12 AM, Michal Hocko wrote:
> On Mon 27-11-17 15:26:27, John Hubbard wrote:
> [...]
>> Let me add a belated report, then: we ran into this limit while implementing
>> an early version of Unified Memory[1], back in 2013. The implementation
>> at the time depended on tracking that
On Mon 27-11-17 15:26:27, John Hubbard wrote:
[...]
> Let me add a belated report, then: we ran into this limit while implementing
> an early version of Unified Memory[1], back in 2013. The implementation
> at the time depended on tracking that assumed "one allocation == one vma".
And you tried
On 11/27/2017 11:52 AM, Michal Hocko wrote:
> On Mon 27-11-17 20:18:00, Mikael Pettersson wrote:
>> On Mon, Nov 27, 2017 at 11:12 AM, Michal Hocko wrote:
I've kept the kernel tunable to not break the API towards user-space,
but it's a no-op now. Also the distinction between split_vma()
On Mon 27-11-17 12:21:21, Andi Kleen wrote:
> On Mon, Nov 27, 2017 at 08:57:32PM +0100, Michal Hocko wrote:
> > On Mon 27-11-17 19:32:18, Michal Hocko wrote:
> > > On Mon 27-11-17 09:25:16, Andi Kleen wrote:
> > [...]
> > > > The reason the limit was there originally is that it allows a DoS
> > >
On Mon, Nov 27, 2017 at 08:57:32PM +0100, Michal Hocko wrote:
> On Mon 27-11-17 19:32:18, Michal Hocko wrote:
> > On Mon 27-11-17 09:25:16, Andi Kleen wrote:
> [...]
> > > The reason the limit was there originally is that it allows a DoS
> > > attack against the kernel by filling all unswappable
On Mon 27-11-17 19:32:18, Michal Hocko wrote:
> On Mon 27-11-17 09:25:16, Andi Kleen wrote:
[...]
> > The reason the limit was there originally is that it allows a DoS
> > attack against the kernel by filling all unswappable memory up with VMAs.
>
> We can reduce the effect by accounting vmas to
On Mon 27-11-17 20:18:00, Mikael Pettersson wrote:
> On Mon, Nov 27, 2017 at 11:12 AM, Michal Hocko wrote:
> > > I've kept the kernel tunable to not break the API towards user-space,
> > > but it's a no-op now. Also the distinction between split_vma() and
> > > __split_vma() disappears, so they
[resent because I can't type]
> vm.max_map_count
I always thought it is some kind of algorithmic complexity limiter and
kernel memory limiter. VMAs are under SLAB_ACCOUNT nowadays but ->mmap
list stays:
$ chgrep -e 'for.*vma = vma->vm_next' | wc -l
41
In particular readdir on /proc/*/map_files .
I'm saying
On Mon, Nov 27, 2017 at 6:25 PM, Andi Kleen wrote:
> It's an arbitrary scaling limit on how many mappings the process
> has. The more memory you have the bigger a problem it is. We've
> run into this problem too on larger systems.
>
> The reason the limit was there originally because it
On Mon, Nov 27, 2017 at 5:22 PM, Matthew Wilcox wrote:
>> Could you be more explicit about _why_ we need to remove this tunable?
>> I am not saying I disagree, the removal simplifies the code but I do not
>> really see any justification here.
>
> I imagine he started seeing random syscalls
On Mon, Nov 27, 2017 at 11:12 AM, Michal Hocko wrote:
> > I've kept the kernel tunable to not break the API towards user-space,
> > but it's a no-op now. Also the distinction between split_vma() and
> > __split_vma() disappears, so they are merged.
>
> Could you be more explicit about _why_ we
On Mon 27-11-17 09:25:16, Andi Kleen wrote:
> Michal Hocko writes:
> >
> > Could you be more explicit about _why_ we need to remove this tunable?
> > I am not saying I disagree, the removal simplifies the code but I do not
> > really see any justification here.
>
> It's an arbitrary scaling
Michal Hocko writes:
>
> Could you be more explicit about _why_ we need to remove this tunable?
> I am not saying I disagree, the removal simplifies the code but I do not
> really see any justification here.
It's an arbitrary scaling limit on how many mappings the process
has. The more
On Mon, Nov 27, 2017 at 11:12:32AM +0100, Michal Hocko wrote:
> On Sun 26-11-17 17:09:32, Mikael Pettersson wrote:
> > - Reaching the limit causes various memory management system calls to
> > fail with ENOMEM, which is a lie. Combined with the unpredictability
> > of the number of mappings
On Sun 26-11-17 17:09:32, Mikael Pettersson wrote:
> The `vm.max_map_count' sysctl limit is IMO useless and confusing, so
> this patch disables it.
>
> - Old ELF had a limit of 64K segments, making core dumps from processes
> with more mappings than that problematic, but that was fixed in March
34 matches