On Wed, Aug 15, 2018 at 10:37:28AM -0700, Andy Lutomirski wrote:
>
>
> > On Aug 15, 2018, at 10:32 AM, Shakeel Butt wrote:
> >
> >> On Wed, Aug 15, 2018 at 10:26 AM Roman Gushchin wrote:
> >>
> >>> On Wed, Aug 15, 2018 at 10:12:42AM -0700, Andy Lutomirski wrote:
> >>>
> >>>
> > On Aug 15, 2018, at 9:55 AM, Roman Gushchin wrote:
On Thu, Aug 16, 2018 at 08:35:09AM +0200, Michal Hocko wrote:
> On Wed 15-08-18 13:20:44, Johannes Weiner wrote:
> [...]
> > This is completely backwards.
> >
> > We respect the limits unless there is a *really* strong reason not
> > to. The only situations I can think of is during OOM kills to avoid
> > memory deadlocks and during packet reception for correctness issues
On Wed 15-08-18 13:20:44, Johannes Weiner wrote:
[...]
> This is completely backwards.
>
> We respect the limits unless there is a *really* strong reason not
> to. The only situations I can think of is during OOM kills to avoid
> memory deadlocks and during packet reception for correctness issues
> On Aug 15, 2018, at 10:32 AM, Shakeel Butt wrote:
>
>> On Wed, Aug 15, 2018 at 10:26 AM Roman Gushchin wrote:
>>
>>> On Wed, Aug 15, 2018 at 10:12:42AM -0700, Andy Lutomirski wrote:
>>>
>>>
> On Aug 15, 2018, at 9:55 AM, Roman Gushchin wrote:
>
>> On Wed, Aug 15, 2018 at 12:39:23PM -0400, Johannes Weiner wrote:
On Wed, Aug 15, 2018 at 10:26 AM Roman Gushchin wrote:
>
> On Wed, Aug 15, 2018 at 10:12:42AM -0700, Andy Lutomirski wrote:
> >
> >
> > > On Aug 15, 2018, at 9:55 AM, Roman Gushchin wrote:
> > >
> > >> On Wed, Aug 15, 2018 at 12:39:23PM -0400, Johannes Weiner wrote:
> > >>> On Tue, Aug 14, 2018 at 05:36:19PM -0700, Roman Gushchin wrote:
On Wed, Aug 15, 2018 at 10:12:42AM -0700, Andy Lutomirski wrote:
>
>
> > On Aug 15, 2018, at 9:55 AM, Roman Gushchin wrote:
> >
> >> On Wed, Aug 15, 2018 at 12:39:23PM -0400, Johannes Weiner wrote:
> >>> On Tue, Aug 14, 2018 at 05:36:19PM -0700, Roman Gushchin wrote:
> >>> @@ -224,9 +224,14 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
On Wed, Aug 15, 2018 at 09:55:17AM -0700, Roman Gushchin wrote:
> On Wed, Aug 15, 2018 at 12:39:23PM -0400, Johannes Weiner wrote:
> > On Tue, Aug 14, 2018 at 05:36:19PM -0700, Roman Gushchin wrote:
> > > @@ -224,9 +224,14 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
On Tue, Aug 14, 2018 at 06:18:01PM -0700, Shakeel Butt wrote:
> On Tue, Aug 14, 2018 at 5:37 PM Roman Gushchin wrote:
> >
> > If CONFIG_VMAP_STACK is set, kernel stacks are allocated
> > using __vmalloc_node_range() with __GFP_ACCOUNT. So kernel
> > stack pages are charged against corresponding memory cgroups
> > on allocation and uncharged on releasing them.
> On Aug 15, 2018, at 9:55 AM, Roman Gushchin wrote:
>
>> On Wed, Aug 15, 2018 at 12:39:23PM -0400, Johannes Weiner wrote:
>>> On Tue, Aug 14, 2018 at 05:36:19PM -0700, Roman Gushchin wrote:
>>> @@ -224,9 +224,14 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
On Wed, Aug 15, 2018 at 12:39:23PM -0400, Johannes Weiner wrote:
> On Tue, Aug 14, 2018 at 05:36:19PM -0700, Roman Gushchin wrote:
> > @@ -224,9 +224,14 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
> > return s->addr;
> > }
> >
> > + /*
> > + * Allocated stacks are cached and later reused by new threads,
On Tue, Aug 14, 2018 at 05:36:19PM -0700, Roman Gushchin wrote:
> @@ -224,9 +224,14 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
> return s->addr;
> }
>
> + /*
> + * Allocated stacks are cached and later reused by new threads,
On Tue 14-08-18 17:36:19, Roman Gushchin wrote:
> If CONFIG_VMAP_STACK is set, kernel stacks are allocated
> using __vmalloc_node_range() with __GFP_ACCOUNT. So kernel
> stack pages are charged against corresponding memory cgroups
> on allocation and uncharged on releasing them.
>
> The problem is that we do cache kernel stacks in small
> per-cpu caches and do reuse them for new tasks, which can
> belong to different memory cgroups.
On Tue, Aug 14, 2018 at 5:37 PM Roman Gushchin wrote:
>
> If CONFIG_VMAP_STACK is set, kernel stacks are allocated
> using __vmalloc_node_range() with __GFP_ACCOUNT. So kernel
> stack pages are charged against corresponding memory cgroups
> on allocation and uncharged on releasing them.
>
> The problem is that we do cache kernel stacks in small
> per-cpu caches and do reuse them for new tasks, which can
> belong to different memory cgroups.
If CONFIG_VMAP_STACK is set, kernel stacks are allocated
using __vmalloc_node_range() with __GFP_ACCOUNT. So kernel
stack pages are charged against corresponding memory cgroups
on allocation and uncharged on releasing them.

The problem is that we do cache kernel stacks in small
per-cpu caches and do reuse them for new tasks, which
can belong to different memory cgroups.