On Wed, 18 Oct 2017, Yang Shi wrote:
> > Yes, this should catch occurrences of "huge unreclaimable slabs", right?
>
> Yes, it sounds so. Although a single "huge" unreclaimable slab might not
> result in excessive slab usage as a whole, this would help to filter out
> "small" unreclaimable slabs.
On Wed, 18 Oct 2017, Yang Shi wrote:
> > > > Please simply dump statistics for all slab caches where the memory
> > > > footprint is greater than 5% of system memory.
> > >
> > > Unconditionally? User controllable?
> >
> > Unconditionally, it's a single line of output per slab cache and there
>
On Mon 16-10-17 17:15:31, David Rientjes wrote:
> Please simply dump statistics for all slab caches where the memory
> footprint is greater than 5% of system memory.
Unconditionally? User controllable?
--
Michal Hocko
SUSE Labs
On Wed, 11 Oct 2017, Yang Shi wrote:
> @@ -161,6 +162,25 @@ static bool oom_unkillable_task(struct task_struct *p,
> return false;
> }
>
> +/*
> + * Print out unreclaimable slabs info when the amount of unreclaimable
> + * slabs is greater than all user memory (LRU pages)
> + */
> +static
The kernel may panic when an OOM happens without a killable process;
sometimes this is caused by huge unreclaimable slabs used by the kernel.
Although kdump could help debug such a problem, kdump is not available
on all architectures and it might malfunction sometimes.
And, since the kernel has already panicked, it
On Sat 07-10-17 00:37:55, Yang Shi wrote:
>
>
> On 10/6/17 2:37 AM, Michal Hocko wrote:
> > On Thu 05-10-17 05:29:10, Yang Shi wrote:
[...]
> > > + list_for_each_entry_safe(s, s2, &slab_caches, list) {
> > > + if (!is_root_cache(s) || (s->flags & SLAB_RECLAIM_ACCOUNT))
> > > +
Hi Yang,
[auto build test ERROR on mmotm/master]
[also build test ERROR on v4.14-rc3 next-20170929]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
On Wed 04-10-17 02:06:17, Yang Shi wrote:
> +static bool is_dump_unreclaim_slabs(void)
> +{
> + unsigned long nr_lru;
> +
> + nr_lru = global_node_page_state(NR_ACTIVE_ANON) +
> + global_node_page_state(NR_INACTIVE_ANON) +
> +