ftrace can fail to allocate the per-CPU ring buffer on systems with a large
number of CPUs coupled with large amounts of memory sitting in the page
cache. Currently the ring buffer allocation doesn't retry in the VM
implementation even if direct reclaim made some progress but still wasn't
able to find a free page. On retrying, I see that the allocations almost
always succeed. The retry doesn't happen because __GFP_NORETRY is used in
the tracer to prevent the case where we might OOM; however, if we simply
drop __GFP_NORETRY, we risk destabilizing the system if the OOM killer is
triggered. To prevent this situation, use the __GFP_DONTOOM flag introduced
in earlier patches while dropping __GFP_NORETRY.
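
For illustration, a user-space toy model of the intended semantics (this is
not kernel code; the real __GFP_DONTOOM handling lives in the page allocator
patches earlier in this series, and the names here are made up for the
sketch): with __GFP_NORETRY the first miss fails even though reclaim could
have freed pages, while __GFP_DONTOOM keeps retrying and only fails, rather
than OOM-killing, once reclaim stops making progress.

#include <stdbool.h>
#include <stdio.h>

static int free_pages;          /* pages currently free in the toy pool */
static int reclaimable = 3;     /* pages direct reclaim can still recover */

/* One direct-reclaim pass: returns true if it freed anything. */
static bool direct_reclaim(void)
{
	if (reclaimable > 0) {
		reclaimable--;
		free_pages++;
		return true;
	}
	return false;
}

/* Simplified allocator slowpath: retry while reclaim makes progress. */
static bool alloc_page_model(bool noretry, bool dontoom)
{
	for (;;) {
		if (free_pages > 0) {
			free_pages--;
			return true;            /* allocation succeeded */
		}
		if (noretry)
			return false;           /* __GFP_NORETRY: one shot only */
		if (direct_reclaim())
			continue;               /* progress made: retry */
		if (dontoom)
			return false;           /* __GFP_DONTOOM: fail, no OOM kill */
		printf("would invoke OOM killer here\n");
		return false;
	}
}

int main(void)
{
	printf("NORETRY: %s\n", alloc_page_model(true, false) ? "ok" : "fail");
	printf("DONTOOM: %s\n", alloc_page_model(false, true) ? "ok" : "fail");
	return 0;
}

Run as-is, this prints "NORETRY: fail" followed by "DONTOOM: ok", which
matches the retry behavior described above.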

With this change, the following succeeds without destabilizing a system
with 8 CPU cores and 4GB of memory:
echo 100000 > /sys/kernel/debug/tracing/buffer_size_kb
On an 8-core system, that would allocate ~800MB.
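
That figure follows from buffer_size_kb being a per-CPU knob; a quick
sanity check of the arithmetic (plain user-space C, nothing kernel
specific):

#include <stdio.h>

int main(void)
{
	long per_cpu_kb = 100000;   /* value written to buffer_size_kb */
	long ncpus = 8;             /* CPUs in the example system */

	/* buffer_size_kb is per-CPU, so the total scales with CPU count */
	printf("total: %ld KB (~%ld MB)\n",
	       per_cpu_kb * ncpus, per_cpu_kb * ncpus / 1000);
	return 0;
}

This prints "total: 800000 KB (~800 MB)".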

Cc: Alexander Duyck <alexander.h.du...@intel.com>
Cc: Mel Gorman <mgor...@suse.de>
Cc: Hao Lee <haolee.sw...@gmail.com>
Cc: Vladimir Davydov <vdavydov....@gmail.com>
Cc: Johannes Weiner <han...@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo....@lge.com>
Cc: Steven Rostedt <rost...@goodmis.org>
Cc: linux...@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Joel Fernandes <joe...@google.com>
---
 kernel/trace/ring_buffer.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 4ae268e687fe..b1cdcac6ca89 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1141,7 +1141,7 @@ static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
                 * not destabilized.
                 */
                bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
-                                   GFP_KERNEL | __GFP_NORETRY,
+                                   GFP_KERNEL | __GFP_DONTOOM,
                                    cpu_to_node(cpu));
                if (!bpage)
                        goto free_pages;
@@ -1149,7 +1149,7 @@ static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
                list_add(&bpage->list, pages);
 
                page = alloc_pages_node(cpu_to_node(cpu),
-                                       GFP_KERNEL | __GFP_NORETRY, 0);
+                                       GFP_KERNEL | __GFP_DONTOOM, 0);
                if (!page)
                        goto free_pages;
                bpage->page = page_address(page);
-- 
2.13.2.725.g09c95d1e9-goog
