The issue with ftrace_buffer_size.sh is that it allocates 10% of the
server's memory. However, the tracing ring buffer is implemented using
a separate buffer for each CPU in the system, and writing a new value
to the buffer_size_kb file resizes every per-CPU buffer to that value.
On a system with more than 10 CPUs we will therefore run out of memory
sooner or later.

The patch makes sure the memory size is divided by the number of CPUs
in the system.
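The resulting sizing logic can be sketched with fixed example values
(the real script reads /proc/meminfo and /proc/cpuinfo at run time;
the numbers below are purely illustrative):

```shell
#!/bin/sh
# Illustrative values standing in for the live /proc reads.
free_mem=4096000          # MemFree in KB (example value)
LOOP=200                  # iterations, as in the test script
cpus=8                    # online CPU count (example value)

# Use 10% of free memory, spread over LOOP iterations and
# over all per-CPU ring buffers.
step=$(( free_mem / 10 / LOOP / cpus ))
echo "$step"
```

With 4 GB free and 8 CPUs this yields a step of 256 KB per write,
so the total allocation across all CPU buffers stays within the
intended 10% budget.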

Signed-off-by: Jiri Olsa <[email protected]>
Signed-off-by: Caspar Zhang <[email protected]>
---
 .../ftrace_stress/ftrace_buffer_size.sh            |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/testcases/kernel/tracing/ftrace_stress_test/ftrace_stress/ftrace_buffer_size.sh b/testcases/kernel/tracing/ftrace_stress_test/ftrace_stress/ftrace_buffer_size.sh
index af5a98e..b8f129d 100755
--- a/testcases/kernel/tracing/ftrace_stress_test/ftrace_stress/ftrace_buffer_size.sh
+++ b/testcases/kernel/tracing/ftrace_stress_test/ftrace_stress/ftrace_buffer_size.sh
@@ -17,7 +17,9 @@ LOOP=200
 
 # Use up to 10% of free memory
 free_mem=`cat /proc/meminfo | grep '^MemFree' | awk '{ print $2 }'`
-step=$(( $free_mem / 10 / $LOOP ))
+cpus=`cat /proc/cpuinfo | egrep "^processor.*:" | wc -l`
+step=$(( $free_mem / 10 / $LOOP / $cpus ))
+
 if [ $step -eq 0 ]; then
 	$step=1
 	LOOP=50
@@ -40,4 +42,3 @@ for ((; ;))
 
 	sleep 1
 }
-