Re: [RFC PATCH v3 2/7] proc: Reduce cache miss in {snmp,netstat}_seq_show

2016-09-21 Thread hejianet



On 9/22/16 2:24 AM, Marcelo wrote:

On Thu, Sep 22, 2016 at 12:18:46AM +0800, hejianet wrote:

Hi Marcelo

Sorry for the late reply, I just came back from a vacation.

Hi, no problem. Hope your batteries are recharged now :-)


On 9/14/16 7:55 PM, Marcelo wrote:

Hi Jia,

On Wed, Sep 14, 2016 at 01:58:42PM +0800, hejianet wrote:

Hi Marcelo


On 9/13/16 2:57 AM, Marcelo wrote:

On Fri, Sep 09, 2016 at 02:33:57PM +0800, Jia He wrote:

This is to use the generic interface snmp_get_cpu_field{,64}_batch to
aggregate the data by going through all the items of each cpu sequentially.
Then snmp_seq_show and netstat_seq_show are split into 2 parts to avoid build
warning "the frame size" larger than 1024 on s390.
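As an editorial aside, the aggregation idea can be sketched in userspace C (hypothetical array layout; the real code uses the kernel's per-cpu accessors). Folding one field at a time re-walks every CPU's data once per field, while the batched variant makes a single cpu-major pass and accumulates all fields into one scratch row:

```c
#define NCPUS   4
#define NFIELDS 6

/* field-major fold: one full walk over every CPU's data per field, which is
 * what calling snmp_fold_field() once per /proc column amounts to */
unsigned long fold_field(const unsigned long percpu[NCPUS][NFIELDS], int f)
{
    unsigned long sum = 0;
    for (int c = 0; c < NCPUS; c++)
        sum += percpu[c][f];
    return sum;
}

/* cpu-major batch: one sequential pass per CPU, accumulating all fields
 * into a scratch row -- the shape of snmp_get_cpu_field_batch() */
void fold_batch(unsigned long buff[NFIELDS],
                const unsigned long percpu[NCPUS][NFIELDS])
{
    for (int c = 0; c < NCPUS; c++)
        for (int f = 0; f < NFIELDS; f++)
            buff[f] += percpu[c][f];
}
```

The cpu-major pass touches each CPU's cache lines once instead of once per field, which is where the cache-miss reduction comes from.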

Yeah about that, did you test it with stack overflow detection?
These arrays can be quite large.

One more below..

Do you think it is acceptable if the stack usage is a little larger than 1024 bytes, e.g. 1120?
I can't find any other way to reduce the stack usage except to use "static" before
unsigned long buff[TCP_MIB_MAX]

PS. sizeof buff is about TCP_MIB_MAX(116)*8=928
B.R.

That's pretty much the question. Linux has the option on some archs to
run with 4Kb (4KSTACKS option), so this function alone would be using
25% of it in this last case. While on x86_64, it uses 16Kb (6538b8ea886e
("x86_64: expand kernel stack to 16K")).

Adding static to it is not an option as it actually makes the variable
shared amongst the CPUs (and then you have concurrency issues), plus the
fact that it's always allocated, even while not in use.
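As an editorial aside, a third option worth sketching (userspace C, hypothetical names) is a per-call zeroed heap buffer: each concurrent reader gets its own scratch row, avoiding both the large on-stack array and the sharing problem of a static one. In-kernel this would be kcalloc()/kfree(); calloc()/free() stand in here:

```c
#include <stdlib.h>

#define NFIELDS 116  /* stand-in for TCP_MIB_MAX */

unsigned long *fold_with_heap_scratch(const unsigned long *percpu, int ncpu)
{
    unsigned long *buff = calloc(NFIELDS, sizeof(*buff));  /* zeroed */

    if (!buff)
        return NULL;  /* allocation can fail; caller must check */
    for (int c = 0; c < ncpu; c++)
        for (int i = 0; i < NFIELDS; i++)
            buff[i] += percpu[c * NFIELDS + i];
    return buff;  /* caller frees */
}
```

The trade-off is an allocation (and a possible failure path) on every read of the /proc file.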

Others here certainly know better than me if it's okay to make such
usage of the stack.

What about this patch instead?
It is a trade-off. I split the aggregation process into 2 parts; it will
increase the cache misses a little bit, but it reduces the stack usage.
After this, the stack usage is 672 bytes:
objdump -d vmlinux | ./scripts/checkstack.pl ppc64 | grep seq_show
0xc07f7cc0 netstat_seq_show_tcpext.isra.3 [vmlinux]:672

diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index c6ee8a2..cc41590 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -486,22 +486,37 @@ static const struct file_operations snmp_seq_fops = {
   */
  static int netstat_seq_show_tcpext(struct seq_file *seq, void *v)
  {
-   int i;
-   unsigned long buff[LINUX_MIB_MAX];
+   int i, c;
+   unsigned long buff[LINUX_MIB_MAX/2 + 1];
 struct net *net = seq->private;

-   memset(buff, 0, sizeof(unsigned long) * LINUX_MIB_MAX);
+   memset(buff, 0, sizeof(unsigned long) * (LINUX_MIB_MAX/2 + 1));

 seq_puts(seq, "TcpExt:");
 for (i = 0; snmp4_net_list[i].name; i++)
 seq_printf(seq, " %s", snmp4_net_list[i].name);

 seq_puts(seq, "\nTcpExt:");
-   snmp_get_cpu_field_batch(buff, snmp4_net_list,
-net->mib.net_statistics);
-   for (i = 0; snmp4_net_list[i].name; i++)
+   for_each_possible_cpu(c) {
+   for (i = 0; i < LINUX_MIB_MAX/2; i++)
+   buff[i] += snmp_get_cpu_field(
+ net->mib.net_statistics,
+   c, snmp4_net_list[i].entry);
+   }
+   for (i = 0; i < LINUX_MIB_MAX/2; i++)
 seq_printf(seq, " %lu", buff[i]);

+   memset(buff, 0, sizeof(unsigned long) * (LINUX_MIB_MAX/2 + 1));
+   for_each_possible_cpu(c) {
+   for (i = LINUX_MIB_MAX/2; snmp4_net_list[i].name; i++)
+   buff[i - LINUX_MIB_MAX/2] += snmp_get_cpu_field(
+   net->mib.net_statistics,
+   c,
+   snmp4_net_list[i].entry);
+   }
+	for (i = LINUX_MIB_MAX/2; snmp4_net_list[i].name; i++)
+		seq_printf(seq, " %lu", buff[i - LINUX_MIB_MAX/2]);
+
 return 0;
  }
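As an editorial aside, the half-split above is one instance of a general pattern that can be sketched in userspace C (hypothetical sizes, not the kernel API): bound the scratch buffer by folding and emitting the item list in fixed-size chunks, re-zeroing the scratch between chunks:

```c
#include <string.h>

#define NCPUS  4
#define NITEMS 10
#define CHUNK  4   /* scratch is CHUNK entries instead of NITEMS */

/* Fold percpu[c * NITEMS + i] over all CPUs in CHUNK-sized slices,
 * reusing one small scratch buffer; out[] receives the full row. */
void fold_chunked(unsigned long out[NITEMS], const unsigned long *percpu)
{
    unsigned long buff[CHUNK];

    for (int start = 0; start < NITEMS; start += CHUNK) {
        int n = NITEMS - start < CHUNK ? NITEMS - start : CHUNK;

        memset(buff, 0, sizeof(buff));  /* fresh scratch for this slice */
        for (int c = 0; c < NCPUS; c++)
            for (int i = 0; i < n; i++)
                buff[i] += percpu[c * NITEMS + start + i];
        memcpy(out + start, buff, (size_t)n * sizeof(*buff));
    }
}
```

Each extra chunk costs one more pass over the per-CPU data (more cache misses) in exchange for a smaller frame, which is exactly the trade-off being discussed.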

Yep, it halves the stack usage, but it doesn't look good heh

But well, you may try to post the patchset (with or without this last
change, you pick) officially and see how it goes. As you're posting as
RFC, it's not being evaluated as seriously.

Thanks for the suggestion, I will remove it in a future patch version.


FWIW, I tested your patches, using your test and /proc/net/snmp file on
an x86_64 box with an Intel(R) Xeon(R) CPU E5-2643 v3.

Before the patches:

  Performance counter stats for './test /proc/net/snmp':

          5.225  cache-misses
 12.708.673.785  L1-dcache-loads
  1.288.450.174  L1-dcache-load-misses #   10,14% of all L1-dcache hits
  1.271.857.028  LLC-loads
          4.122  LLC-load-misses       #    0,00% of all LL-cache hits

9,174936524 seconds time elapsed

After:

  Performance counter stats for './test /proc/net/snmp':

          2.865  cache-misses
 30.203.883.807  L1-dcache-loads
  1.215.774.643  L1-dcache-load-misses #    4,03% of all L1-dcache hits
  1.181.662.831  LLC-loads
          2.685  LLC-load-misses       #    0,00%

Re: [RFC PATCH v3 2/7] proc: Reduce cache miss in {snmp,netstat}_seq_show

2016-09-21 Thread Marcelo
On Thu, Sep 22, 2016 at 12:18:46AM +0800, hejianet wrote:
> Hi Marcelo
> 
> Sorry for the late reply, I just came back from a vacation.

Hi, no problem. Hope your batteries are recharged now :-)

> 
> On 9/14/16 7:55 PM, Marcelo wrote:
> > Hi Jia,
> > 
> > On Wed, Sep 14, 2016 at 01:58:42PM +0800, hejianet wrote:
> > > Hi Marcelo
> > > 
> > > 
> > > On 9/13/16 2:57 AM, Marcelo wrote:
> > > > On Fri, Sep 09, 2016 at 02:33:57PM +0800, Jia He wrote:
> > > > > This is to use the generic interface snmp_get_cpu_field{,64}_batch to
> > > > > aggregate the data by going through all the items of each cpu 
> > > > > sequentially.
> > > > > Then snmp_seq_show and netstat_seq_show are split into 2 parts to 
> > > > > avoid build
> > > > > warning "the frame size" larger than 1024 on s390.
> > > > Yeah about that, did you test it with stack overflow detection?
> > > > These arrays can be quite large.
> > > > 
> > > > One more below..
> > > Do you think it is acceptable if the stack usage is a little larger than 
> > > 1024?
> > > e.g. 1120
> > > I can't find any other way to reduce the stack usage except use "static" 
> > > before
> > > unsigned long buff[TCP_MIB_MAX]
> > > 
> > > PS. sizeof buff is about TCP_MIB_MAX(116)*8=928
> > > B.R.
> > That's pretty much the question. Linux has the option on some archs to
> > run with 4Kb (4KSTACKS option), so this function alone would be using
> > 25% of it in this last case. While on x86_64, it uses 16Kb (6538b8ea886e
> > ("x86_64: expand kernel stack to 16K")).
> > 
> > Adding static to it is not an option as it actually makes the variable
> > shared amongst the CPUs (and then you have concurrency issues), plus the
> > fact that it's always allocated, even while not in use.
> > 
> > Others here certainly know better than me if it's okay to make such
> > usage of the stack.
> What about this patch instead?
> It is a trade-off. I split the aggregation process into 2 parts, it will
> increase the cache miss a little bit, but it can reduce the stack usage.
> After this, stack usage is 672bytes
> objdump -d vmlinux | ./scripts/checkstack.pl ppc64 | grep seq_show
> 0xc07f7cc0 netstat_seq_show_tcpext.isra.3 [vmlinux]:672
> 
> diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
> index c6ee8a2..cc41590 100644
> --- a/net/ipv4/proc.c
> +++ b/net/ipv4/proc.c
> @@ -486,22 +486,37 @@ static const struct file_operations snmp_seq_fops = {
>   */
>  static int netstat_seq_show_tcpext(struct seq_file *seq, void *v)
>  {
> -   int i;
> -   unsigned long buff[LINUX_MIB_MAX];
> +   int i, c;
> +   unsigned long buff[LINUX_MIB_MAX/2 + 1];
> struct net *net = seq->private;
> 
> -   memset(buff, 0, sizeof(unsigned long) * LINUX_MIB_MAX);
> +   memset(buff, 0, sizeof(unsigned long) * (LINUX_MIB_MAX/2 + 1));
> 
> seq_puts(seq, "TcpExt:");
> for (i = 0; snmp4_net_list[i].name; i++)
> seq_printf(seq, " %s", snmp4_net_list[i].name);
> 
> seq_puts(seq, "\nTcpExt:");
> -   snmp_get_cpu_field_batch(buff, snmp4_net_list,
> -net->mib.net_statistics);
> -   for (i = 0; snmp4_net_list[i].name; i++)
> +   for_each_possible_cpu(c) {
> +   for (i = 0; i < LINUX_MIB_MAX/2; i++)
> +   buff[i] += snmp_get_cpu_field(
> + net->mib.net_statistics,
> +   c, snmp4_net_list[i].entry);
> +   }
> +   for (i = 0; i < LINUX_MIB_MAX/2; i++)
> seq_printf(seq, " %lu", buff[i]);
> 
> +   memset(buff, 0, sizeof(unsigned long) * (LINUX_MIB_MAX/2 + 1));
> +   for_each_possible_cpu(c) {
> +   for (i = LINUX_MIB_MAX/2; snmp4_net_list[i].name; i++)
> +   buff[i - LINUX_MIB_MAX/2] += snmp_get_cpu_field(
> +   net->mib.net_statistics,
> +   c,
> +   snmp4_net_list[i].entry);
> +   }
> +	for (i = LINUX_MIB_MAX/2; snmp4_net_list[i].name; i++)
> +		seq_printf(seq, " %lu", buff[i - LINUX_MIB_MAX/2]);
> +
> return 0;
>  }

Yep, it halves the stack usage, but it doesn't look good heh

But well, you may try to post the patchset (with or without this last
change, you pick) officially and see how it goes. As you're posting as
RFC, it's not being evaluated as seriously.

FWIW, I tested your patches, using your test and /proc/net/snmp file on
an x86_64 box with an Intel(R) Xeon(R) CPU E5-2643 v3.

Before the patches:

 Performance counter stats for './test /proc/net/snmp':

          5.225  cache-misses
 12.708.673.785  L1-dcache-loads
  1.288.450.174  L1-dcache-load-misses #   10,14% of all L1-dcache hits
  1.271.857.028  LLC-loads
          4.122  LLC-load-misses       #    0,00% of all LL-cache hits

Re: [RFC PATCH v3 2/7] proc: Reduce cache miss in {snmp,netstat}_seq_show

2016-09-21 Thread hejianet

Hi Marcelo

Sorry for the late reply, I just came back from a vacation.

On 9/14/16 7:55 PM, Marcelo wrote:

Hi Jia,

On Wed, Sep 14, 2016 at 01:58:42PM +0800, hejianet wrote:

Hi Marcelo


On 9/13/16 2:57 AM, Marcelo wrote:

On Fri, Sep 09, 2016 at 02:33:57PM +0800, Jia He wrote:

This is to use the generic interface snmp_get_cpu_field{,64}_batch to
aggregate the data by going through all the items of each cpu sequentially.
Then snmp_seq_show and netstat_seq_show are split into 2 parts to avoid build
warning "the frame size" larger than 1024 on s390.

Yeah about that, did you test it with stack overflow detection?
These arrays can be quite large.

One more below..

Do you think it is acceptable if the stack usage is a little larger than 1024?
e.g. 1120
I can't find any other way to reduce the stack usage except use "static" before
unsigned long buff[TCP_MIB_MAX]

PS. sizeof buff is about TCP_MIB_MAX(116)*8=928
B.R.

That's pretty much the question. Linux has the option on some archs to
run with 4Kb (4KSTACKS option), so this function alone would be using
25% of it in this last case. While on x86_64, it uses 16Kb (6538b8ea886e
("x86_64: expand kernel stack to 16K")).

Adding static to it is not an option as it actually makes the variable
shared amongst the CPUs (and then you have concurrency issues), plus the
fact that it's always allocated, even while not in use.

Others here certainly know better than me if it's okay to make such
usage of the stack.

What about this patch instead?
It is a trade-off. I split the aggregation process into 2 parts; it will
increase the cache misses a little bit, but it reduces the stack usage.
After this, the stack usage is 672 bytes:
objdump -d vmlinux | ./scripts/checkstack.pl ppc64 | grep seq_show
0xc07f7cc0 netstat_seq_show_tcpext.isra.3 [vmlinux]:672

diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index c6ee8a2..cc41590 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -486,22 +486,37 @@ static const struct file_operations snmp_seq_fops = {
  */
 static int netstat_seq_show_tcpext(struct seq_file *seq, void *v)
 {
-   int i;
-   unsigned long buff[LINUX_MIB_MAX];
+   int i, c;
+   unsigned long buff[LINUX_MIB_MAX/2 + 1];
struct net *net = seq->private;

-   memset(buff, 0, sizeof(unsigned long) * LINUX_MIB_MAX);
+   memset(buff, 0, sizeof(unsigned long) * (LINUX_MIB_MAX/2 + 1));

seq_puts(seq, "TcpExt:");
for (i = 0; snmp4_net_list[i].name; i++)
seq_printf(seq, " %s", snmp4_net_list[i].name);

seq_puts(seq, "\nTcpExt:");
-   snmp_get_cpu_field_batch(buff, snmp4_net_list,
-net->mib.net_statistics);
-   for (i = 0; snmp4_net_list[i].name; i++)
+   for_each_possible_cpu(c) {
+   for (i = 0; i < LINUX_MIB_MAX/2; i++)
+   buff[i] += snmp_get_cpu_field(
+ net->mib.net_statistics,
+   c, snmp4_net_list[i].entry);
+   }
+   for (i = 0; i < LINUX_MIB_MAX/2; i++)
seq_printf(seq, " %lu", buff[i]);

+   memset(buff, 0, sizeof(unsigned long) * (LINUX_MIB_MAX/2 + 1));
+   for_each_possible_cpu(c) {
+   for (i = LINUX_MIB_MAX/2; snmp4_net_list[i].name; i++)
+   buff[i - LINUX_MIB_MAX/2] += snmp_get_cpu_field(
+   net->mib.net_statistics,
+   c,
+   snmp4_net_list[i].entry);
+   }
+	for (i = LINUX_MIB_MAX/2; snmp4_net_list[i].name; i++)
+		seq_printf(seq, " %lu", buff[i - LINUX_MIB_MAX/2]);
+
return 0;
 }


+static int netstat_seq_show_ipext(struct seq_file *seq, void *v)
+{
+   int i;
+   u64 buff64[IPSTATS_MIB_MAX];
+   struct net *net = seq->private;
seq_puts(seq, "\nIpExt:");
for (i = 0; snmp4_ipextstats_list[i].name != NULL; i++)
seq_printf(seq, " %s", snmp4_ipextstats_list[i].name);
seq_puts(seq, "\nIpExt:");

You're missing a memset() call here.

Not sure if you missed this one or not..

indeed, thanks
B.R.
Jia
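As an editorial aside, the missing memset() matters because "+=" accumulation into an on-stack array is only correct if the array is zeroed first; otherwise every counter starts from stack garbage. A minimal userspace sketch (hypothetical sizes, not the kernel code):

```c
#include <stdint.h>
#include <string.h>

#define NITEMS 8

/* Accumulate ncpu rows of percpu[] into buff[]; the memset() is the
 * call the review caught -- without it, buff[] starts uninitialized. */
void fold_ipext(uint64_t buff[NITEMS], const uint64_t *percpu, int ncpu)
{
    memset(buff, 0, NITEMS * sizeof(*buff));
    for (int c = 0; c < ncpu; c++)
        for (int i = 0; i < NITEMS; i++)
            buff[i] += percpu[c * NITEMS + i];
}
```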

Thanks,
Marcelo






Re: [RFC PATCH v3 2/7] proc: Reduce cache miss in {snmp,netstat}_seq_show

2016-09-14 Thread Marcelo
Hi Jia,

On Wed, Sep 14, 2016 at 01:58:42PM +0800, hejianet wrote:
> Hi Marcelo
> 
> 
> On 9/13/16 2:57 AM, Marcelo wrote:
> > On Fri, Sep 09, 2016 at 02:33:57PM +0800, Jia He wrote:
> > > This is to use the generic interface snmp_get_cpu_field{,64}_batch to
> > > aggregate the data by going through all the items of each cpu 
> > > sequentially.
> > > Then snmp_seq_show and netstat_seq_show are split into 2 parts to avoid 
> > > build
> > > warning "the frame size" larger than 1024 on s390.
> > Yeah about that, did you test it with stack overflow detection?
> > These arrays can be quite large.
> > 
> > One more below..
> Do you think it is acceptable if the stack usage is a little larger than 1024?
> e.g. 1120
> I can't find any other way to reduce the stack usage except use "static" 
> before
> unsigned long buff[TCP_MIB_MAX]
> 
> PS. sizeof buff is about TCP_MIB_MAX(116)*8=928
> B.R.

That's pretty much the question. Linux has the option on some archs to
run with 4Kb (4KSTACKS option), so this function alone would be using
25% of it in this last case. While on x86_64, it uses 16Kb (6538b8ea886e
("x86_64: expand kernel stack to 16K")).

Adding static to it is not an option as it actually makes the variable
shared amongst the CPUs (and then you have concurrency issues), plus the
fact that it's always allocated, even while not in use.

Others here certainly know better than me if it's okay to make such
usage of the stack.

> > > +static int netstat_seq_show_ipext(struct seq_file *seq, void *v)
> > > +{
> > > + int i;
> > > + u64 buff64[IPSTATS_MIB_MAX];
> > > + struct net *net = seq->private;
> > >   seq_puts(seq, "\nIpExt:");
> > >   for (i = 0; snmp4_ipextstats_list[i].name != NULL; i++)
> > >   seq_printf(seq, " %s", snmp4_ipextstats_list[i].name);
> > >   seq_puts(seq, "\nIpExt:");
> > You're missing a memset() call here.

Not sure if you missed this one or not..

Thanks,
Marcelo


Re: [RFC PATCH v3 2/7] proc: Reduce cache miss in {snmp,netstat}_seq_show

2016-09-13 Thread hejianet

Hi Marcelo


On 9/13/16 2:57 AM, Marcelo wrote:

On Fri, Sep 09, 2016 at 02:33:57PM +0800, Jia He wrote:

This is to use the generic interface snmp_get_cpu_field{,64}_batch to
aggregate the data by going through all the items of each cpu sequentially.
Then snmp_seq_show and netstat_seq_show are split into 2 parts to avoid build
warning "the frame size" larger than 1024 on s390.

Yeah about that, did you test it with stack overflow detection?
These arrays can be quite large.

One more below..

Do you think it is acceptable if the stack usage is a little larger than 1024 bytes, e.g. 1120?
I can't find any other way to reduce the stack usage except to use "static" before
unsigned long buff[TCP_MIB_MAX]

PS. sizeof buff is about TCP_MIB_MAX(116)*8=928
B.R.

Signed-off-by: Jia He 
---
  net/ipv4/proc.c | 106 +++-
  1 file changed, 74 insertions(+), 32 deletions(-)

diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index 9f665b6..c6fc80e 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -46,6 +46,8 @@
 #include 
 #include 
 
+#define TCPUDP_MIB_MAX max_t(u32, UDP_MIB_MAX, TCP_MIB_MAX)
+
 /*
  *	Report socket allocation statistics [m...@utu.fi]
  */
@@ -378,13 +380,15 @@ static void icmp_put(struct seq_file *seq)
 /*
  *	Called from the PROCfs module. This outputs /proc/net/snmp.
  */
-static int snmp_seq_show(struct seq_file *seq, void *v)
+static int snmp_seq_show_ipstats(struct seq_file *seq, void *v)
 {
 	int i;
+	u64 buff64[IPSTATS_MIB_MAX];
 	struct net *net = seq->private;
 
-	seq_puts(seq, "Ip: Forwarding DefaultTTL");
+	memset(buff64, 0, IPSTATS_MIB_MAX * sizeof(u64));
 
+	seq_puts(seq, "Ip: Forwarding DefaultTTL");
 	for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_ipstats_list[i].name);
 
@@ -393,57 +397,77 @@ static int snmp_seq_show(struct seq_file *seq, void *v)
 		   net->ipv4.sysctl_ip_default_ttl);
 
 	BUILD_BUG_ON(offsetof(struct ipstats_mib, mibs) != 0);
+	snmp_get_cpu_field64_batch(buff64, snmp4_ipstats_list,
+				   net->mib.ip_statistics,
+				   offsetof(struct ipstats_mib, syncp));
 	for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
-		seq_printf(seq, " %llu",
-			   snmp_fold_field64(net->mib.ip_statistics,
-					     snmp4_ipstats_list[i].entry,
-					     offsetof(struct ipstats_mib, syncp)));
+		seq_printf(seq, " %llu", buff64[i]);
 
-	icmp_put(seq);	/* RFC 2011 compatibility */
-	icmpmsg_put(seq);
+	return 0;
+}
+
+static int snmp_seq_show_tcp_udp(struct seq_file *seq, void *v)
+{
+	int i;
+	unsigned long buff[TCPUDP_MIB_MAX];
+	struct net *net = seq->private;
+
+	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
 
 	seq_puts(seq, "\nTcp:");
 	for (i = 0; snmp4_tcp_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_tcp_list[i].name);
 
 	seq_puts(seq, "\nTcp:");
+	snmp_get_cpu_field_batch(buff, snmp4_tcp_list,
+				 net->mib.tcp_statistics);
 	for (i = 0; snmp4_tcp_list[i].name != NULL; i++) {
 		/* MaxConn field is signed, RFC 2012 */
 		if (snmp4_tcp_list[i].entry == TCP_MIB_MAXCONN)
-			seq_printf(seq, " %ld",
-				   snmp_fold_field(net->mib.tcp_statistics,
-						   snmp4_tcp_list[i].entry));
+			seq_printf(seq, " %ld", buff[i]);
 		else
-			seq_printf(seq, " %lu",
-				   snmp_fold_field(net->mib.tcp_statistics,
-						   snmp4_tcp_list[i].entry));
+			seq_printf(seq, " %lu", buff[i]);
 	}
 
+	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
+
+	snmp_get_cpu_field_batch(buff, snmp4_udp_list,
+				 net->mib.udp_statistics);
 	seq_puts(seq, "\nUdp:");
 	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_udp_list[i].name);
-
 	seq_puts(seq, "\nUdp:");
 	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
-		seq_printf(seq, " %lu",
-			   snmp_fold_field(net->mib.udp_statistics,
-					   snmp4_udp_list[i].entry));
+		seq_printf(seq, " %lu", buff[i]);
+
+	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
 
 	/* the UDP and UDP-Lite MIBs are the same */
 	seq_puts(seq, "\nUdpLite:");
+	snmp_get_cpu_field_batch(buff, snmp4_udp_list,
+				 net->mib.udplite_statistics);
 	for (i = 0; snmp4_udp_list[i].name !=

Re: [RFC PATCH v3 2/7] proc: Reduce cache miss in {snmp,netstat}_seq_show

2016-09-13 Thread hejianet

Hi Marcelo


On 9/13/16 2:57 AM, Marcelo wrote:

On Fri, Sep 09, 2016 at 02:33:57PM +0800, Jia He wrote:

This is to use the generic interface snmp_get_cpu_field{,64}_batch to
aggregate the data by going through all the items of each cpu sequentially.
Then snmp_seq_show and netstat_seq_show are split into 2 parts to avoid build
warning "the frame size" larger than 1024 on s390.

Yeah about that, did you test it with stack overflow detection?
These arrays can be quite large.

One more below..

I found that scripts/checkstack.pl can analyze the stack usage statically:

[root@tian-lp1 kernel]# objdump -d vmlinux | scripts/checkstack.pl ppc64 | grep seq
0xc07d4b18 netstat_seq_show_tcpext.isra.7 [vmlinux]:  1120
0xc07ccbe8 fib_triestat_seq_show [vmlinux]:            496
0xc083e7a4 tcp6_seq_show [vmlinux]:                    480
0xc07d4908 snmp_seq_show_ipstats.isra.6 [vmlinux]:     464
0xc07d4d18 netstat_seq_show_ipext.isra.8 [vmlinux]:    464
0xc06f5bd8 proto_seq_show [vmlinux]:                   416
0xc07f5718 xfrm_statistics_seq_show [vmlinux]:         416
0xc07405b4 dev_seq_printf_stats [vmlinux]:             400

The stack usage in netstat_seq_show_tcpext seems too big.
I will consider how to reduce it.

B.R.
Jia

Signed-off-by: Jia He 
---
  net/ipv4/proc.c | 106 +++-
  1 file changed, 74 insertions(+), 32 deletions(-)

diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index 9f665b6..c6fc80e 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -46,6 +46,8 @@
  #include 
  #include 
  
+#define TCPUDP_MIB_MAX max_t(u32, UDP_MIB_MAX, TCP_MIB_MAX)

+
  /*
   *Report socket allocation statistics [m...@utu.fi]
   */
@@ -378,13 +380,15 @@ static void icmp_put(struct seq_file *seq)
  /*
   *Called from the PROCfs module. This outputs /proc/net/snmp.
   */
-static int snmp_seq_show(struct seq_file *seq, void *v)
+static int snmp_seq_show_ipstats(struct seq_file *seq, void *v)
  {
int i;
+   u64 buff64[IPSTATS_MIB_MAX];
struct net *net = seq->private;
  
-	seq_puts(seq, "Ip: Forwarding DefaultTTL");

+   memset(buff64, 0, IPSTATS_MIB_MAX * sizeof(u64));
  
+	seq_puts(seq, "Ip: Forwarding DefaultTTL");

for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
seq_printf(seq, " %s", snmp4_ipstats_list[i].name);
  
@@ -393,57 +397,77 @@ static int snmp_seq_show(struct seq_file *seq, void *v)

   net->ipv4.sysctl_ip_default_ttl);
  
  	BUILD_BUG_ON(offsetof(struct ipstats_mib, mibs) != 0);

+   snmp_get_cpu_field64_batch(buff64, snmp4_ipstats_list,
+  net->mib.ip_statistics,
+  offsetof(struct ipstats_mib, syncp));
for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
-   seq_printf(seq, " %llu",
-  snmp_fold_field64(net->mib.ip_statistics,
-snmp4_ipstats_list[i].entry,
-offsetof(struct ipstats_mib, 
syncp)));
+   seq_printf(seq, " %llu", buff64[i]);
  
-	icmp_put(seq);	/* RFC 2011 compatibility */

-   icmpmsg_put(seq);
+   return 0;
+}
+
+static int snmp_seq_show_tcp_udp(struct seq_file *seq, void *v)
+{
+   int i;
+   unsigned long buff[TCPUDP_MIB_MAX];
+   struct net *net = seq->private;
+
+   memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
  
  	seq_puts(seq, "\nTcp:");

for (i = 0; snmp4_tcp_list[i].name != NULL; i++)
seq_printf(seq, " %s", snmp4_tcp_list[i].name);
  
  	seq_puts(seq, "\nTcp:");

+   snmp_get_cpu_field_batch(buff, snmp4_tcp_list,
+net->mib.tcp_statistics);
for (i = 0; snmp4_tcp_list[i].name != NULL; i++) {
/* MaxConn field is signed, RFC 2012 */
if (snmp4_tcp_list[i].entry == TCP_MIB_MAXCONN)
-   seq_printf(seq, " %ld",
-  snmp_fold_field(net->mib.tcp_statistics,
-  snmp4_tcp_list[i].entry));
+   seq_printf(seq, " %ld", buff[i]);
else
-   seq_printf(seq, " %lu",
-  snmp_fold_field(net->mib.tcp_statistics,
-  snmp4_tcp_list[i].entry));
+   seq_printf(seq, " %lu", buff[i]);
}
  
+	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));

+
+   snmp_get_cpu_field_batch(buff, snmp4_udp_list,
+net->mib.udp_statistics);
seq_puts(seq, "\nUdp:");
for (i = 0; snmp4_udp_list[i].name != NULL; i++)
seq_printf(seq, " %s", snmp4_udp_list[i].name);
-
seq_puts(seq, "\nUdp:");
for (i = 0; snmp4_udp_list[i].name != NULL; i++)
-   seq_printf(seq, " %lu",
-   

Re: [RFC PATCH v3 2/7] proc: Reduce cache miss in {snmp,netstat}_seq_show

2016-09-12 Thread Marcelo
On Fri, Sep 09, 2016 at 02:33:57PM +0800, Jia He wrote:
> This is to use the generic interface snmp_get_cpu_field{,64}_batch to 
> aggregate the data by going through all the items of each cpu sequentially.
> Then snmp_seq_show and netstat_seq_show are split into 2 parts to avoid build
> warning "the frame size" larger than 1024 on s390.

Yeah about that, did you test it with stack overflow detection?
These arrays can be quite large.

One more below..

> 
> Signed-off-by: Jia He 
> ---
>  net/ipv4/proc.c | 106 +++-
>  1 file changed, 74 insertions(+), 32 deletions(-)
> 
> diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
> index 9f665b6..c6fc80e 100644
> --- a/net/ipv4/proc.c
> +++ b/net/ipv4/proc.c
> @@ -46,6 +46,8 @@
>  #include 
>  #include 
>  
> +#define TCPUDP_MIB_MAX max_t(u32, UDP_MIB_MAX, TCP_MIB_MAX)
> +
>  /*
>   *   Report socket allocation statistics [m...@utu.fi]
>   */
> @@ -378,13 +380,15 @@ static void icmp_put(struct seq_file *seq)
>  /*
>   *   Called from the PROCfs module. This outputs /proc/net/snmp.
>   */
> -static int snmp_seq_show(struct seq_file *seq, void *v)
> +static int snmp_seq_show_ipstats(struct seq_file *seq, void *v)
>  {
>   int i;
> + u64 buff64[IPSTATS_MIB_MAX];
>   struct net *net = seq->private;
>  
> - seq_puts(seq, "Ip: Forwarding DefaultTTL");
> + memset(buff64, 0, IPSTATS_MIB_MAX * sizeof(u64));
>  
> + seq_puts(seq, "Ip: Forwarding DefaultTTL");
>   for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
>   seq_printf(seq, " %s", snmp4_ipstats_list[i].name);
>  
> @@ -393,57 +397,77 @@ static int snmp_seq_show(struct seq_file *seq, void *v)
>  net->ipv4.sysctl_ip_default_ttl);
>  
>   BUILD_BUG_ON(offsetof(struct ipstats_mib, mibs) != 0);
> + snmp_get_cpu_field64_batch(buff64, snmp4_ipstats_list,
> +net->mib.ip_statistics,
> +offsetof(struct ipstats_mib, syncp));
>   for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
> - seq_printf(seq, " %llu",
> -snmp_fold_field64(net->mib.ip_statistics,
> -  snmp4_ipstats_list[i].entry,
> -  offsetof(struct ipstats_mib, syncp)));
> + seq_printf(seq, " %llu", buff64[i]);
>  
> - icmp_put(seq);  /* RFC 2011 compatibility */
> - icmpmsg_put(seq);
> + return 0;
> +}
> +
> +static int snmp_seq_show_tcp_udp(struct seq_file *seq, void *v)
> +{
> + int i;
> + unsigned long buff[TCPUDP_MIB_MAX];
> + struct net *net = seq->private;
> +
> + memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
>  
>   seq_puts(seq, "\nTcp:");
>   for (i = 0; snmp4_tcp_list[i].name != NULL; i++)
>   seq_printf(seq, " %s", snmp4_tcp_list[i].name);
>  
>   seq_puts(seq, "\nTcp:");
> + snmp_get_cpu_field_batch(buff, snmp4_tcp_list,
> +  net->mib.tcp_statistics);
>   for (i = 0; snmp4_tcp_list[i].name != NULL; i++) {
>   /* MaxConn field is signed, RFC 2012 */
>   if (snmp4_tcp_list[i].entry == TCP_MIB_MAXCONN)
> - seq_printf(seq, " %ld",
> -snmp_fold_field(net->mib.tcp_statistics,
> -snmp4_tcp_list[i].entry));
> + seq_printf(seq, " %ld", buff[i]);
>   else
> - seq_printf(seq, " %lu",
> -snmp_fold_field(net->mib.tcp_statistics,
> -snmp4_tcp_list[i].entry));
> + seq_printf(seq, " %lu", buff[i]);
>   }
>  
> + memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
> +
> + snmp_get_cpu_field_batch(buff, snmp4_udp_list,
> +  net->mib.udp_statistics);
>   seq_puts(seq, "\nUdp:");
>   for (i = 0; snmp4_udp_list[i].name != NULL; i++)
>   seq_printf(seq, " %s", snmp4_udp_list[i].name);
> -
>   seq_puts(seq, "\nUdp:");
>   for (i = 0; snmp4_udp_list[i].name != NULL; i++)
> - seq_printf(seq, " %lu",
> -snmp_fold_field(net->mib.udp_statistics,
> -snmp4_udp_list[i].entry));
> + seq_printf(seq, " %lu", buff[i]);
> +
> + memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
>  
>   /* the UDP and UDP-Lite MIBs are the same */
>   seq_puts(seq, "\nUdpLite:");
> + snmp_get_cpu_field_batch(buff, snmp4_udp_list,
> +  net->mib.udplite_statistics);
>   for (i = 0; snmp4_udp_list[i].name != NULL; i++)
>   seq_printf(seq, " %s", snmp4_udp_list[i].name);
> -
>   seq_puts(seq, "\nUdpLite:");
>   for (i = 0; snmp4_udp_list[i].name != NULL; i++)
> - 

[RFC PATCH v3 2/7] proc: Reduce cache miss in {snmp,netstat}_seq_show

2016-09-09 Thread Jia He
This patch uses the generic interface snmp_get_cpu_field{,64}_batch to
aggregate the data, walking all the items of each cpu sequentially.
snmp_seq_show and netstat_seq_show are then each split into 2 parts to avoid
the build warning about a frame size larger than 1024 bytes on s390.

Signed-off-by: Jia He 
---
 net/ipv4/proc.c | 106 +++-
 1 file changed, 74 insertions(+), 32 deletions(-)

diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index 9f665b6..c6fc80e 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -46,6 +46,8 @@
 #include 
 #include 
 
+#define TCPUDP_MIB_MAX max_t(u32, UDP_MIB_MAX, TCP_MIB_MAX)
+
 /*
  *	Report socket allocation statistics [m...@utu.fi]
  */
@@ -378,13 +380,15 @@ static void icmp_put(struct seq_file *seq)
 /*
  *	Called from the PROCfs module. This outputs /proc/net/snmp.
  */
-static int snmp_seq_show(struct seq_file *seq, void *v)
+static int snmp_seq_show_ipstats(struct seq_file *seq, void *v)
 {
 	int i;
+	u64 buff64[IPSTATS_MIB_MAX];
 	struct net *net = seq->private;
 
-	seq_puts(seq, "Ip: Forwarding DefaultTTL");
+	memset(buff64, 0, IPSTATS_MIB_MAX * sizeof(u64));
 
+	seq_puts(seq, "Ip: Forwarding DefaultTTL");
 	for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_ipstats_list[i].name);
 
@@ -393,57 +397,77 @@ static int snmp_seq_show(struct seq_file *seq, void *v)
 		   net->ipv4.sysctl_ip_default_ttl);
 
 	BUILD_BUG_ON(offsetof(struct ipstats_mib, mibs) != 0);
+	snmp_get_cpu_field64_batch(buff64, snmp4_ipstats_list,
+				   net->mib.ip_statistics,
+				   offsetof(struct ipstats_mib, syncp));
 	for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
-		seq_printf(seq, " %llu",
-			   snmp_fold_field64(net->mib.ip_statistics,
-					     snmp4_ipstats_list[i].entry,
-					     offsetof(struct ipstats_mib, syncp)));
+		seq_printf(seq, " %llu", buff64[i]);
 
-	icmp_put(seq);	/* RFC 2011 compatibility */
-	icmpmsg_put(seq);
+	return 0;
+}
+
+static int snmp_seq_show_tcp_udp(struct seq_file *seq, void *v)
+{
+	int i;
+	unsigned long buff[TCPUDP_MIB_MAX];
+	struct net *net = seq->private;
+
+	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
 
 	seq_puts(seq, "\nTcp:");
 	for (i = 0; snmp4_tcp_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_tcp_list[i].name);
 
 	seq_puts(seq, "\nTcp:");
+	snmp_get_cpu_field_batch(buff, snmp4_tcp_list,
+				 net->mib.tcp_statistics);
 	for (i = 0; snmp4_tcp_list[i].name != NULL; i++) {
 		/* MaxConn field is signed, RFC 2012 */
 		if (snmp4_tcp_list[i].entry == TCP_MIB_MAXCONN)
-			seq_printf(seq, " %ld",
-				   snmp_fold_field(net->mib.tcp_statistics,
-						   snmp4_tcp_list[i].entry));
+			seq_printf(seq, " %ld", buff[i]);
 		else
-			seq_printf(seq, " %lu",
-				   snmp_fold_field(net->mib.tcp_statistics,
-						   snmp4_tcp_list[i].entry));
+			seq_printf(seq, " %lu", buff[i]);
 	}
 
+	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
+
+	snmp_get_cpu_field_batch(buff, snmp4_udp_list,
+				 net->mib.udp_statistics);
 	seq_puts(seq, "\nUdp:");
 	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_udp_list[i].name);
-
 	seq_puts(seq, "\nUdp:");
 	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
-		seq_printf(seq, " %lu",
-			   snmp_fold_field(net->mib.udp_statistics,
-					   snmp4_udp_list[i].entry));
+		seq_printf(seq, " %lu", buff[i]);
+
+	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
 
 	/* the UDP and UDP-Lite MIBs are the same */
 	seq_puts(seq, "\nUdpLite:");
+	snmp_get_cpu_field_batch(buff, snmp4_udp_list,
+				 net->mib.udplite_statistics);
 	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_udp_list[i].name);
-
 	seq_puts(seq, "\nUdpLite:");
 	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
-		seq_printf(seq, " %lu",
-			   snmp_fold_field(net->mib.udplite_statistics,
-					   snmp4_udp_list[i].entry));
+		seq_printf(seq, " %lu", buff[i]);
 
 	seq_putc(seq, '\n');
 	return 0;
 }
 
+static
