Re: net-snmp-5.4.2.1 snmpd cpu usage 100%

2009-11-25 Thread bheemesh v
Hello,

ls does not take that long, and the data is available in /proc/net/dev on the
mips filesystem.

I have this problem specifically with large VLAN creations, which mean a large
amount of interface data and IP data; my concern is specific to these areas.
When no large set of VLANs is available, there is not much to update for
IF-MIB and IP-MIB. Excuse me if I am not making sense here, as I am a novice
in the net-snmp area.

Frankly, when no large set of VLANs is present, the snmpd process shows 0% in
top output. But with anything above 1000 VLANs, it shows 100%.

Let me know.

Thanks.
BR,
Bheemesh



On Sat, Nov 21, 2009 at 2:24 AM, Mike Ayers mike_ay...@tvworks.com wrote:

  From: bheemesh v [mailto:bheem...@gmail.com]
  Sent: Friday, November 20, 2009 12:12 AM

  Yes, the CPU stays at 100% forever; I think your guess might be right. But
  as an observation, when I had fewer than 2000 VLANs this was not the case:
  it used to fluctuate between 10%, 40% and 100%, but was predominantly 100%.

 H...

I once did some testing on performance of directories in the ext2
 filesystem.  I found that directory operations - list, add a file, remove a
 file, etc.  were frisky up until about 1000 entries.  By 2000 entries there
 was a perceptible slowdown.  With 4000 entries in a directory, it took over
 a day just to remove all the files.  My conclusion was that the directory
 operations were iterating the entire directory list for each operation,
 leading to quadratic performance degradation.  I saw the same problem
 adding routes, with the same thresholds.  If this is your issue, you will
 have problems no matter what, but heavy caching should help matters a
 little.  To test this, do an ls in the directory that holds all the vlans.


HTH,

 Mike


--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
Net-snmp-users mailing list
Net-snmp-users@lists.sourceforge.net
Please see the following page to unsubscribe or change other options:
https://lists.sourceforge.net/lists/listinfo/net-snmp-users


Re: net-snmp-5.4.2.1 snmpd cpu usage 100%

2009-11-20 Thread bheemesh v
Hello Wes Hardaker,

I have answers to some of your questions again:

WH: 2) make sure the code is supporting caching.  With 4000 entries in the
WH: /proc/net/dev file you *don't* want to re-read it frequently.

As said earlier from the strace output, and also while attaching snmpd to gdb
to take frequent backtraces, I found that _cache_load was being called
frequently.
One sample backtrace goes like this:

--
#0  0x00b74d60 in write () from /lib64/libc.so.6
#1  0x00b1d9dc in _IO_new_file_write () from /lib64/libc.so.6
#2  0x00b1d570 in new_do_write () from /lib64/libc.so.6
#3  0x00b1d970 in _IO_new_do_write () from /lib64/libc.so.6
#4  0x00b1d724 in _IO_new_file_xsputn () from /lib64/libc.so.6
#5  0x00af83f0 in vfprintf () from /lib64/libc.so.6
#6  0x00afe1f4 in fprintf () from /lib64/libc.so.6
#7  0x00835948 in log_handler_file () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmp.so.15
#8  0x00835bb4 in snmp_log_string () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmp.so.15
#9  0x00835ccc in snmp_vlog () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmp.so.15
#10 0x00830b84 in debugmsg () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmp.so.15
#11 0x00867bc0 in netsnmp_compare_netsnmp_index () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmp.so.15
#12 0x008682fc in array_qsort () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmp.so.15
#13 0x008684c0 in array_qsort () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmp.so.15
#14 0x008684c0 in array_qsort () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmp.so.15
#15 0x0086861c in Sort_Array () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmp.so.15
#16 0x00869094 in netsnmp_binary_array_get () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmp.so.15
#17 0x0086925c in netsnmp_binary_array_insert () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmp.so.15
#18 0x008691d0 in _ba_insert () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmp.so.15
#19 0x007374bc in CONTAINER_INSERT_HELPER () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmpmibs.so.15
#20 0x007373fc in CONTAINER_INSERT () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmpmibs.so.15
#21 0x00737238 in _netsnmp_ioctl_ipaddress_container_load_v4 () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmpmibs.so.15
#22 0x00732e60 in netsnmp_arch_ipaddress_container_load () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmpmibs.so.15
#23 0x00731bc8 in netsnmp_access_ipaddress_container_load () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmpmibs.so.15
#24 0x0070bb88 in ipAddressTable_container_load () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmpmibs.so.15
#25 0x0070a49c in _cache_load () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmpmibs.so.15
#26 0x005eb948 in _cache_load () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmphelpers.so.15
#27 0x005ea574 in _timer_reload () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmphelpers.so.15
#28 0x00843ae8 in run_alarms () from
/opt/nokiasiemens/SS_NetSnmp//lib/libnetsnmp.so.15
#29 0x000120005498 in receive ()
#30 0x0001200049f0 in main ()

--
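
In the trace above, array_qsort runs from inside netsnmp_binary_array_insert
(a sort is triggered by a get during each insert), so the container re-sorts
while the table is still being loaded. As a rough illustration only
(hypothetical Python, not net-snmp's C code), here is why sort-per-insert
scales so much worse than one sort after loading:

```python
import random

def load_sort_every_insert(values):
    """Mimic a container that re-sorts after every insert (as the
    backtrace suggests: array_qsort runs inside each insert)."""
    arr, work = [], 0
    for v in values:
        arr.append(v)
        arr.sort()          # a sort on every insert -> superlinear total cost
        work += len(arr)    # crude proxy for elements touched per sort
    return arr, work

def load_sort_once(values):
    """Insert everything, sort once when loading is done."""
    arr = list(values)
    arr.sort()              # one O(n log n) pass
    return arr, len(arr)

vals = random.sample(range(100000), 4000)   # ~4000 "interfaces"
a1, w1 = load_sort_every_insert(vals)
a2, w2 = load_sort_once(vals)
assert a1 == a2            # same final table either way
print(w1, w2)              # work proxy: millions vs thousands
```

With 4000 entries the per-insert variant touches millions of elements where a
single final sort touches thousands; if the real container behaves this way
during _cache_load, that alone could explain sustained 100% CPU at large VLAN
counts.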


WH: 3) Is it possible that parsing is failing?  I.e., is it possible there is
WH:    a bug in the code that is preventing the function from ever quitting?
WH:    Does the CPU stay at 100% forever or does it eventually drop back down?

Yes, the CPU stays at 100% forever; I think your guess might be right. But as
an observation, when I had fewer than 2000 VLANs this was not the case: it
used to fluctuate between 10%, 40% and 100%, but was predominantly 100%.

WH: If it takes too long to fill the cache because of that many entries, you
WH: may want to increase the caching time for that particular code.

But from the gprof output I attached it does not look like that; of course, I
would like to introduce a cache timeout in ipAddressTable_initialise().
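
For what it's worth, the guard a cache timeout provides can be sketched as
follows (a hypothetical Python illustration of the pattern, not net-snmp's
actual cache API):

```python
import time

class TimedCache:
    """Reload data only when the cached copy is older than `timeout`
    seconds -- the pattern a cache timeout in a table initialise
    routine would give (hypothetical sketch, not net-snmp code)."""
    def __init__(self, loader, timeout, clock=time.monotonic):
        self.loader, self.timeout, self.clock = loader, timeout, clock
        self.data, self.loaded_at = None, None
        self.loads = 0
    def get(self):
        now = self.clock()
        if self.loaded_at is None or now - self.loaded_at > self.timeout:
            self.data = self.loader()   # the expensive container load
            self.loaded_at = now
            self.loads += 1
        return self.data

# Simulated clock so the behaviour is deterministic:
t = [0.0]
cache = TimedCache(loader=lambda: "table-contents", timeout=30,
                   clock=lambda: t[0])
for _ in range(100):         # 100 polls inside the timeout window
    cache.get()
assert cache.loads == 1      # only one expensive load
t[0] = 31.0                  # clock moves past the timeout
cache.get()
assert cache.loads == 2      # now it reloads once more
```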

Let me know.

Thanks very much.

Best Regards,
Bheemesh




On Fri, Nov 20, 2009 at 12:52 PM, bheemesh v bheem...@gmail.com wrote:

 Hello Wes Hardaker,

 Thanks very much for your valuable inputs here.
 We are starting snmpd with the -f option, so I guess I am doing the right
 thing there.

 I am attaching a sample gprof output file for your reference, generated from
 my HW environment.
 Maybe this can give better data for the investigation.

 I will get back on the remaining questions.
 But as an observation on the cache timeout: it is being used in the ifTable
 initialization, but not in the ipAddressTable initialization.
 Only the cache flags are being used in common.

 Let me know.

 Thanks very much.

 BR,
 Bheemesh





 On Thu, Nov 19, 2009 at 7:10 AM, Wes Hardaker 
 harda...@users.sourceforge.net wrote:


 bv Has anybody done so far profiling with gprof or any 

RE: net-snmp-5.4.2.1 snmpd cpu usage 100%

2009-11-20 Thread Mike Ayers
 From: bheemesh v [mailto:bheem...@gmail.com]
 Sent: Friday, November 20, 2009 12:12 AM

 Yes, the CPU stays at 100% forever; I think your guess might be right. But
 as an observation, when I had fewer than 2000 VLANs this was not the case:
 it used to fluctuate between 10%, 40% and 100%, but was predominantly 100%.

H...

I once did some testing on performance of directories in the ext2 
filesystem.  I found that directory operations - list, add a file, remove a 
file, etc.  were frisky up until about 1000 entries.  By 2000 entries there was 
a perceptible slowdown.  With 4000 entries in a directory, it took over a day 
just to remove all the files.  My conclusion was that the directory operations 
were iterating the entire directory list for each operation, leading to 
quadratic performance degradation.  I saw the same problem adding routes, 
with the same thresholds.  If this is your issue, you will have problems no 
matter what, but heavy caching should help matters a little.  To test this, do 
an ls in the directory that holds all the vlans.
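
Mike's `ls` test can be scripted; a hypothetical sketch that creates N entries
in a scratch directory and times one full listing (timings are
filesystem-dependent, so nothing is asserted about them):

```python
import os
import tempfile
import time

def scan_time(n):
    """Create n empty files in a scratch directory and time one full
    directory listing (a rough stand-in for the `ls` test)."""
    with tempfile.TemporaryDirectory() as d:
        for i in range(n):
            open(os.path.join(d, f"vlan{i}"), "w").close()
        t0 = time.perf_counter()
        entries = os.listdir(d)      # one full scan of the directory
        elapsed = time.perf_counter() - t0
    return len(entries), elapsed

for n in (1000, 2000, 4000):
    count, secs = scan_time(n)
    assert count == n                # every entry visible in one pass
    print(f"{n} entries listed in {secs:.4f}s")
```

Note that ext2 stores directories as linear lists, so per-entry operations
degrade with size; filesystems with hashed directory indexes largely avoid
this, which is why results will vary by platform.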


HTH,

Mike



Re: net-snmp-5.4.2.1 snmpd cpu usage 100%

2009-11-18 Thread Wes Hardaker

bv Has anybody so far done profiling with gprof or any equivalent tool
bv for source-code profiling on net-snmp-5.4.2.1 or any previous version?
bv So far we are finding it tough to generate the gmon.out file for
bv profiling data generation.

I have, but it's been a while.  In the past I successfully used gprof to
profile various aspects of the agent.

bv We have seen possible causes mentioned such as the start directory,
bv multi-threaded programs being unsuitable for profiling, etc.

We don't use multithreading.  Do make sure you run the agent with -f though.

bv Also, so far in this mail chain no one has given any leads; is it because
bv my question is ambiguous, or do I have to give a bit more background
bv information regarding our system setup?

Actually, I think you've done a really nice job explaining your
problem (much better than our average user).  Unfortunately, your
problem is also fairly complex which makes it hard to answer quickly
(and a large number of developers got very busy with other aspects of
their life and jobs all at the same time).

 The netsnmp_arch_interface_container_load() function from
 if-mib/data_access/interface_linux.c has a file pointer to /proc/net/dev,
 from which it tries to get updates for each interface in a while loop; this
 takes too much CPU time when we have many VLAN interfaces defined on our
 server, loading the CPU to 100%.

There are a few things I can think of:

1) the code in question isn't well optimized for 4000 vlans and takes a
   long time to load.  I'm actually not sure, but I'd think it would
   actually be ok.
2) make sure the code is supporting caching.  With 4000 entries in the
   /proc/net/dev file you *don't* want to re-read it frequently.
3) Is it possible that parsing is failing?  IE, is it possible there is
   a bug in the code that is preventing the function from ever quitting?
   Does the CPU stay at 100% forever or does it eventually drop back down?
4) You should try running a *single* snmpgetnext on the iftable with the
   following options: -r 0 -t 600, which will make snmpgetnext wait
   for a long time and only try once.  See if you ever get a response.
   If you do, rerun the command *immediately* (and hopefully the next
   request will return more quickly if it's cached).
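
On point 3, one way a parse loop is guaranteed to quit is to drive it by
end-of-input rather than by the parse result; a hypothetical sketch against
the /proc/net/dev layout (two header lines, then one `name: counters` line
per interface):

```python
# Hypothetical sketch of a /proc/net/dev-style parse loop that is
# bounded by end-of-input, so it cannot spin forever on a bad line.
SAMPLE = """\
Inter-|   Receive                  |  Transmit
 face |bytes packets errs drop    |bytes packets errs drop
    lo:  1234     10    0    0     1234      10    0    0
  eth0:  5678     99    0    0     4321      88    0    0
"""

def parse_net_dev(text):
    stats = {}
    lines = text.splitlines()[2:]        # skip the two header lines
    for line in lines:                   # bounded by input, not by parsing
        if ":" not in line:
            continue                     # malformed line: skip, don't loop
        name, rest = line.split(":", 1)
        fields = rest.split()
        stats[name.strip()] = [int(f) for f in fields]
    return stats

s = parse_net_dev(SAMPLE)
assert set(s) == {"lo", "eth0"}
assert s["eth0"][0] == 5678              # first RX counter for eth0
```

If the real loop instead re-reads or seeks on a parse failure, a single
malformed entry among 4000 could keep it busy indefinitely, which would match
the CPU never dropping back down.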

If it takes too long to fill the cache because of that many entries, you
may want to increase the caching time for that particular code.
-- 
Wes Hardaker
Cobham Analytic Solutions



Re: net-snmp-5.4.2.1 snmpd cpu usage 100%

2009-11-16 Thread bheemesh v
HI,

Greetings.

Has anybody so far done profiling with gprof, or any equivalent source-code
profiling tool, on net-snmp-5.4.2.1 or any previous version?
So far we are finding it tough to generate the gmon.out file for profiling
data.

What could be the reason?
We have seen possible causes mentioned such as the start directory,
multi-threaded programs being unsuitable for profiling, etc.

Also, so far in this mail chain no one has given any leads; is it because my
question is ambiguous, or do I have to give a bit more background information
regarding our system setup?
Kindly let me know.
Thanks.

BR,
Bheemesh





On Wed, Nov 11, 2009 at 2:45 PM, bheemesh v bheem...@gmail.com wrote:

 Hello,

 Since not many updates have been heard in this case, I want to post some
 more analysis I have done:
 most of the CPU time seems to be taken by the IF-MIB's interface
 container_load.

 Specifically, the macro-level profiling as well as the strace that I posted
 last time have revealed that:

 The netsnmp_arch_interface_container_load() function from
 if-mib/data_access/interface_linux.c has a file pointer to /proc/net/dev,
 from which it tries to get updates for each interface in a while loop; this
 takes too much CPU time when we have many VLAN interfaces defined on our
 server, loading the CPU to 100%.

 In particular, I have a requirement for at least 4000 VLAN interfaces, and
 the above logic needs a re-look for this case.

 I request the experts in the IF-MIB group to look into this. I have tried
 only immature changes, introducing a usleep(100), which is a workaround for
 the time being.
 A better optimization would solve this problem permanently.

 Please update in this respect.

 Thanks.

 BR,
 Bheemesh





 On Thu, Nov 5, 2009 at 1:25 PM, bheemesh v bheem...@gmail.com wrote:

 Hello,

 Greetings.

 Continuing from my question below about net-snmp 5.4.2.1 taking 100% CPU
 usage on mips HW, I found the problem similar to the tracker problem:
 *Inefficient route reading kills snmpd on - ID: 465161*

 Now I see from our debug logs an open() being called for every VLAN/IP
 interface created on the server, and it repeats; it looks like this has
 taken most of the CPU time.

 The snmpd process is in a loop reading from these files and then doing some
 operation. I only grepped for file opens; maybe this continuous looping is
 consuming too much CPU:

 open(/proc/sys/net/ipv4/neigh/ethrtm1_2_v995/retrans_time_ms, O_RDONLY) = 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_2_v996/retrans_time_ms, O_RDONLY)
 = 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_2_v997/retrans_time_ms, O_RDONLY)
 = 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_2_v998/retrans_time_ms, O_RDONLY)
 = 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_2_v999/retrans_time_ms, O_RDONLY)
 = 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_2_v1000/retrans_time_ms, O_RDONLY)
 = 11
 open(/proc/stat, O_RDONLY)= 9
 open(/proc/vmstat, O_RDONLY)  = 9
 open(/proc/net/snmp, O_RDONLY)= 9
 open(/proc/net/dev, O_RDONLY) = 9
 open(/proc/sys/net/ipv4/neigh/lo/retrans_time_ms, O_RDONLY) = 11
 open(/proc/sys/net/ipv4/neigh/eth2/retrans_time_ms, O_RDONLY) = 11
 open(/proc/sys/net/ipv4/neigh/eth0/retrans_time_ms, O_RDONLY) = 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v1/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v2/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v3/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v4/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v5/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v6/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v7/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v8/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v9/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v10/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v11/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v12/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v13/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v14/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v15/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v16/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v17/retrans_time_ms, O_RDONLY) =
 11

 Does this need optimization of the number of read/open operations done on
 the interfaces, as above?

 Also, from the status of the posted problem (*ID: 465161*), it looks closed
 without any solution being discussed; has it already been corrected?

 Please let me know your suggestions in this regard.
 Thanks very much.

 Best Regards,
 Bheemesh





 On Sun, Nov 1, 2009 at 9:59 AM, bheemesh v bheem...@gmail.com wrote:

 Hello,

 As part of observations i wanted to 

Re: net-snmp-5.4.2.1 snmpd cpu usage 100%

2009-11-11 Thread bheemesh v
Hello,

Since not many updates have been heard in this case, I want to post some more
analysis I have done:
most of the CPU time seems to be taken by the IF-MIB's interface
container_load.

Specifically, the macro-level profiling as well as the strace that I posted
last time have revealed that:

The netsnmp_arch_interface_container_load() function from
if-mib/data_access/interface_linux.c has a file pointer to /proc/net/dev,
from which it tries to get updates for each interface in a while loop; this
takes too much CPU time when we have many VLAN interfaces defined on our
server, loading the CPU to 100%.

In particular, I have a requirement for at least 4000 VLAN interfaces, and
the above logic needs a re-look for this case.

I request the experts in the IF-MIB group to look into this. I have tried
only immature changes, introducing a usleep(100), which is a workaround for
the time being.
 A better optimization would solve this problem permanently.

Please update in this respect.

Thanks.

BR,
Bheemesh




On Thu, Nov 5, 2009 at 1:25 PM, bheemesh v bheem...@gmail.com wrote:

 Hello,

 Greetings.

 Continuing from my question below about net-snmp 5.4.2.1 taking 100% CPU
 usage on mips HW, I found the problem similar to the tracker problem:
 *Inefficient route reading kills snmpd on - ID: 465161*

 Now I see from our debug logs an open() being called for every VLAN/IP
 interface created on the server, and it repeats; it looks like this has
 taken most of the CPU time.

 The snmpd process is in a loop reading from these files and then doing some
 operation. I only grepped for file opens; maybe this continuous looping is
 consuming too much CPU:

 open(/proc/sys/net/ipv4/neigh/ethrtm1_2_v995/retrans_time_ms, O_RDONLY) = 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_2_v996/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_2_v997/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_2_v998/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_2_v999/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_2_v1000/retrans_time_ms, O_RDONLY)
 = 11
 open(/proc/stat, O_RDONLY)= 9
 open(/proc/vmstat, O_RDONLY)  = 9
 open(/proc/net/snmp, O_RDONLY)= 9
 open(/proc/net/dev, O_RDONLY) = 9
 open(/proc/sys/net/ipv4/neigh/lo/retrans_time_ms, O_RDONLY) = 11
 open(/proc/sys/net/ipv4/neigh/eth2/retrans_time_ms, O_RDONLY) = 11
 open(/proc/sys/net/ipv4/neigh/eth0/retrans_time_ms, O_RDONLY) = 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v1/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v2/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v3/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v4/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v5/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v6/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v7/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v8/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v9/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v10/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v11/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v12/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v13/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v14/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v15/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v16/retrans_time_ms, O_RDONLY) =
 11
 open(/proc/sys/net/ipv4/neigh/ethrtm1_1_v17/retrans_time_ms, O_RDONLY) =
 11

 Does this need optimization of the number of read/open operations done on
 the interfaces, as above?

 Also, from the status of the posted problem (*ID: 465161*), it looks closed
 without any solution being discussed; has it already been corrected?
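
One illustration of the possible saving: if slow-changing values such as
retrans_time_ms were memoized per path instead of re-opened on every reload,
the open() count drops by a factor of the number of polling cycles
(hypothetical Python sketch, not net-snmp code; a real version would need
invalidation when the interface list changes):

```python
import functools

# Hypothetical sketch: memoize small, rarely-changing proc values
# (like retrans_time_ms) instead of re-opening one file per interface
# on every reload.  Only the open()-count saving is shown here.

open_calls = 0

# Simulated proc tree: 4000 VLAN interfaces, one small file each.
FILES = {f"/proc/sys/net/ipv4/neigh/vlan{i}/retrans_time_ms": "1000"
         for i in range(4000)}

def read_proc_value(path):
    """Stand-in for one open()+read()+close() of a small proc file."""
    global open_calls
    open_calls += 1
    return FILES[path]

@functools.lru_cache(maxsize=None)
def cached_value(path):
    return read_proc_value(path)

for _ in range(5):                       # five polling cycles
    for path in FILES:
        cached_value(path)

assert open_calls == 4000                # opened once each, not 20000
```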

 Please let me know your suggestions in this regard.
 Thanks very much.

 Best Regards,
 Bheemesh





 On Sun, Nov 1, 2009 at 9:59 AM, bheemesh v bheem...@gmail.com wrote:

 Hello,

 As part of my observations, I wanted to add that on the mips HW where a
 large number of VLANs is configured, the IP-MIB has to update the
 ipAddressTable, and this is where we see a CPU load of 100%.

 On the other mips server nodes, where no such large number of VLANs is
 configured, we do not see the CPU load and it shows 0%.

 Is it something to do with the IP-MIB, or with the sub-agent configuration
 for routing?

 Kindly let me know.
 Thanks.

 Best Regards,
 Bheemesh






 On Fri, Oct 30, 2009 at 10:56 PM, bheemesh v bheem...@gmail.com wrote:

 Hello,

 Currently we are using the net-snmp-5.4.2.1 version on our linux server,
 with snmpd running as master agent on x86 environments and as sub-agent on
 both x86 and 

Re: net-snmp-5.4.2.1 snmpd cpu usage 100%

2009-10-31 Thread bheemesh v
Hello,

As part of my observations, I wanted to add that on the mips HW where a large
number of VLANs is configured, the IP-MIB has to update the ipAddressTable,
and this is where we see a CPU load of 100%.

On the other mips server nodes, where no such large number of VLANs is
configured, we do not see the CPU load and it shows 0%.

Is it something to do with the IP-MIB, or with the sub-agent configuration
for routing?

Kindly let me know.
Thanks.

Best Regards,
Bheemesh





On Fri, Oct 30, 2009 at 10:56 PM, bheemesh v bheem...@gmail.com wrote:

 Hello,

 Currently we are using the net-snmp-5.4.2.1 version on our linux server,
 with snmpd running as master agent on x86 environments and as sub-agent on
 both x86 and mips environments.

 Now for the sub-agent on x86, CPU usage looks healthy, but for the mips
 snmpd the CPU usage is at 100% most of the time.

 Has such a scenario been reported in the past on this mailing list (though I
 did not come across any)? Please suggest your inputs.

 The startup of snmpd for the mips sub-agent is as below:
 *snmpd -f -C -c /XX/XXX/etc/SS_NetSNMP/snmpd_subagt.conf -Ls 0 -X -I
 -var_route ipCidrRouteTable inetCidrRouteTable*

 The *snmpd_subagt.conf* has these details:

 *agentxSocket tcp:MasterSnmpAgent:705*

 Please send your inputs, and let me know if any more details are needed
 from my side.

 Thanks in advance.
 Best Regards,
 Bheemesh
