Hi,

Greetings.

Has anybody so far profiled net-snmp-5.4.2.1 (or any previous version) with
gprof or an equivalent source-code profiling tool?
We are finding it difficult to get the gmon.out file generated for the
profiling data.

What could be the reason?
The causes we have come across include the working directory the process is
started in and multi-threaded programs being unsuitable for gprof; one
possible workaround for the daemon case is sketched below.
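
For context, one well-known cause with daemons such as snmpd: gprof writes
gmon.out only when the process terminates normally through exit(), and it
writes the file into the current working directory at that moment. A daemon
that is stopped with an uncaught signal never reaches the exit-time hook, so
no gmon.out appears. Here is a minimal sketch of the usual workaround,
assuming the startup code can be patched (the handler below is my
illustration, not existing net-snmp code):

/* Sketch: let a daemon flush its gprof data by converting SIGTERM
 * into a normal exit(), so the profiler's exit-time hook runs and
 * gmon.out is written to the current working directory.
 * Build the whole program with -pg.  This handler is an illustration
 * only, not existing net-snmp code; note that exit() is not strictly
 * async-signal-safe, which is tolerable for a profiling hack but not
 * for production code. */
#include <signal.h>
#include <stdlib.h>

static void exit_on_term(int sig)
{
    (void)sig;                /* unused */
    exit(0);                  /* runs the profiler's exit-time dump */
}

int main(void)
{
    signal(SIGTERM, exit_on_term);
    /* ... daemon main loop ... */
    return 0;
}

With something like this in place, stopping the agent with kill -TERM (rather
than SIGKILL) should leave gmon.out in the directory it was started from,
provided nothing in the startup path changed the working directory.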

Also, so far no one has replied to this thread with any leads. Is that because
my question is ambiguous, or should I give a bit more background about our
system setup?
Kindly let me know.
Thanks.

BR,
Bheemesh





On Wed, Nov 11, 2009 at 2:45 PM, bheemesh v <bheem...@gmail.com> wrote:

> Hello,
>
> Since there have been no updates on this case, I want to post some more
> analysis I have done: most of the CPU time seems to be taken by the
> IF-MIB's interface container_load.
>
> Specifically, the macro-level profiling, together with the strace output I
> posted last time, has revealed the following:
>
> The netsnmp_arch_interface_container_load() function in
> "if-mib/data_access/interface_linux.c" opens a file pointer to
> "/proc/net/dev" and, in a while loop, reads updates for each interface from
> it. This takes too much CPU time when many VLAN interfaces are defined on
> our server, and it loads the CPU to 100%.
>
> In particular, we have a requirement for at least 4000 VLAN interfaces, so
> the above logic needs a re-look for this case.
>
> I request the experts in the IF-MIB group to look into this. I have only
> tried an immature change myself, introducing a usleep(100) in the loop,
> which is a workaround for the time being. A better optimization would solve
> this problem permanently; a rough sketch of one caching idea follows below.
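>
> As a very rough illustration of what I mean by optimization (the function
> and constant names here are mine, not actual net-snmp code), the full
> /proc/net/dev parse could be cached for a short interval instead of being
> redone on every request. net-snmp does ship a generic cache helper
> (netsnmp_cache) that might be applicable here:
>
> /* Sketch: time-based caching around the expensive /proc/net/dev
>  * scan.  CACHE_TTL_SECONDS, load_interfaces() and
>  * maybe_load_interfaces() are hypothetical names for illustration;
>  * this is not the actual interface_linux.c code. */
> #include <stdio.h>
> #include <time.h>
>
> #define CACHE_TTL_SECONDS 30
>
> static time_t last_load;            /* 0 means "never loaded" */
>
> static void load_interfaces(void)   /* one full /proc/net/dev pass */
> {
>     char line[512];
>     FILE *fp = fopen("/proc/net/dev", "r");
>     if (fp == NULL)
>         return;
>     while (fgets(line, sizeof(line), fp) != NULL) {
>         /* parse one per-interface counter line here */
>     }
>     fclose(fp);
> }
>
> void maybe_load_interfaces(void)
> {
>     time_t now = time(NULL);
>     if (last_load != 0 && now - last_load < CACHE_TTL_SECONDS)
>         return;                     /* serve cached data, skip scan */
>     load_interfaces();
>     last_load = now;
> }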
>
> Please update in this respect.
>
> Thanks.
>
> BR,
> Bheemesh
>
>
>
>
>
> On Thu, Nov 5, 2009 at 1:25 PM, bheemesh v <bheem...@gmail.com> wrote:
>
>> Hello,
>>
>> Greetings.
>>
>> Continuing from my question below about net-snmp 5.4.2.1 taking 100% CPU
>> on MIPS hardware, I found the problem to be similar to the tracker item:
>> "*Inefficient route reading kills snmpd on - ID: 465161*"
>>
>> Now I see from our debug logs that an open() is called for every VLAN/IP
>> interface created on the server, and that this repeats; it looks like this
>> accounts for most of the CPU time.
>>
>> The snmpd process is in a loop, reading from these files and then doing
>> some operation (I only grepped for the open calls). Maybe this continuous
>> looping is what is consuming so much CPU:
>>
>> open("/proc/sys/net/ipv4/
>> neigh/ethrtm1_2_v995/retrans_time_ms", O_RDONLY) = 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_2_v996/retrans_time_ms", O_RDONLY)
>> = 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_2_v997/retrans_time_ms", O_RDONLY)
>> = 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_2_v998/retrans_time_ms", O_RDONLY)
>> = 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_2_v999/retrans_time_ms", O_RDONLY)
>> = 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_2_v1000/retrans_time_ms", O_RDONLY)
>> = 11
>> open("/proc/stat", O_RDONLY)            = 9
>> open("/proc/vmstat", O_RDONLY)          = 9
>> open("/proc/net/snmp", O_RDONLY)        = 9
>> open("/proc/net/dev", O_RDONLY)         = 9
>> open("/proc/sys/net/ipv4/neigh/lo/retrans_time_ms", O_RDONLY) = 11
>> open("/proc/sys/net/ipv4/neigh/eth2/retrans_time_ms", O_RDONLY) = 11
>> open("/proc/sys/net/ipv4/neigh/eth0/retrans_time_ms", O_RDONLY) = 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v1/retrans_time_ms", O_RDONLY) =
>> 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v2/retrans_time_ms", O_RDONLY) =
>> 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v3/retrans_time_ms", O_RDONLY) =
>> 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v4/retrans_time_ms", O_RDONLY) =
>> 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v5/retrans_time_ms", O_RDONLY) =
>> 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v6/retrans_time_ms", O_RDONLY) =
>> 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v7/retrans_time_ms", O_RDONLY) =
>> 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v8/retrans_time_ms", O_RDONLY) =
>> 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v9/retrans_time_ms", O_RDONLY) =
>> 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v10/retrans_time_ms", O_RDONLY) =
>> 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v11/retrans_time_ms", O_RDONLY) =
>> 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v12/retrans_time_ms", O_RDONLY) =
>> 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v13/retrans_time_ms", O_RDONLY) =
>> 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v14/retrans_time_ms", O_RDONLY) =
>> 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v15/retrans_time_ms", O_RDONLY) =
>> 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v16/retrans_time_ms", O_RDONLY) =
>> 11
>> open("/proc/sys/net/ipv4/neigh/ethrtm1_1_v17/retrans_time_ms", O_RDONLY) =
>> 11
>>
>> Doesn't this call for optimizing the number of open/read operations done
>> per interface, as shown above? A rough sketch of one approach follows below.
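>>
>> For illustration only (none of the names below are actual net-snmp code),
>> the per-interface values could be read once and cached per refresh cycle,
>> so each retrans_time_ms file is opened at most once per cycle instead of
>> on every pass:
>>
>> /* Sketch: cache the retrans_time_ms value per interface entry so
>>  * the /proc file is opened at most once per refresh cycle.  The
>>  * struct and function names are hypothetical, for illustration;
>>  * this is not actual net-snmp code. */
>> #include <stdio.h>
>>
>> struct neigh_entry {
>>     char ifname[64];
>>     long retrans_time_ms;
>>     int  cached;             /* cleared at the start of each cycle */
>> };
>>
>> static long read_retrans_time(const char *ifname)
>> {
>>     char path[128];
>>     long value = -1;
>>     FILE *fp;
>>
>>     snprintf(path, sizeof(path),
>>              "/proc/sys/net/ipv4/neigh/%s/retrans_time_ms", ifname);
>>     fp = fopen(path, "r");
>>     if (fp != NULL) {
>>         if (fscanf(fp, "%ld", &value) != 1)
>>             value = -1;
>>         fclose(fp);
>>     }
>>     return value;
>> }
>>
>> long get_retrans_time(struct neigh_entry *e)
>> {
>>     if (!e->cached) {
>>         e->retrans_time_ms = read_retrans_time(e->ifname);
>>         e->cached = 1;       /* reuse until the cycle resets it */
>>     }
>>     return e->retrans_time_ms;
>> }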
>>
>> Also, the status of the posted problem (*ID: 465161*) shows it as closed
>> without any solution being discussed; has it already been corrected?
>>
>> Please let me know your suggestions in this regard.
>> Thanks very much.
>>
>> Best Regards,
>> Bheemesh
>>
>>
>>
>>
>>
>> On Sun, Nov 1, 2009 at 9:59 AM, bheemesh v <bheem...@gmail.com> wrote:
>>
>>> Hello,
>>>
>>> As part of my observations, I wanted to add that on the MIPS hardware
>>> where large numbers of VLANs are configured, the IP-MIB has to update the
>>> ipAddressTable, and that is where we see the CPU load of 100%.
>>>
>>> On the other MIPS server nodes, where no such large numbers of VLANs are
>>> configured, we do not see this CPU load; it shows 0%.
>>>
>>> Is it something to do with the IP-MIB, or with the sub-agent
>>> configuration for routing?
>>>
>>> Kindly let me know.
>>> Thanks.
>>>
>>> Best Regards,
>>> Bheemesh
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Fri, Oct 30, 2009 at 10:56 PM, bheemesh v <bheem...@gmail.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> Currently we are using net-snmp-5.4.2.1 on our Linux servers, with snmpd
>>>> running as the master agent on x86 environments and as a sub-agent on both
>>>> x86 and MIPS environments.
>>>>
>>>> For the sub-agent on x86 the CPU usage looks healthy, but for the MIPS
>>>> snmpd the CPU usage sits at 100% most of the time.
>>>>
>>>> Has such a scenario been reported on this mailing list in the past
>>>> (though I did not come across any)? Please share your inputs.
>>>>
>>>> The startup of snmpd for the MIPS sub-agent is as below:
>>>> *snmpd -f -C -c /XX/XXX/etc/SS_NetSNMP/snmpd_subagt.conf -Ls 0 -X -I
>>>> -var_route ipCidrRouteTable inetCidrRouteTable*
>>>>
>>>> The *snmpd_subagt.conf* has these details:
>>>>
>>>> *agentxSocket tcp:MasterSnmpAgent:705*
>>>>
>>>> Please send in your inputs, and let me know if any more details are
>>>> needed from my side.
>>>>
>>>> Thanks in advance.
>>>> Best Regards,
>>>> Bheemesh
>>>>
>>>
>>>
>>
>