Send netdisco-users mailing list submissions to
        [email protected]

To subscribe or unsubscribe via the World Wide Web, visit
        https://lists.sourceforge.net/lists/listinfo/netdisco-users
or, via email, send a message with subject or body 'help' to
        [email protected]

You can reach the person managing the list at
        [email protected]

When replying, please edit your Subject line so it is more specific
than "Re: Contents of netdisco-users digest..."

Today's Topics:

   1. Re: Some issues after upgrade (Tobias Gerlach)
   2. Re: Some issues after upgrade (Oliver Gorwits)
   3. Re: Some issues after upgrade (Tobias Gerlach)
   4. Re: Some issues after upgrade (Oliver Gorwits)
   5. Re: Some issues after upgrade (Oliver Gorwits)
   6. Re: Some issues after upgrade (Oliver Gorwits)
--- Begin Message ---
My "~/environments/deployment.yml"

https://nopaste.xyz/?9e37e030151a1828#OzZu6+PFins18JEqzHGjM0t7fzKD5H1ZHksPfBkOYG8=

2017-12-21 11:03 GMT+01:00 Tobias Gerlach <[email protected]>:
> Hello,
>
> yesterday I upgraded my ND2 installation from version 2.36.9 to 2.37.3.
>
> After the update I noticed that the default schedule configuration in
> "~/perl5/lib/perl5/auto/share/dist/App-Netdisco/environments/deployment.yml"
> seems to take precedence over the user defined config in
> "~/environments/deployment.yml".
> I haven't made any changes to my configuration.
>
> Also the polling time increased significantly. Before I was able to
> poll my ~7500 devices within ~45 minutes, now it takes up to 4 times
> longer. I made sure that all 32 polling instances are running.
>
> One question I have is regarding the "snmp_auth" statement. From the
> documentation it seems to be superseded by "device_auth". However in
> "~/perl5/lib/perl5/auto/share/dist/App-Netdisco/config.yml" the
> "snmp_auth" statement is still used. Is it recommended to replace
> "snmp_auth" by "device_auth" in "~/environments/deployment.yml"?
>
> Thanks.



--- End Message ---
--- Begin Message ---
Hi Tobias,

On 2017-12-21 10:03, Tobias Gerlach wrote:
> After the update I noticed that the default schedule configuration in
> "~/perl5/lib/perl5/auto/share/dist/App-Netdisco/environments/deployment.yml"
> seems to take precedence over the user defined config in
> "~/environments/deployment.yml".
> I haven't made any changes to my configuration.

OK I will check on this soon - it seems like a bug and I think I understand what the app is doing. A fix should be simple.

> Also the polling time increased significantly. Before I was able to
> poll my ~7500 devices within ~45 minutes, now it takes up to 4 times
> longer. I made sure that all 32 polling instances are running.

Does that mean you are running 32 instances of netdisco-backend, or 32 workers on one instance of netdisco-backend?

You could try running one instance and see what happens...

> One question I have is regarding the "snmp_auth" statement. From the
> documentation it seems to be superseded by "device_auth". However in
> "~/perl5/lib/perl5/auto/share/dist/App-Netdisco/config.yml" the
> "snmp_auth" statement is still used. Is it recommended to replace
> "snmp_auth" by "device_auth" in "~/environments/deployment.yml"?

Yes, it is recommended to replace snmp_auth with device_auth, but only so that your configuration matches the documentation. There is no operational need to change.
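For reference, a device_auth entry has much the same shape as the old snmp_auth entries. A minimal sketch (the tag and community strings below are placeholders, not values from this thread):

```yaml
device_auth:
  - tag: 'default_v2'     # optional label for this credential set
    community: 'public'   # placeholder SNMPv2 community string
    read: true
    write: false
```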

Sorry for the issues, I hope we can get them fixed soon!

regards,
Oliver.



--- End Message ---
--- Begin Message ---
Hello Oliver,

sorry, I wasn't precise enough. 32 workers are running on one instance
of netdisco-backend.
The relevant configuration part is:

workers:
  tasks: 'AUTO * 4'

Since the server has 8 CPUs installed, the number of workers is as
expected, and the same as before the update. However, the overall
polling time is much longer now.



--- End Message ---
--- Begin Message ---

On 2017-12-21 13:45, Tobias Gerlach wrote:
> sorry, I wasn't precise enough. 32 workers are running on one instance
> of netdisco-backend.

OK, thanks. It may be necessary to run the backend in foreground/CLI mode with "log: debug" to see what it is doing and whether any errors are thrown which are slowing down the jobs.
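Concretely, that would be a one-line setting (a sketch, assuming the home-directory deployment layout mentioned elsewhere in this thread):

```yaml
# ~/environments/deployment.yml
log: 'debug'
```

and then running the backend in the foreground (e.g. with bin/netdisco-backend-fg) to watch the job output.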

Other thoughts come to mind:

1. The "skipped devices" list of SNMP auth failures (undiscoverable devices) is reset when the backend restarts, and if this is a long list, it can occupy many workers while being rebuilt (10 timeouts per device until they are added to the list). Making sure the config discover_only or devices_only is set correctly will help.

2. You could truncate the "admin" table in the database (the job queue) and run a discoverall to let Netdisco start fresh.

3. You could increase snmptimeout to 5000000 (five seconds) from the default of one second.
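As a sketch, points 1 and 3 might look like this in deployment.yml (the subnet here is only an illustrative placeholder):

```yaml
# ~/environments/deployment.yml
discover_only:
  - '192.0.2.0/24'      # placeholder: limit discovery to known ranges
snmptimeout: 5000000    # microseconds, i.e. five seconds (default one second)
```

Point 2 amounts to running something like TRUNCATE admin; against the Netdisco database before submitting a fresh discoverall.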

regards,
oliver.


> The relevant configuration part is:
>
> workers:
>   tasks: 'AUTO * 4'
>
> Since the server has 8 CPUs installed, the number of workers is as
> expected, and the same as before the update. However, the overall
> polling time is much longer now.

------------------------------------------------------------------------------
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
_______________________________________________
Netdisco mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/netdisco-users



--- End Message ---
--- Begin Message ---
Hi Tobias

On 2017-12-21 10:03, Tobias Gerlach wrote:
> After the update I noticed that the default schedule configuration in
> "~/perl5/lib/perl5/auto/share/dist/App-Netdisco/environments/deployment.yml"
> seems to take precedence over the user defined config in
> "~/environments/deployment.yml".
> I haven't made any changes to my configuration.

I took your config and what I can see is that Netdisco is merging your config and its own - this is what we asked it to do :-).

However you do not have any nbtwalk jobs in your own config, so they are added by Netdisco. If that is what you're seeing, sorry! In the short term you can set "nbtstat_no: 'any'" in your config to at least stop the jobs.

I have just pushed a new release (2.037004) to CPAN which allows you to set a config item to "null", for example:

schedule:
  nbtwalk: null

So you could add this and then Netdisco will ignore all the nbtwalk schedule.
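Given that Netdisco merges your config with its own, the relevant stanza in deployment.yml might then look something like this (the discoverall timing is only an illustrative placeholder):

```yaml
schedule:
  discoverall:
    when: '5 9 * * *'   # placeholder cron-style timing
  nbtwalk: null         # 2.037004+: suppress the default nbtwalk jobs
```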

regards,
oliver.



> Also the polling time increased significantly. Before I was able to
> poll my ~7500 devices within ~45 minutes, now it takes up to 4 times
> longer. I made sure that all 32 polling instances are running.
>
> One question I have is regarding the "snmp_auth" statement. From the
> documentation it seems to be superseded by "device_auth". However in
> "~/perl5/lib/perl5/auto/share/dist/App-Netdisco/config.yml" the
> "snmp_auth" statement is still used. Is it recommended to replace
> "snmp_auth" by "device_auth" in "~/environments/deployment.yml"?
>
> Thanks.




--- End Message ---
--- Begin Message ---


On 2017-12-21 13:45, Tobias Gerlach wrote:
> Hello Oliver,
>
> sorry, I wasn't precise enough. 32 workers are running on one instance
> of netdisco-backend.
> The relevant configuration part is:
>
> workers:
>   tasks: 'AUTO * 4'

Further to my previous email, something like:

ND2_SINGLE_WORKER=1 DBIC_TRACE=1 bin/netdisco-backend-fg

may allow you to see what's going on (possibly omit the DBIC_TRACE).

regards,
Oliver.


> Since the server has 8 CPUs installed, the number of workers is as
> expected, and the same as before the update. However, the overall
> polling time is much longer now.




--- End Message ---
------------------------------------------------------------------------------
_______________________________________________
Netdisco mailing list - Digest Mode
[email protected]
https://lists.sourceforge.net/lists/listinfo/netdisco-users
