Thanks for your help,
I have even tried to make you a donation  :-)
I think your point of view is a good approach to my tasks (in any
case, in urban traffic slightly out-of-date information is not
critical, since we get an alarm, group state, or detector value every
second). For that reason I have started with
mib2c -S cache=1 -c mib2c.table_data.conf tableName
Unfortunately I get errors. Some of them are easy to fix (comments
with # instead of //, "error: expected ';' before ')' token", etc.),
but I don't know whether there is a newer version of
mib2c.table_data.conf or whether I should repair these errors myself.
In any case, am I using the correct mib2c.*.conf?

-----Original Message-----
From: Dave Shield [mailto:[email protected]]
Sent: Monday, 30 March 2009 17:12
To: Gómez González, Enrique
CC: [email protected]
Subject: Re: I obtain that with make install

2009/3/30 "Gómez González, Enrique" <[email protected]>:
> If I want to give a quick answer to the client, it is better that
> the info stays stored in the MIB. I just have to query a value with
> snmpget, rather than interrogate the underlying subsystem at that
> moment.

Hmmm...   I think you're being a bit over-simplistic.
There is very little in the SNMP world (or computing in general)
where one approach is clearly "better" than another.
It's better(!) to think in terms of advantages and disadvantages
for each approach.

The advantage of having an internal representation of the
underlying data is that it's easily accessible, and hence
requests can be handled quickly.
   The disadvantages are twofold -  the internal representation
will inevitably be somewhat out-of-date, and there's an overhead
incurred in loading this internal representation of the data
(which may never actually be needed).
   And there's a definite trade-off between those last two.
The more often you refresh the local cache, the less out-of-date
the information is, but the greater the overhead.

The advantage of reporting live data is that the results are always
bang up-to-date.
   The disadvantages are that there may be a (hopefully slight) delay
in retrieving this data, and subsequent requests may end up
retrieving inconsistent data (e.g. if a row gets created or deleted
during the course of a walk).



> I implement an Agent to fulfil the NTCIP specification (urban
> traffic). I am looking for a way to implement the NTCIP1201-2004,
> NTCIP1202-2004 and NTCIP8004-A-2004 MIBs. I would happily use
> snmpset to fill in the traffic signs information.

No - that doesn't feel right.   Particularly not if the data is large
and rapidly changing.   Your agent is going to spend the whole time
maintaining the internal database, and have no time left for actually
reporting this information to the people who need to know it!

A much better approach would be to use one of the other table helpers
(I'd personally suggest the tdata helper), and make use of the cache
helper to keep this maintained.
  You'd need a 'cache_load' routine to read the information in from the
underlying subsystem, and another 'cache_free' routine to release this
internal cache.
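  A possible shape for those two hooks, based on the signatures in
<net-snmp/agent/cache_handler.h> and the tdata row functions (the
table name and row contents are placeholders for your NTCIP table,
and the load loop is only sketched in comments):

```c
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <net-snmp/agent/net-snmp-agent-includes.h>

extern netsnmp_tdata *myTable;   /* created in the init routine */

/* cache_load hook: pull the whole dataset in from the
 * underlying subsystem and populate the tdata table. */
int
myTable_load(netsnmp_cache *cache, void *vmagic)
{
    /* For each row reported by the subsystem (placeholder):
     *     netsnmp_tdata_row *row = netsnmp_tdata_create_row();
     *     row->data = entry;            set the index values, then
     *     netsnmp_tdata_add_row(myTable, row);
     */
    return 0;                            /* 0 => success */
}

/* cache_free hook: release the internal copy when the cache
 * expires, so the next request triggers a fresh load. */
void
myTable_free(netsnmp_cache *cache, void *vmagic)
{
    netsnmp_tdata_row *row;
    while ((row = netsnmp_tdata_row_first(myTable)) != NULL) {
        free(row->data);
        netsnmp_tdata_remove_and_delete_row(myTable, row);
    }
}
```

This is a sketch, not a drop-in implementation - compare it against
the code mib2c generates for your table before relying on it.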

  When the agent first receives a request for this MIB, it calls the
cache_load routine to pull the whole dataset in from the underlying
subsystem, and then uses this to answer the query.   That is a little
time-consuming, I admit, but it's only a problem for the first
request.   When the second request arrives, the data is already
available, and the answer can be provided immediately.   If there's
then a lull before the third (or thirty-third) request, the agent
spots that the cache is "too old" to be useful, and calls the
cache_load routine again to retrieve the latest data.

Such an approach is not a panacea, but it's a reasonable compromise -
easy to code, and very flexible.   By changing the timeout on the
cache, you can balance the needs of timeliness against processing
overheads.  (And you can even tweak this dynamically, since there's a
MIB to control the cache!)

The cache can even be refreshed automatically at regular intervals,
so that the data is always "relatively new", without holding up the
request processing to reload the data on demand.   And none of this
will affect the main MIB module handler code - it's all set up in the
initialisation routine.
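  One possible shape for that initialisation routine, wiring the
tdata table and the cache helper together (the table name, OID, index
type and column range are all placeholders; the calls themselves are
from the net-snmp agent API):

```c
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <net-snmp/agent/net-snmp-agent-includes.h>

netsnmp_tdata *myTable;

void
init_myTable(void)
{
    /* placeholder OID - use your table's real OID here */
    static oid myTable_oid[] = { 1, 3, 6, 1, 4, 1 };
    netsnmp_handler_registration    *reg;
    netsnmp_table_registration_info *table_info;
    netsnmp_cache                   *cache;

    myTable = netsnmp_tdata_create_table("myTable", 0);

    reg = netsnmp_create_handler_registration(
              "myTable", myTable_handler,
              myTable_oid, OID_LENGTH(myTable_oid), HANDLER_CAN_RONLY);

    table_info = SNMP_MALLOC_TYPEDEF(netsnmp_table_registration_info);
    netsnmp_table_helper_add_indexes(table_info, ASN_INTEGER, 0);
    table_info->min_column = 1;       /* placeholder column range */
    table_info->max_column = 5;

    netsnmp_tdata_register(reg, myTable, table_info);

    /* 30-second cache, refreshed automatically in the background
     * rather than on demand */
    cache = netsnmp_cache_create(30, myTable_load, myTable_free,
                                 myTable_oid, OID_LENGTH(myTable_oid));
    cache->flags |= NETSNMP_CACHE_AUTO_RELOAD;
    netsnmp_inject_handler(reg, netsnmp_cache_handler_get(cache));
}
```

Dropping the NETSNMP_CACHE_AUTO_RELOAD flag gives the load-on-demand
behaviour described above instead; the mib2c-generated code for your
table is the authoritative template to check this sketch against.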


There are several examples of this technique in the current agent
mibgroup tree.   Have a look at those, and see how things are
currently done.


I admit that this approach could also be used with the data_set
helper, but I'm really not convinced that this is particularly
appropriate.   That helper is not particularly efficient or easy to
use (IMO), and I personally find having a suitable per-row data
structure much easier to work with.
   And one consequence is that I have little or no knowledge of (or
interest in) the data_set helper, so can't really offer advice or
assistance.

Dave
------------------------------------------------------------------------------
_______________________________________________
Net-snmp-coders mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/net-snmp-coders
