[Warning: Typical Shield Ramblings ahead]
Before responding further, I think it's worth taking a step back,
and thinking about exactly what is involved when processing
a request for an object within a MIB table.
There are basically two aspects to this:
a) Locating the appropriate row of the table for this request
b) Retrieving/Updating the relevant column from that row
Remember that the MIB table is simply a standard interface
to some underlying data set - which need not necessarily be
held as a table internally within the system being managed.
For a GET/SET request, finding the appropriate row is usually
fairly straightforward, since you're given the requested index.
So this is typically a simple lookup (assuming the underlying
data uses the same indexing, and can be accessed directly).
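To make that concrete: by the time one of the table helpers hands a
GET request to your handler, the index has already been decoded from
the OID, so the lookup really can be a one-liner. (struct my_entry
and my_find_entry() below are hypothetical application-side names,
purely for illustration.)

    /* The table helper has already parsed the index sub-identifiers */
    netsnmp_table_request_info *table_info =
        netsnmp_table_extract_table_info(request);

    /* Assuming a single INTEGER index ... */
    long idx = *(table_info->indexes->val.integer);

    /* ... the row is then a direct lookup in the underlying data */
    struct my_entry *row = my_find_entry(idx);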
The fun starts with GETNEXT processing, and tables where
the underlying data doesn't allow direct indexing. That's where
the various table helpers come into play.
There are three general techniques for finding the appropriate row:
i) Loop through all the entries, checking each in turn
This is the basis of the 'iterate'-style family of helpers,
and allows the SNMP indexing to differ from that used
in the underlying system.
But it's typically fairly inefficient.
ii) Hold an internal cache of the table.
The indexing of this can be organised to match that
of the MIB table, regardless of the underlying data
organisation, so the helper can identify the appropriate
row directly.
This is much more efficient than the iteration approach,
but typically involves some time-lag in the data being returned.
iii) Use the underlying subsystem directly to identify the
appropriate row.
This is the aim of the 'raw-table' framework, and
can be both efficient and timely, but it does put more
work on the MIB-specific handler and/or the underlying
subsystem.
Most of the MIB helpers adopt one or other of the first two approaches,
since these are essentially independent of the details of the MIB
being implemented. It's difficult to provide a generalised mechanism
for the third approach.
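To make the first approach a little more concrete, here's roughly
what an iterator-style registration looks like. Treat this as a
sketch from memory rather than cut-and-paste code - the OID, the
entry structure, my_entry_list and my_table_handler are all
placeholders - and check it against the iterator examples shipped
with the source:

    #include <net-snmp/net-snmp-config.h>
    #include <net-snmp/net-snmp-includes.h>
    #include <net-snmp/agent/net-snmp-agent-includes.h>

    /* Hypothetical application-side data - purely illustrative */
    struct my_entry {
        long             index;
        long             some_stat;
        struct my_entry *next;
    };
    extern struct my_entry     *my_entry_list;
    extern Netsnmp_Node_Handler my_table_handler;

    /* The iterator helper calls these two hooks to walk the
     * underlying list, checking each entry in turn. */
    static netsnmp_variable_list *
    get_next_data_point(void **loop_ctx, void **data_ctx,
                        netsnmp_variable_list *put_index_data,
                        netsnmp_iterator_info *iinfo)
    {
        struct my_entry *e = (struct my_entry *)*loop_ctx;
        if (!e)
            return NULL;               /* end of the underlying list */
        snmp_set_var_typed_integer(put_index_data, ASN_INTEGER, e->index);
        *data_ctx = e;                 /* row later handed to the handler */
        *loop_ctx = e->next;           /* advance for the next call */
        return put_index_data;
    }

    static netsnmp_variable_list *
    get_first_data_point(void **loop_ctx, void **data_ctx,
                         netsnmp_variable_list *put_index_data,
                         netsnmp_iterator_info *iinfo)
    {
        *loop_ctx = my_entry_list;     /* start of the application's list */
        return get_next_data_point(loop_ctx, data_ctx,
                                   put_index_data, iinfo);
    }

    void
    init_my_table(void)
    {
        static oid my_table_oid[] = { 1, 3, 6, 1, 4, 1, 99999, 1 };
        netsnmp_handler_registration    *reg;
        netsnmp_table_registration_info *table_info;
        netsnmp_iterator_info           *iinfo;

        reg = netsnmp_create_handler_registration(
                  "myTable", my_table_handler,
                  my_table_oid, OID_LENGTH(my_table_oid),
                  HANDLER_CAN_RONLY);

        table_info = SNMP_MALLOC_TYPEDEF(netsnmp_table_registration_info);
        netsnmp_table_helper_add_indexes(table_info, ASN_INTEGER, 0);
        table_info->min_column = 2;
        table_info->max_column = 3;

        iinfo = SNMP_MALLOC_TYPEDEF(netsnmp_iterator_info);
        iinfo->get_first_data_point = get_first_data_point;
        iinfo->get_next_data_point  = get_next_data_point;
        iinfo->table_reginfo        = table_info;

        netsnmp_register_table_iterator(reg, iinfo);
    }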
To this point, we've not really been concerned with the internal details
of a particular row - and all of the MIB table implementations will
essentially use one of these three approaches.
The other difference between helpers relates to how individual
columns are processed within the given row.
Some helpers handle the column objects internally, either by
using a general-purpose data structure ('table_dataset') or by
using automatically generated MIB-specific driving code (MfD,
iterate_access). The effect is that any user-provided code works
with individual column values (e.g. via a series of object-specific
routines), rather than with the row as a whole.
The alternative approach is for the helper to regard the row as a
"black box" data structure, and pass this to the MIB-specific handler
which looks after the column processing. This is the approach taken
by the basic iterate helper, and the table_data/table_tdata helpers.
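With those helpers, the handler code typically ends up looking
something like the fragment below. (struct myTable_entry and
COLUMN_MYSTATUS are whatever mib2c - or you - happen to have
defined; the helper itself never looks inside the structure.)

    /* Inside the MIB-specific handler, for each GET request: */
    struct myTable_entry *entry = (struct myTable_entry *)
        netsnmp_tdata_extract_entry(request);
    netsnmp_table_request_info *table_info =
        netsnmp_table_extract_table_info(request);

    switch (table_info->colnum) {
    case COLUMN_MYSTATUS:
        snmp_set_var_typed_integer(request->requestvb, ASN_INTEGER,
                                   entry->myStatus);
        break;
    /* ... other columns ... */
    }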
Now the code generated by this second set of mib2c templates
works by default with a data structure containing one field for each
column in the table, and assumes that the handler will extract the
relevant field(s) from that row's data structure.
But this is all handled within the MIB-specific handler, and doesn't
affect the common helper processing at all. So it's perfectly possible
to have an "empty-shell" data structure, containing a subset of the
data for that particular row (or none at all), and allow the handler
to query the underlying subsystem directly for certain column values.
That would be my personal approach to the problem as you've
outlined it - hold an internal cache of the table, with "empty" row
data structures (perhaps just containing static information, such
as descriptions or names). Probably using the 'table_tdata'
helper (which is my personal preference for most tables!)
This would allow the agent to know which rows were valid, and
hence take care of the row-selection processing, while still
allowing you to retrieve the actual data from the application being
managed.
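As a sketch of what I mean - all the names here are made up
(myTable_entry, add_static_row, my_app_get_stat and so on are
purely illustrative):

    /* "Empty shell" row structure - just enough static information
     * to identify and describe the row; the volatile statistics stay
     * in the application itself. */
    struct myTable_entry {
        long myIndex;
        char myDescr[64];
    };

    /* Populate the tdata table (at startup, or on a cache reload).
     * 'table' is the netsnmp_tdata created and registered by the
     * usual mib2c-generated initialisation code. */
    static void
    add_static_row(netsnmp_tdata *table, long index, const char *descr)
    {
        netsnmp_tdata_row    *row   = netsnmp_tdata_create_row();
        struct myTable_entry *entry =
            SNMP_MALLOC_TYPEDEF(struct myTable_entry);

        entry->myIndex = index;
        snprintf(entry->myDescr, sizeof(entry->myDescr), "%s", descr);
        row->data = entry;
        netsnmp_tdata_row_add_index(row, ASN_INTEGER,
                                    &entry->myIndex,
                                    sizeof(entry->myIndex));
        netsnmp_tdata_add_row(table, row);
    }

In the handler, the volatile columns would then be filled in by
asking the application directly (e.g. a hypothetical
my_app_get_stat(entry->myIndex)), rather than being read from the
entry structure.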
Such an approach wouldn't work as well if rows are being created
and deleted all the time (e.g. monitoring connections to a web
server, or running processes). But if the rows are relatively static,
and it's just the statistics data that is volatile, then it should prove
a reasonably reliable and straightforward approach.
If I get the chance, I'll look more closely at your other ideas, and
the suggestions that Robert has been making. But that would be
my personal starting point.
As for your other comments:
> However, one thing I forgot to mention is that rows for most tables *can*
> and *will* be added and deleted through SNMP (i.e., most tables are *not*
> read only, and rows can be added as well).
That's not a problem.
All the table helpers can cope with adding/deleting rows.
You'd simply have to ensure that the MIB-specific handler also
took care of adding the new row to the underlying application
(and similarly for removing deleted entries).
What tends to be more of a difficulty is dealing with rows
that appear/disappear due to some external, non-SNMP-driven
mechanism. If everything is controlled via SNMP, then the MIB
module within the agent has full and accurate knowledge of what
exists. If things can change external to this, then there needs to
be some form of re-synchronisation at suitable intervals.
(e.g. by time-limiting the internal cache)
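One convenient way of doing that time-limiting is the standard cache
helper. Something along these lines, where 30 seconds is an arbitrary
choice and the load/free routines are names I've just invented:

    /* Reload the tdata table from the application whenever the
     * cached copy is older than the timeout. */
    static int
    myTable_cache_load(netsnmp_cache *cache, void *magic)
    {
        /* walk the application's own list and (re)populate the tdata
         * table, e.g. via something like add_static_row() above */
        return 0;
    }

    static void
    myTable_cache_free(netsnmp_cache *cache, void *magic)
    {
        /* empty the tdata table again */
    }

    /* When registering the table, inject the cache ahead of the
     * table helper (inside the init routine): */
    netsnmp_inject_handler(reg,
        netsnmp_cache_handler_get(
            netsnmp_cache_create(30, myTable_cache_load,
                                 myTable_cache_free,
                                 my_table_oid,
                                 OID_LENGTH(my_table_oid))));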
>> If the list of rows is reasonably stable, then the simplest approach
>> would probably be to use the 'mib2c.table_data.conf' template,
>> and use this to determine which row of the table is needed for
>> any given request.
> I'm not quite sure I'm following you. You mean, _container_table_handler()
> calling _data_lookup() calling CONTAINER_FIND() (i.e., .find() in my custom
> container?).
I don't think you need worry about the internals of how containers work.
Having registered your table, the helper will know the necessary indexing
and can take care of all this automatically.
The mib2c-generated code includes template routines for adding/deleting
rows from the table. All you need to do is link these in with the underlying
system being managed - so that rows created via SNMP will appear in the
application as well.
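For example, the 'remove' side might end up looking something like
this - the generated routine will differ in detail, and
my_app_delete_entry() is a made-up call into the application (the
corresponding 'create' routine would call into the application in
just the same way):

    static void
    myTable_removeEntry(netsnmp_tdata *table, netsnmp_tdata_row *row)
    {
        struct myTable_entry *entry;

        if (!row)
            return;                     /* nothing to remove */

        entry = (struct myTable_entry *)
                    netsnmp_tdata_remove_and_delete_row(table, row);
        if (entry) {
            my_app_delete_entry(entry->myIndex);   /* application side */
            SNMP_FREE(entry);
        }
    }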
> I'm not even sure I got the difference between mib2c.container.conf and
> mib2c.table_data.conf...
I'd need to check the details, but I believe they are basically very similar.
The main differences concern how visible the internals of the table container
need to be to the MIB-specific handler code.
I won't comment on the locking questions just now.
It might be useful if you could clarify the precise issues here.
What exactly is the locking intended to protect against?
Dave