[Oops - forgot to add back the Cc: to the mailing list!
Please - always keep discussions on the list, not
privately to me directly.
Thanks]
On 1 February 2012 09:20, Dave Shield <[email protected]> wrote:
> On 31 January 2012 19:55, Francois Bouchard <[email protected]> wrote:
>>
>>> How large is the table, and how quickly does the data change?
>> It can change as often as every 5 seconds. The table is 9 columns wide
>> and can have, let's say, from zero to 100+ rows.
>
>
>> .... I'll check this out - maybe cache the text file periodically,
>> and update the table from that file when a request occurs.
>
> It certainly sounds as if caching the table for, say, 3-5 seconds
> ought to work. The first request in walking the table would
> trigger the code to read in the data file, and populate the internal
> cached version. Subsequent requests would then use this cached
> data directly.
>
> The first request of a subsequent walk would typically fail the
> "data is too old" checks, and trigger a reload of the data file.
>
> The main issues here would be if walking the table took longer
> than the cache timeout value (and hence the file would be reloaded
> part way through), or if you issued two walks in rapid succession,
> in which case the second one would continue to use the cached data.
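[Editor's note: the caching pattern described above can be sketched in a few lines of plain C. This is an illustration only, independent of Net-SNMP's own cache helper; the struct, function names, and the simplified one-field "table" are all invented for the example, and the current time is passed in explicitly to make the staleness check easy to follow.]

```c
#include <time.h>

#define CACHE_TIMEOUT 5  /* seconds, matching the data's update pace */

struct table_cache {
    time_t loaded_at;   /* 0 = never loaded */
    int    nrows;       /* stand-in for the cached table contents */
};

/* Hypothetical loader: in the real agent this would parse the text file.
 * Here it just bumps nrows so reloads are observable. */
static void load_table(struct table_cache *c, time_t now)
{
    c->nrows++;
    c->loaded_at = now;
}

/* Return the cached table, reloading it first if it is too old. */
static struct table_cache *cache_get(struct table_cache *c, time_t now)
{
    if (c->loaded_at == 0 || now - c->loaded_at >= CACHE_TIMEOUT)
        load_table(c, now);   /* first request of a walk triggers a reload */
    return c;                 /* later requests reuse the cached copy */
}
```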
>
>
>
>> For now we're using the "mib2c.create-dataset.conf" script to generate
>> the read-only table's code.
>> The template it generates contains one initialization function and a
>> single handler routine for operations on the table.
>> And mib2c.conf gives you the option of using a cache.
>
> The dataset framework includes cache support, yes - but I'm not
> convinced it's the best choice for the scenario you've outlined.
>
> Both dataset and table_data frameworks hold an internal representation
> of the table, used for processing incoming requests. But in the case of
> the table_data (and table_tdata) helpers, the generated code defines a
> MIB-specific data structure to hold the contents of a given row, while
> the dataset helper uses a general-purpose mechanism to represent the
> contents of a row (essentially a linked list of varbinds, I believe).
>
> The handler for a table_data-style implementation would be passed the
> row structure for the current request, together with information about
> which column object(s) were being asked for - and would then extract
> the corresponding fields directly from that data structure.
>
> The dataset framework would need to search the list of column values
> in order to return the appropriate one(s). This is clearly going to be
> slower than going directly to the required value.
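[Editor's note: the difference between the two row representations can be sketched in a few lines of C. This is a simplified illustration of the access patterns, not the actual Net-SNMP structures; all names here are invented for the example.]

```c
#include <stddef.h>

/* table_data style: a MIB-specific row struct; each column is a field. */
struct row {
    long col2_value;
    char col3_name[32];
};

static long row_get_col2(const struct row *r)
{
    return r->col2_value;            /* direct field access: O(1) */
}

/* dataset style: a generic linked list of (column, value) pairs. */
struct varbind {
    int   column;
    long  value;
    struct varbind *next;
};

static long varbind_get(const struct varbind *vb, int column)
{
    for (; vb; vb = vb->next)        /* linear search: O(columns) */
        if (vb->column == column)
            return vb->value;
    return -1;                       /* column not present in this row */
}
```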
>
> The advantage of the dataset framework is that this processing is
> completely automatic, and doesn't require any MIB-specific code to
> identify these values. So typically there wouldn't need to be a MIB
> handler routine at all - everything would be handled by the dataset
> helper internally.
>
> So that's the tradeoff to bear in mind: the need for additional code
> (table_data) vs. slightly slower performance (dataset).
>
> Dave
_______________________________________________
Net-snmp-users mailing list
[email protected]
Please see the following page to unsubscribe or change other options:
https://lists.sourceforge.net/lists/listinfo/net-snmp-users