1) Let's say the first request made is an "snmpwalk" through Table2. Are you
saying that Table1's container_load() procedure has never been called yet so
none of the index values have been loaded and none of the rows have been
allocated and placed into the Table1 container yet? If this is true, then how
does an "snmpwalk" through Table2 work since no rows have been allocated to
store the data yet?
I am getting way confused. What actually happens when an "snmpwalk" of Table2
occurs first? Does the container_load() procedure of Table2 get called first?
If so, I am confused, since I thought Table2's container_load() would not be
reserving index values or performing any row allocation (basically not doing
much of anything), because I thought all of this would be handled in Table1's
container_load() procedure only.
What exactly should be placed into the container_load() procedure of Table2?
If you indicate I should be reserving index values and/or allocating rows for
the container, then I am confused, since I thought all of this would be done
in the Table1 functionality.
Actually, by now you understand I have a main Table1 (with 4 fields) and a
Table2 (with 8 fields) which is an extension to Table1. Perhaps you can
describe to me what functionality should be placed into the
Table1_container_load(), Table1_row_prep(), Table2_container_load() and
Table2_row_prep() routines? I think my problem is that I understand what the
container_load and row_prep routines should do, but have no idea how to use
them correctly when one of my tables is an extension of another table.
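To make the question concrete, here is roughly what my standalone Table1 load
looks like today. my_backend_count()/my_backend_get_entry() are placeholders
for my own data source, and the Table1_* helper names are my recollection of
what mib2c generated for me, so please read this as a sketch rather than
working code:

    #include <net-snmp/net-snmp-config.h>
    #include <net-snmp/net-snmp-includes.h>
    #include <net-snmp/agent/net-snmp-agent-includes.h>
    #include "Table1.h"              /* the mib2c-generated header */

    int
    Table1_container_load(netsnmp_container *container)
    {
        Table1_rowreq_ctx *rowreq_ctx;
        int                i, count = my_backend_count();

        for (i = 0; i < count; ++i) {
            my_entry_t *entry = my_backend_get_entry(i);

            /* allocate a row and set its index */
            rowreq_ctx = Table1_allocate_rowreq_ctx();
            if (NULL == rowreq_ctx)
                return MFD_RESOURCE_UNAVAILABLE;
            Table1_indexes_set(rowreq_ctx, entry->index);

            /* fill in the 4 Table1 columns (field names made up) */
            rowreq_ctx->data.t1_col1 = entry->col1;
            /* ... t1_col2 .. t1_col4 ... */

            CONTAINER_INSERT(container, rowreq_ctx);
        }
        return MFD_SUCCESS;
    }

It is the Table2 side of this that I cannot picture.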
2) You state, "Luckily, there is a fairly easy way around that, which is sharing
a single cache as well. See the function _ifXTable_container_init to see how it
looks up the ifTable cache to share a container."
I looked at this code and can see how one table's cache can be connected to
another table's container. Perhaps you can explain to me why I should be doing
this, as I am not clear why you are recommending this code to me in the first
place.
3) You stated "Now, the problem is that when you are loading the container, you
will have no idea which of the 6 tables triggered the load."
Well, this seems bad, since I will not know what I am really loading. Are you
indicating I should not be implementing it this way, by adding all fields into
the data_context of Table1? Is this your way of telling me that I will run
into too many problems combining all fields into the data_context of Table1,
and that I should use sub-containers after all?
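For reference, the combined data_context I have in mind looks roughly like
this; every name below is made up, and the union is what I meant about
conserving space, since any one row belongs to only one of the secondary
tables:

    typedef enum {
        ROW_IS_TABLE2, ROW_IS_TABLE3, ROW_IS_TABLE4,
        ROW_IS_TABLE5, ROW_IS_TABLE6
    } row_kind_t;

    typedef struct table1_data_s {
        /* the 4 Table1 columns */
        long        t1_col1;
        long        t1_col2;
        char        t1_col3[32];
        long        t1_col4;

        /* which secondary table this row belongs to */
        row_kind_t  kind;

        /* the secondary tables are mutually exclusive, so overlay them */
        union {
            struct { long a1; long a2; /* ... 8 Table2 fields ... */ } t2;
            struct { long b1; long b2; /* ... Table3 fields ...   */ } t3;
            /* ... t4, t5, t6 ... */
        } ext;
    } table1_data;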
4) I wish you were local so I could stop by and have you simply draw some ideas
on paper as to what you are recommending. It is challenging for me to grasp how
to do all of this, especially with no one here locally to bounce ideas around
with. Again, thank you for your time and patience.
Robert Story <[EMAIL PROTECTED]> wrote: On Fri, 1 Jun 2007 12:54:20 -0700 (PDT)
Need wrote:
NH> So Robert ..... instead of me implementing sub-containers, let's say I
NH> implement your other suggestion of adding all the fields defined in
NH> Table2,3,4,5,6 into the "data_context" structure for Table1. I believe I
NH> will be able to use a "union" to define these new fields within the
NH> structure to conserve space, since the tables are mutually exclusive.
Ok.. there is one issue with sharing a single container, but you may not
care. The problem is that when walking the secondary tables, you will have to
add code to each to make sure the current row index applies to the table in
question, and then tell the agent to skip the row if it does not. So, if you
have 1000 rows in Table1, of which only 10 are in Table2, there will be 990
wasted calls to Table2.
Also note that there currently is not any generated code to handle skipping an
inappropriate row, as the assumption is that the container only has appropriate
rows. But it should be fairly trivial to fix that.
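[My reading of the above, as a sketch: the check would live in something like
Table2's row_prep, using the kind field from my combined data_context, and the
"skip" return below is only a placeholder since, as you say, there is no
generated code for skipping yet:

    int
    Table2_row_prep(Table2_rowreq_ctx *rowreq_ctx)
    {
        /* assuming the combined data_context is embedded in the rowreq_ctx */
        table1_data *data = &rowreq_ctx->data;

        if (data->kind != ROW_IS_TABLE2) {
            /* row belongs to one of the other tables; the agent would need
             * to be told to skip it -- placeholder, not a real MFD code */
            return MFD_ERROR;
        }
        return MFD_SUCCESS;
    }
]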
NH> I assume the "container_load()" procedure of Table1 (main table) will be
NH> called first
That is an invalid assumption. By default, the container_load function is
called when a request comes in for a table. So it is possible that the first
query the agent gets will be for Table2, and not Table1. Luckily, there is a
fairly easy way around that, which is sharing a single cache as well. See the
function _ifXTable_container_init to see how it looks up the ifTable cache to
share a container.
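[What I took away from _ifXTable_container_init, adapted to my table names:
Table2's interface init looks up Table1's cache instead of creating its own,
and reuses its container. Whether the container really lives in cache->magic
is my assumption from reading the generated code, so please correct me if that
part is wrong:

    static void
    _Table2_container_init_shared(netsnmp_cache **cache_pp,
                                  netsnmp_container **container_pp)
    {
        /* Table1's registration OID -- placeholder values */
        static oid Table1_oid[] = { 1, 3, 6, 1, 4, 1, 99999, 1 };

        *cache_pp = netsnmp_cache_find_by_oid(Table1_oid,
                                              OID_LENGTH(Table1_oid));
        if (NULL != *cache_pp) {
            /* reuse Table1's container as well, so both tables always see
             * the same rows no matter which table is queried first */
            *container_pp = (netsnmp_container *) (*cache_pp)->magic;
        }
    }
]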
Now, the problem is that when you are loading the container, you will have no
idea which of the 6 tables triggered the load.
NH> When the "container_load()" procedure for Table1 is called, I would like
NH> to populate all of Table1's fields at this time; however, must I populate
NH> all fields defined in Table2,3,4,5,6 at this time as well?
NH>
NH> If I do not populate the fields from Table2,3,4,5,6 at this time, then how
NH> will I be able to populate the fields later? Will the "container_load()"
NH> procedure for Table2,3,4,5,6 be called eventually, and should I populate
NH> those fields at that time instead? If so, how will I know which table index
NH> (i.e. row index) from Table1 the data should be inserted into at that time?
No, as I said before, you can use the row_prep function to delay the data
lookup until it is needed.
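[So, if I follow, Table2_row_prep ends up doing both the "does this row belong
to Table2" check from earlier and the per-row fetch; my_fetch_table2_fields()
and the index name are placeholders for my own back end:

    int
    Table2_row_prep(Table2_rowreq_ctx *rowreq_ctx)
    {
        table1_data *data = &rowreq_ctx->data;

        if (data->kind != ROW_IS_TABLE2)
            return MFD_ERROR;           /* "skip" placeholder, as above */

        /* fetch the 8 Table2 columns for just this one row, keyed by its
         * index, instead of loading them all up front in container_load */
        if (my_fetch_table2_fields(rowreq_ctx->tbl_idx.table1Index,
                                   &data->ext.t2) != 0)
            return MFD_ERROR;

        return MFD_SUCCESS;
    }
]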
NH> Also, just curious .... when the cache expires, I assume the
NH> "container_load()" procedure is called again to regenerate the complete
NH> table from scratch, as it did originally ... is this correct? I assume new
NH> rows will be allocated again.
By default, the container is flushed and must be reloaded from scratch. You
can set some cache attributes to prevent that, but then you must be careful
to correctly add new entries and remove old entries. See the ifTable
implementation for an example of one way to do this.
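[For my own notes, I believe these are the cache attributes meant here, set on
the cache in the generated *_container_init; with them set, the load routine
has to add and remove entries itself, the way ifTable does:

    /* somewhere after the cache has been created, e.g. in
     * _Table1_container_init (cache is the netsnmp_cache *) */
    cache->flags |= NETSNMP_CACHE_DONT_FREE_EXPIRED |
                    NETSNMP_CACHE_DONT_FREE_BEFORE_LOAD |
                    NETSNMP_CACHE_DONT_AUTO_RELEASE |
                    NETSNMP_CACHE_AUTO_RELOAD;
]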