On 4/12/07, Bryan Pendleton <[EMAIL PROTECTED]> wrote:

This test seems to be very sensitive to the precise query execution
strategy that is being used, but I don't see how the test is
controlling that query execution strategy.

Can somebody clarify how the test works for me?

Perhaps this comment from the original (pre-JUnit) test should have
been kept as well:

 Not done in ij since we need to do many "next" and "update" to be
 able to exercise the code of creating temp conglomerate for virtual
 memory heap.  We need at minimum
 200 rows in table, if "maxMemoryPerTable" property is set to 1 (KB).
 This includes 100 rows to fill the hash table and another 100 rows
 to fill the in-memory heap.

So the size of the rows in t1 causes the hash table to fill after 100
rows and the in-memory heap after another 100, and then the last few
rows go to the backing store. The test appears to depend on the
in-memory heap reversing the order of the rows that are put into it.
I'm not entirely sure what 'in-memory heap' refers to here; presumably
it is the in-memory portion of a BackingStoreHashtable?
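
For anyone else trying to poke at this, here is a rough, untested
sketch of the kind of loop I believe the test is driving. The table
shape, row count, JDBC URL, and class name below are illustrative only
(not copied from the test); the one real ingredient is the
derby.language.maxMemoryPerTable property, set to 1 KB so the hash
table spills early:

    import java.sql.*;

    public class SpillSketch {  // hypothetical name, not the real test
        public static void main(String[] args) throws Exception {
            // Must be set before the engine boots; 1 KB forces early
            // spilling, per the old test comment.
            System.setProperty("derby.language.maxMemoryPerTable", "1");
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:derby:spillDb;create=true");

            Statement ddl = conn.createStatement();
            ddl.execute(
                "CREATE TABLE t1 (id INT PRIMARY KEY, pad VARCHAR(100))");

            // Per the old comment: ~100 rows fill the hash table, ~100
            // more fill the in-memory heap, and the last few spill to
            // the temp conglomerate on disk.
            PreparedStatement ins =
                    conn.prepareStatement("INSERT INTO t1 VALUES (?, ?)");
            for (int i = 0; i < 210; i++) {
                ins.setInt(1, i);
                ins.setString(2, "row " + i);
                ins.executeUpdate();
            }

            // The many next()/updateRow() calls are what push rows
            // through the BackingStoreHashtable backing the scrollable
            // insensitive cursor.
            Statement s = conn.createStatement(
                    ResultSet.TYPE_SCROLL_INSENSITIVE,
                    ResultSet.CONCUR_UPDATABLE);
            ResultSet rs = s.executeQuery("SELECT id, pad FROM t1");
            while (rs.next()) {
                rs.updateString(2, rs.getString(2) + " touched");
                rs.updateRow();
            }
            rs.close();
            conn.close();
        }
    }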

As Mike pointed out, this test was already failing in the nightlies,
so it may not be your change that is causing the failure. Also
unfortunate is that I can't reproduce the failure on my machine,
although I am working on that. The check for row order was added
because the comments in the previous test made it clear that the row
order was what was intended to be checked.

andrew
