Jeffrey Clary wrote:
Thanks, Mike. I'll look up how to add something to JIRA and put it in
there.
We've got a change working locally where the creator of the
BackingStoreHashTableFromScan gets the holdability from the Activation
object. I'm afraid of it, though, because I don't know enough to
consider issues like the Derby temporary files you mention below.
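For illustration only, the general shape of that kind of change is
sketched below. The types are stand-ins rather than the real Derby
classes, and the getResultSetHoldability() accessor name is an
assumption, not something verified against the source tree:

    // Sketch only: stand-in types, not the real Derby classes.  The point
    // is that keep_after_commit should be derived from the statement's
    // holdability instead of being hard-coded to false.
    interface Activation {
        // Assumed accessor; the real interface may expose this differently.
        boolean getResultSetHoldability();
    }

    class BackingStoreHashTableFromScanSketch {
        private final boolean keepAfterCommit;

        BackingStoreHashTableFromScanSketch(Activation activation) {
            // Before: super(..., false /* Do not keep the hash table after a commit */);
            // After:  take the flag from the activation's holdability.
            this.keepAfterCommit = activation.getResultSetHoldability();
        }

        boolean keepsRowsAcrossCommit() {
            return keepAfterCommit;
        }
    }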
As far as a workaround goes, I don't think I'm interested in one that
depends on data size. I can't predict how big result sets might be in
my application. I am looking into whether we can get by without having
two statements active against the same connection, thus taking
holdability out of the picture entirely.
I am not sure whether your test case matches your application, but if
your application is multi-threaded in any way, such that two threads
could be using the same connection, I would highly recommend moving
away from that. Even with holdability there is a limited set of
operations you can do after a commit (basically you can only call
next()), and if you aren't controlling when the other thread commits,
the cursor code could still hit spurious errors other than this one.
If you completely control the order of statements, then this is not an issue.
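(Not Derby-specific, but for what it's worth, the pattern I have in
mind is simply one connection per thread, so a commit on one thread can
never invalidate a cursor another thread is reading. The table name
below is a placeholder.)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PerThreadConnections {
        public static void main(String[] args) {
            Runnable reader = () -> {
                // Each thread opens and closes its own connection, so a
                // commit issued on another thread cannot disturb this cursor.
                try (Connection c = DriverManager.getConnection("jdbc:derby:testdb");
                     Statement s = c.createStatement();
                     ResultSet rs = s.executeQuery("SELECT id FROM big_table")) {
                    while (rs.next()) {
                        // process the row
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            };
            new Thread(reader).start();
            new Thread(reader).start();
        }
    }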
-----Original Message-----
From: Mike Matrigali [mailto:[EMAIL PROTECTED]
Sent: Friday, March 16, 2007 12:45 PM
To: [email protected]
Subject: Re: Possible problem in
org.apache.derby.impl.store.access.BackingStoreHashTableFromScan
This definitely looks like a bug, and I think you have the right
analysis. You should report a bug in JIRA with your findings and test
case.
If you are interested in working on it, some things to consider:
1) You would need to get the holdability info all the way down from
execution into the call. I think the interesting place is
java/engine/org/apache/derby/impl/sql/execute/HashScanResultSet.java.
I didn't see holdability right off in this file; maybe someone can
suggest the right way to get this info into this class?
2) You would need to check whether temporary files are going to work
right in the holdability case. It looks like when the actual backing
to disk was added, the holdability case was not considered.
Does anyone know whether Derby temporary files will work correctly if
held open past a commit? Offhand I don't remember how they are
cleaned up - is that currently keyed by commit?
3) Are you interested in a workaround? If the hash table gets created
in memory rather than on disk, then this would probably work. I think
there are some flags to force bigger in-memory hash result sets; see
the sketch below.
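For example, something along these lines; the property name and its
unit (KB) are from memory, so check the Tuning Guide for the release
you are on before relying on it:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class WorkaroundSketch {
        public static void main(String[] args) throws Exception {
            // Must be set before the Derby engine boots.  My recollection is
            // that derby.language.maxMemoryPerTable is the per-table memory
            // budget (in KB) for hash tables; a larger value makes it less
            // likely that the scan's hash table spills to disk.
            System.setProperty("derby.language.maxMemoryPerTable", "10240");
            Connection conn = DriverManager.getConnection("jdbc:derby:testdb");
            // ... run the large query as usual ...
            conn.close();
        }
    }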
Jeffrey Clary wrote:
Folks,
I'm new to Derby and to these lists, so I'm not sure whether what I am
reporting is a bug or expected behavior. You can see an earlier
question I asked on the derby-user list on 3/15/2007 titled "Heap
container closed exception (2 statements on same connection)."
I am not seeing the behavior I would expect after calling
Connection.setHoldability(ResultSet.HOLD_CURSORS_OVER_COMMIT). I have
attached a test program that demonstrates the behavior. Here is an
outline of what happens (with autocommit on); a rough JDBC sketch of
the same sequence follows the list:
1. Execute a statement that returns a fairly large result set.
2. Execute another statement on the same connection that logically
   does not affect the first result set, but that does update the
   database.
3. Iterate through the first result set.
4. After some number of calls to next(), take an exception
   indicating "heap container closed."
I have looked a bit into the Derby source code, and I think that the
issue is related to the
org.apache.derby.impl.store.access.BackingStoreHashTableFromScan
constructor. It passes a constant false value to its super in the
keepAfterCommit argument. In fact, there is a comment there that says
"Do not keep the hash table after a commit." It seems to me that this
value should be based on the holdability attribute of the statement,
as set in the connection or when the statement is created. But knowing
so little about the Derby implementation, I don't have any idea
whether that would trigger some unintended consequence.
Any advice would be appreciated.
Thanks,
Jeff Clary