Gunnar Hilling wrote:
> Hello,
> what can I do to prevent "Out of Memory" errors when working on large
> datasets?
> I tried to do "tx.commit(); tx.begin();" after doing part of the (big)
> job, but I didn't succeed. Do I have to empty the cache explicitly?
> What is recommended? I create the objects that cause the error in my
> application, so I thought they should be garbage-collected... or
> aren't they?
>
> Thanks,
> -Gunnar

Hmm, if implicit locking is set to true, all queried objects will be
read-locked. A read lock holds a reference to the object itself, so the
GC can't remove objects that carry a read lock.

Try setting implicit locking to false when you query large datasets. Or
you can use paging from the PB-api (I don't know whether the odmg-api
supports paging).
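
For example, here is a minimal (untested) sketch of switching implicit
locking off via the odmg-api. I'm assuming the TransactionExt extension
interface of the OJB odmg implementation here, and MyLargeObject just
stands in for your own persistent class:

    import java.util.Collection;
    import org.apache.ojb.odmg.OJB;
    import org.apache.ojb.odmg.TransactionExt;
    import org.odmg.Database;
    import org.odmg.Implementation;
    import org.odmg.OQLQuery;

    public class NoImplicitLockingDemo
    {
        public static void main(String[] args) throws Exception
        {
            Implementation odmg = OJB.getInstance();
            Database db = odmg.newDatabase();
            db.open("repository.xml", Database.OPEN_READ_WRITE);

            // cast to OJB's extension interface to reach
            // setImplicitLocking(...); queried objects will then not
            // be read-locked and stay eligible for GC
            TransactionExt tx = (TransactionExt) odmg.newTransaction();
            tx.setImplicitLocking(false);
            tx.begin();

            OQLQuery query = odmg.newOQLQuery();
            query.create("select all from " + MyLargeObject.class.getName());
            Collection results = (Collection) query.execute();
            // process the results; without read locks the GC can
            // reclaim each object as soon as you drop your reference

            tx.commit();
            db.close();
        }
    }

And a sketch of paging with the PB-api, assuming the (1-based)
setStartAtIndex/setEndAtIndex methods of QueryByCriteria:

    import java.util.Collection;
    import org.apache.ojb.broker.PersistenceBroker;
    import org.apache.ojb.broker.PersistenceBrokerFactory;
    import org.apache.ojb.broker.query.Criteria;
    import org.apache.ojb.broker.query.QueryByCriteria;
    import org.apache.ojb.broker.query.QueryFactory;

    public class PagingDemo
    {
        public static void main(String[] args)
        {
            PersistenceBroker broker =
                PersistenceBrokerFactory.defaultPersistenceBroker();
            try
            {
                int pageSize = 500;
                for (int start = 1; ; start += pageSize)
                {
                    // an empty Criteria selects all instances
                    QueryByCriteria query =
                        QueryFactory.newQuery(MyLargeObject.class, new Criteria());
                    query.setStartAtIndex(start);
                    query.setEndAtIndex(start + pageSize - 1);
                    Collection page = broker.getCollectionByQuery(query);
                    if (page.isEmpty())
                    {
                        break;
                    }
                    // process one page, then let it go out of scope so
                    // the GC can reclaim it before the next page loads
                }
            }
            finally
            {
                broker.close();
            }
        }
    }

This way only one page of objects needs to be held in memory at a time.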
regards,
Armin
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
