On Fri, 02 Jan 2004 16:21:06 +0100, Armin Waibel wrote:

> Hi,
> 
> Gunnar Hilling wrote:
> 
>> Hello,
>> 
>> what can I do to prevent "Out of Memory"-Errors when working on large
>> datasets?
>> 
>> I tried to do "tx.commit(); tx.begin();" after doing part of the (big)
>> job, but I didn't succeed.
>> Do I have to empty the Cache explicitly?
>> What is recommended?
>> I create the objects that cause the error in my application, so I thought
>> they should be garbage-collected... or aren't they?
>> 
> hmm, if implicit locking is set to true, all queried objects will be
> read-locked. A read lock holds a reference to the object itself, so the
> GC can't collect read-locked objects.
> Try setting implicit locking to false when querying large datasets. Or
> you can try to use paging from the PB-api (I don't know whether the
> odmg-api supports paging).
> 
> 
Using the PB-api doesn't help.
I tried setting implicit locking to false, but I get errors about
"dlist_id null values" (see the thread "strange error" from last night).
I think that when updating values I would have to write-lock some objects
myself, but how can I tell which ones?
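For the archives: with implicit locking off, explicit write-locking via the standard ODMG `Transaction.lock` call would look roughly like this. A sketch only, untested; `obj` stands for whichever persistent object you are about to modify.

```java
// Sketch only, untested -- standard ODMG 3.0 locking call.
Transaction tx = odmg.currentTransaction();
// Acquire a write lock on each object you intend to update,
// before (or while) changing its fields:
tx.lock(obj, Transaction.WRITE);
obj.setSomeField(newValue); // placeholder setter
tx.commit(); // changes to write-locked objects are flushed here
```
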

thanks,

-Gunnar



---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
