On 03.11.2020 at 06:44, Tanvi Shah wrote:
> Hi,
> I think this fetch size limit should also be configurable through the
> Jackrabbit API. I think the Jackrabbit team should take on the task of
> making it configurable.

Ideally, things should work without configuration.

That said, this is open source. You have the source. You can very easily
modify the code to see whether setting the fetch limit actually helps in
your case.
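For reference, such a change usually boils down to calling `setFetchSize` on the JDBC statement that scans the datastore records. The sketch below is hypothetical, not Jackrabbit's actual code: the connection URL, table name, and column name are all assumptions, and whether the hint reduces memory use depends on the JDBC driver.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FetchSizeSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical JDBC URL passed on the command line;
        // substitute the URL of your persistence database.
        try (Connection con = DriverManager.getConnection(args[0]);
             PreparedStatement ps = con.prepareStatement(
                     // Hypothetical table/column; Jackrabbit's actual
                     // schema differs.
                     "SELECT ID FROM DATASTORE")) {

            // Hint to the driver to stream rows in batches instead of
            // materializing the whole result set in memory.
            // (On MySQL Connector/J, streaming additionally requires a
            // fetch size of Integer.MIN_VALUE.)
            ps.setFetchSize(1000);

            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    process(rs.getString(1));
                }
            }
        }
    }

    private static void process(String id) {
        // Placeholder for the per-record scan logic.
    }
}
```

Note that `setFetchSize` is only a hint; some drivers ignore it or require extra connection properties before they stop buffering the entire result set.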

Alternatively (or additionally), it would be good to have a test case
that demonstrates the problem and can be used to verify that a change
actually helps.

> Also, I need to understand: is there another provision through which S3
> garbage collection could be initiated for such a huge database?

I don't think there's a way to get it running before the OOM in the
scan phase is fixed.

Best regards, Julian
