Mike Matrigali wrote:
>
> I have reported DERBY-5624 to track this issue. I think I understand
> the problem, but would feel much better with a reproducible test case
> I could run. Feel free to add your information to DERBY-5624.
>
In our case, we just have a single table with about 5 million rows of
essentially junk data. I delete some portion of the data older than some
cutoff (half the table, a single day's worth, etc.) and then run compression.
I also tried another table that's about twice as big, but compressing it
required an 8 MB stack size. I've run out of heap space a few times as well,
though I'm still working on reproducing that reliably.
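
For what it's worth, here's a stripped-down JDBC sketch of roughly what I'm
doing. The database name, table name, column layout, and exact row counts are
placeholders, not our real schema; the only part taken verbatim from our setup
is the SYSCS_UTIL.SYSCS_COMPRESS_TABLE call itself:

    import java.sql.*;

    public class CompressRepro {
        public static void main(String[] args) throws Exception {
            // Embedded Derby database; created on first run.
            try (Connection c = DriverManager.getConnection(
                    "jdbc:derby:reproDB;create=true")) {
                c.setAutoCommit(false);
                try (Statement s = c.createStatement()) {
                    s.executeUpdate(
                        "CREATE TABLE JUNK (ID BIGINT PRIMARY KEY, " +
                        "PAYLOAD VARCHAR(200))");
                }
                // Load roughly 5 million rows of junk data in batches.
                try (PreparedStatement ins =
                         c.prepareStatement("INSERT INTO JUNK VALUES (?, ?)")) {
                    for (long i = 0; i < 5_000_000L; i++) {
                        ins.setLong(1, i);
                        ins.setString(2, "junk-" + i);
                        ins.addBatch();
                        if (i % 10_000 == 0) {
                            ins.executeBatch();
                            c.commit();
                        }
                    }
                    ins.executeBatch();
                    c.commit();
                }
                // Delete roughly half the rows, then compress. The compress
                // call is where the stack/heap problems show up for me.
                try (Statement s = c.createStatement()) {
                    s.executeUpdate("DELETE FROM JUNK WHERE ID < 2500000");
                    c.commit();
                }
                c.setAutoCommit(true);
                try (CallableStatement cs = c.prepareCall(
                        "CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'JUNK', 0)")) {
                    cs.execute();
                }
            }
        }
    }

If I can get this (or something close to it) to fail consistently, I'll attach
it to DERBY-5624.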