[
https://issues.apache.org/jira/browse/DERBY-4119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12694033#action_12694033
]
Knut Anders Hatlen commented on DERBY-4119:
-------------------------------------------
Thanks for looking at the patch.
I think you're right that the float->int cast in grow() in the first patch did
the right thing, so the extra changes to that method in the second patch should
not change the result in any way.
Good point that some JVMs don't allow vectors/arrays whose size is close to
Integer.MAX_VALUE. Instead of guessing the largest supported array size,
perhaps we could catch OutOfMemoryError and stop growing if that happens. The
old code did this:
    // Attempt to allocate a new array. If the allocation
    // fails, tell the caller that there are no more
    // nodes available.
    Node[] newArray = new Node[array.length * GROWTH_MULTIPLIER];
    if (newArray == null)
        return null;
The check for newArray == null never made any sense, since an array allocation
is not allowed to return null; if it fails, the JVM throws an OutOfMemoryError
instead. My guess is that catching an OutOfMemoryError here, and returning
null, would do what was originally intended. It would also solve the problem
with JVMs that don't support such large arrays, as an OutOfMemoryError seems
to be what's thrown in that situation.
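As a minimal sketch of that idea (the Node type, the GROWTH_MULTIPLIER value,
and the shape of grow() are assumptions based on the snippet above, not the
actual Derby source), the fix could look something like:

    // Hypothetical sketch only, not the actual Derby code.
    class Node {}

    class NodeAllocator {
        private static final int GROWTH_MULTIPLIER = 2; // assumed value
        private Node[] array = new Node[1024];           // assumed initial size

        /**
         * Grow the node array. Returns the new array, or null to tell the
         * caller that there are no more nodes available.
         */
        private Node[] grow() {
            try {
                // Some JVMs reject array sizes close to Integer.MAX_VALUE by
                // throwing an OutOfMemoryError, so this catch also covers the
                // case where the requested array is simply too large.
                Node[] newArray = new Node[array.length * GROWTH_MULTIPLIER];
                System.arraycopy(array, 0, newArray, 0, array.length);
                array = newArray;
                return newArray;
            } catch (OutOfMemoryError oome) {
                // Allocation failed; signal that no more nodes are available.
                return null;
            }
        }
    }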
> Compress on a large table fails with IllegalArgumentException - Illegal
> Capacity
> --------------------------------------------------------------------------------
>
> Key: DERBY-4119
> URL: https://issues.apache.org/jira/browse/DERBY-4119
> Project: Derby
> Issue Type: Bug
> Components: Store
> Affects Versions: 10.5.1.0
> Reporter: Kristian Waagan
> Assignee: Knut Anders Hatlen
> Attachments: overflow.diff, overflow2.diff
>
>
> When compressing a large table, Derby failed with the following exception:
> IllegalArgumentException; Illegal Capacity: -X
> I was able to access the database afterwards, but haven't yet checked if all
> the data is still available.
> The compress was started with CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('schema',
> 'table', 1) from ij.
> The data in the table was inserted with 25 concurrent threads. This seems to
> cause excessive table growth, as the data inserted should weigh in at around
> 2 GB, while the table size after the insert is ten times that, 20 GB.
> I have been able to generate the table and do a compress earlier, but then I
> used fewer insert threads.
> I have also been able to successfully compress the table when retrying after
> the failure occurred (shut down the database, then booted again and
> compressed).
> I'm trying to reproduce, and will post more information (like the stack
> trace) later.
> So far my attempts at reproducing the failure have been unsuccessful.
> Normally the data is generated and the compress is started without shutting
> down the database. My attempts so far have consisted of doing the compress
> on the existing database (where the failure was first seen).
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.