[
https://issues.apache.org/jira/browse/DERBY-4119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12689415#action_12689415
]
Knut Anders Hatlen commented on DERBY-4119:
-------------------------------------------
One likely suspect is this Vector allocation in
impl.store.access.sort.MergeSort:

    leftovers = new Vector(mergeRuns.size() - maxMergeRuns);
Now, this is protected by "while (mergeRuns.size() > maxMergeRuns)", so one
should assume that the capacity argument is always positive. However, integer
overflow in the subtraction may make it negative if maxMergeRuns is itself
negative. maxMergeRuns should of course never be negative, but I think there
is a possibility that it ends up that way. Its value is taken from
SortBuffer.capacity(), which returns NodeAllocator.maxSize - 1, and
NodeAllocator has this method which changes the value of maxSize:
    /**
     * Expand the node allocator's capacity by certain percent.
     */
    public void grow(int percent)
    {
        if (percent > 0) // cannot shrink
            maxSize = maxSize * (100+percent)/100;
    }
This method is always called with percent=100, and it will suffer from an
integer overflow when the original maxSize is greater than
Integer.MAX_VALUE / 200, roughly 10.7 million, because the intermediate
product (maxSize * 200) then exceeds Integer.MAX_VALUE and wraps to a
negative value. It should be easy to change this calculation so that the
intermediate result cannot overflow, for instance by doing the multiplication
in long arithmetic, as sketched below.
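
As a hypothetical standalone illustration (this is not Derby code, and the
starting value of maxSize is invented), the sketch below shows the int
product wrapping to a negative value, and that java.util.Vector rejects a
negative capacity with exactly the "Illegal Capacity" message reported in
this issue:

    import java.util.Vector;

    public class GrowOverflowDemo
    {
        public static void main(String[] args)
        {
            int maxSize = 16000000; // invented; above Integer.MAX_VALUE / 200
            int percent = 100;

            // Current calculation: the int product wraps before the division.
            int grown = maxSize * (100 + percent) / 100;
            System.out.println(grown); // prints -10949672

            // The same calculation with a long intermediate stays in range.
            long safe = (long) maxSize * (100 + percent) / 100;
            System.out.println(safe); // prints 32000000

            // Passing a wrapped value on as a capacity reproduces the symptom:
            new Vector(grown); // IllegalArgumentException: Illegal Capacity: -10949672
        }
    }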
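
A minimal sketch of such a change (not an actual patch, just one possible
shape for it): do the multiplication in long so the intermediate product
cannot wrap, and clamp the result before narrowing back to int.

    public void grow(int percent)
    {
        if (percent > 0) // cannot shrink
        {
            long grown = (long) maxSize * (100 + percent) / 100;
            maxSize = (int) Math.min(grown, Integer.MAX_VALUE);
        }
    }

With the clamp, repeated grow(100) calls level off at Integer.MAX_VALUE
instead of turning maxSize negative.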
> Compress on a large table fails with IllegalArgumentException - Illegal
> Capacity
> --------------------------------------------------------------------------------
>
> Key: DERBY-4119
> URL: https://issues.apache.org/jira/browse/DERBY-4119
> Project: Derby
> Issue Type: Bug
> Components: Store
> Affects Versions: 10.5.1.0
> Reporter: Kristian Waagan
>
> When compressing a large table, Derby failed with the following exception:
> IllegalArgumentException: Illegal Capacity: -X
> I was able to access the database afterwards, but haven't yet checked if all
> the data is still available.
> The compress was started with CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('schema',
> 'table', 1) from ij.
> The data in the table was inserted with 25 concurrent threads. This seems to
> cause excessive table growth: the data inserted should weigh in at around
> 2 GB, but the table size after the insert is ten times that, 20 GB.
> I have been able to generate the table and compress it earlier, but then I
> was using fewer insert threads.
> I have also been able to successfully compress the table when retrying after
> the failure occurred (shut down the database, then booted again and
> compressed).
> I'm trying to reproduce, and will post more information (like the stack
> trace) later.
> So far my attempts at reproducing have failed. Normally the data is generated
> and the compress is started without shutting down the database. My attempts
> so far have consisted of doing the compress on the existing database (where
> the failure was first seen).