[ 
https://issues.apache.org/jira/browse/DERBY-4119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12689993#action_12689993
 ] 

Larry Hartsook commented on DERBY-4119:
---------------------------------------

Hi Kristian,

We have a lot of trouble reproducing it, too. In fact, we've only seen it at 
customer sites and haven't been able to construct a simple test case. What I've 
noticed is that, when the problem occurs, we are usually able to shut down the 
database, restart it, and run the ALTER TABLE statement successfully. We run 
Derby in embedded mode and often have multiple different databases open 
simultaneously.

Here are the stats you requested:
    page cache size: 6250
    page size: 16384
    JVM max heap: anywhere from 8 GB to 20 GB

The most recent times I've seen the error have been at a customer site. They 
were running Red Hat on a server with, I think, 64 GB of RAM and 4 dual-core 
CPUs. The JVM was configured with a max heap of 20 GB.
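For reference, the shutdown/restart/retry sequence described above can be sketched as an ij session. The database name and the schema/table names below are placeholders, not taken from the actual customer site:

```sql
-- Sketch of the workaround: shut down the database, boot it again, re-run compress.
-- 'myDB', 'APP', and 'MYTABLE' are placeholder names.
CONNECT 'jdbc:derby:myDB;shutdown=true';  -- shutdown signals success via SQLState 08006
CONNECT 'jdbc:derby:myDB';                -- boot the database again
CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'MYTABLE', 1);  -- 1 = sequential, as in the report
```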

> Compress on a large table fails with IllegalArgumentException - Illegal 
> Capacity
> --------------------------------------------------------------------------------
>
>                 Key: DERBY-4119
>                 URL: https://issues.apache.org/jira/browse/DERBY-4119
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 10.5.1.0
>            Reporter: Kristian Waagan
>         Attachments: overflow.diff
>
>
> When compressing a large table, Derby failed with the following exception:
> IllegalArgumentException; Illegal Capacity: -X
> I was able to access the database afterwards, but haven't yet checked if all 
> the data is still available.
> The compress was started with CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('schema', 
> 'table', 1) from ij.
> The data in the table was inserted with 25 concurrent threads. This seems to 
> cause excessive table growth, as the data inserted should weigh in at around 
> 2 GB. The table size after the insert is ten times bigger, 20 GB.
> I have been able to generate the table and do a compress earlier, but then I 
> have been using fewer insert threads.
> I have also been able to successfully compress the table when retrying after 
> the failure occurred (shut down the database, then booted again and 
> compressed).
> I'm trying to reproduce, and will post more information (like the stack 
> trace) later.
> So far my attempts at reproducing have failed. Normally the data is generated 
> and the compress is started without shutting down the database. My attempts 
> thus far have consisted of doing compress on the existing database (where the 
> failure was first seen).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
