[ http://issues.apache.org/jira/browse/DERBY-606?page=comments#action_12440827 ]

Mayuresh Nirhali commented on DERBY-606:
----------------------------------------

I was able to reproduce this error.
I am working with 30 million rows of the schema mentioned in the previous 
comments; the total size of the database is about 12 GB.

I can also reproduce the issue by setting only the truncate bit ('APP', 
'TEST', 0, 0, 1). I am not sure whether this depends on previous runs of the 
defragment operation, since I am working with the same database setup, but 
setting only the truncate bit is a more convenient and faster way to debug 
the scenario.
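
For reference, the truncate-only invocation above can be issued from JDBC as 
a sketch like the following (the database name 'testdb' is an assumption; it 
stands in for the 12 GB setup with the APP.TEST table from the earlier 
comments):

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class TruncateOnlyRepro {
    // The exact call from above: purgeRows=0, defragmentRows=0, truncateEnd=1.
    static final String COMPRESS_CALL =
        "CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'TEST', 0, 0, 1)";

    public static void main(String[] args) throws SQLException {
        // Assumption: an embedded Derby database named 'testdb' already
        // contains the 30-million-row APP.TEST table.
        try (Connection conn = DriverManager.getConnection("jdbc:derby:testdb");
             CallableStatement cs = conn.prepareCall(COMPRESS_CALL)) {
            cs.execute();  // fails with the XJ001 / IOException described below
        }
    }
}
```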

On further investigation, I found that when the allocated extent associated 
with the last allocated page is compressed and all of its pages turn out to 
be free, new_highest_page is set to -1. When the CompressSpaceOperation is 
then logged, CompressedNumber.writeInt is called with the value -1. That 
method is written to throw an exception for values less than zero, hence the 
IOException.
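
To illustrate why a negative value cannot pass through this path, here is a 
simplified sketch of a variable-length int encoding in the spirit of 
CompressedNumber.writeInt (the exact bit layout here is an assumption, not 
Derby's actual implementation): the top bits of the first byte select the 
encoded length, so there is no representation for a negative value.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class CompressedIntSketch {
    // Sketch only: 1 byte for values up to 0x3F, 2 bytes (marker 0b10)
    // up to 0x3FFF, 4 bytes (marker 0b11) up to 0x3FFFFFFF.
    static byte[] writeCompressedInt(int value) throws IOException {
        if (value < 0)
            throw new IOException("negative value cannot be compressed: " + value);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        if (value <= 0x3F) {
            out.write(value);                     // 6 bits in one byte
        } else if (value <= 0x3FFF) {
            out.write(0x80 | (value >> 8));       // 14 bits in two bytes
            out.write(value & 0xFF);
        } else if (value <= 0x3FFFFFFF) {
            out.write(0xC0 | (value >> 24));      // 30 bits in four bytes
            out.write((value >> 16) & 0xFF);
            out.write((value >> 8) & 0xFF);
            out.write(value & 0xFF);
        } else {
            throw new IOException("value too large for this sketch: " + value);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(writeCompressedInt(63).length);     // 1
        System.out.println(writeCompressedInt(16383).length);  // 2
        try {
            writeCompressedInt(-1);  // new_highest_page == -1 hits this path
        } catch (IOException e) {
            System.out.println("IOException: " + e.getMessage());
        }
    }
}
```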

In my opinion, a scenario where new_highest_page is set to -1 is valid, and 
logging such an operation should not fail. However, I am no expert in the 
Store module and would like to know if this assumption is wrong or if I am 
missing something. If I am on the right track, the fix would be to update 
the CompressedNumber.writeInt method to handle valid negative values.
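
One hypothetical way to handle the -1 sentinel without changing the encoder 
itself (purely an illustration of the idea, not a reviewed fix, and the 
method names here are my own invention) would be to shift the value by +1 
before compressing it, so -1 becomes 0 and stays encodable, and to reverse 
the shift when reading it back:

```java
import java.io.IOException;

public class NegativeAwareCompress {
    // Hypothetical workaround (an assumption, not Derby's actual fix):
    // map the -1 sentinel onto the non-negative range before encoding.
    static int toWireValue(int value) throws IOException {
        if (value < -1)
            throw new IOException("only the -1 sentinel is supported: " + value);
        return value + 1;  // -1 -> 0, 0 -> 1, ...
    }

    // Reverse the shift after decoding the compressed number.
    static int fromWireValue(int wire) {
        return wire - 1;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(fromWireValue(toWireValue(-1)));  // -1
    }
}
```

The drawback, of course, is that both the writer and the reader of the log 
record would have to agree on the shift, so adjusting writeInt itself may be 
the cleaner route.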

> SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE fails on (very) large tables
> --------------------------------------------------------------------
>
>                 Key: DERBY-606
>                 URL: http://issues.apache.org/jira/browse/DERBY-606
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 10.1.1.0
>         Environment: Java 1.5.0_04 on Windows Server 2003 Web Edition
>            Reporter: Jeffrey Aguilera
>
> SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE fails with one of the following error 
> messages when applied to a very large table (>2GB):
> Log operation null encounters error writing itself out to the log stream, 
> this could be caused by an errant log operation or internal log buffer full 
> due to excessively large log operation. SQLSTATE: XJ001: Java exception: ': 
> java.io.IOException'.
> or
> The exception 'java.lang.ArrayIndexOutOfBoundsException' was thrown while 
> evaluating an expression. SQLSTATE: XJ001: Java exception: ': 
> java.lang.ArrayIndexOutOfBoundsException'.
> In either case, no entry is written to the console log or to derby.log.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira