[ http://issues.apache.org/jira/browse/DERBY-1661?page=all ]

Mike Matrigali updated DERBY-1661:
----------------------------------


I have reviewed and committed this change.  I think this is a reasonable 
first step toward addressing this issue; it should avoid most errors when 
doing large index builds on default Linux systems.  I hope that 
in subsequent releases the additional changes Sunitha proposes in her notes can 
be looked at.  

Upping the default 1 MB in-memory portion of the sort should also be looked 
at; this is incredibly small for a large number of the systems we are 
running on.  

Also, putting each sort merge run in a separate file was the easiest approach 
at the time, but it does use up a lot of 
filesystem resources.  In any case, the entire Derby server should maintain 
some control on open files; 
currently the sort module can eat up 512 open files per connection in the 
worst case.  Some other solutions 
would be to tie the open files into something like the open file cache in the 
store, or maybe just to store all 
the sort runs in a single file - I think they are written sequentially, so it 
may not be hard to do using the existing 
streamed file support in the store.
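To make the single-file idea concrete, here is a minimal sketch (not Derby's actual ExternalSortFactory code; the class name and layout are hypothetical): each sorted run is appended sequentially to one file, and only the starting offset of each run is remembered, so the number of open file handles stays constant regardless of how many runs the sort produces.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: append every sort run to a single temp file and
// record each run's starting offset, instead of one temp file per run.
public class SingleFileRuns {
    private final RandomAccessFile file;
    private final List<Long> runOffsets = new ArrayList<>();

    public SingleFileRuns(File f) throws IOException {
        this.file = new RandomAccessFile(f, "rw");
    }

    // Append one sorted run sequentially; remember where it begins.
    public void writeRun(int[] sortedKeys) throws IOException {
        runOffsets.add(file.length());
        file.seek(file.length());
        file.writeInt(sortedKeys.length);   // run header: key count
        for (int k : sortedKeys) {
            file.writeInt(k);
        }
    }

    // Read run i back by seeking to its recorded offset.
    public int[] readRun(int i) throws IOException {
        file.seek(runOffsets.get(i));
        int n = file.readInt();
        int[] keys = new int[n];
        for (int j = 0; j < n; j++) {
            keys[j] = file.readInt();
        }
        return keys;
    }

    public int runCount() {
        return runOffsets.size();
    }

    public void close() throws IOException {
        file.close();
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("sortruns", ".tmp");
        tmp.deleteOnExit();
        SingleFileRuns runs = new SingleFileRuns(tmp);
        runs.writeRun(new int[]{1, 3, 5});
        runs.writeRun(new int[]{2, 4, 6});
        System.out.println(Arrays.toString(runs.readRun(1))); // [2, 4, 6]
        runs.close();
    }
}
```

A real merge pass would read all runs through one handle (or a small fixed pool) rather than 512 separate descriptors, which is the resource being exhausted in this bug.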

m1_142:134>svn commit

Sending        
java\engine\org\apache\derby\impl\store\access\sort\ExternalSortFactory.java
Transmitting file data .
Committed revision 430578.

> Create index on large tables fails with too many open files, 
> FileNotFoundException.
> ------------------------------------------------------------------------------------
>
>                 Key: DERBY-1661
>                 URL: http://issues.apache.org/jira/browse/DERBY-1661
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.2.0.0, 10.1.2.1, 
> 10.1.3.0, 10.1.3.1
>            Reporter: Sunitha Kambhampati
>         Assigned To: Sunitha Kambhampati
>             Fix For: 10.2.0.0
>
>         Attachments: 1661_Notes.txt, 1661_PerfResults.xls, derby1661.diff.txt
>
>
> Create index fails on a table with 18 million rows during the sort with a 
> "too many open files" error.
> This error was first seen when running against a tpcc-like test.  The test 
> creates the tables, inserts data and then creates the indexes & adds 
> constraints.
> Customer table has 18 million rows in this case. The below error is thrown on 
> create index. 
>  
> ij> create index customer_last on customer(c_w_id, c_d_id, c_last);
> ERROR XSDF1: Exception during creation of file 
> /home/cloudtst/SinglePerf/testruns/scripts/dbtpcc/tmp/T1128794811044.tmp for 
> container
> ERROR XJ001: Java exception: 
> '/home/cloudtst/SinglePerf/testruns/scripts/dbtpcc/tmp/T1128794811044.tmp 
> (Too many open files): java.io.FileNotFoundException'.
> ij

