Today someone posted a bug report about not being able to add or remove an
index on a cached table with 8 million rows and a 1.3 GB data file. The
limitation stems from the current 2GB size limit on cached tables: adding or
removing an index means all the rows in the table have to be recreated, and
the limit is reached during this operation.

A while back someone offered to help with testing a solution that would
increase the limit. I did the initial work, which is commented out in the
source files, but the person who was supposed to do the testing had other
commitments, so we did not pursue the matter. This work would allow the size
limit to be raised to a theoretical 16GB in a single .data file. I know it
would be better to use a number of files instead of a single one, but we
don't have the time to implement that for the next version.
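To show the arithmetic behind the 16GB figure, here is a rough sketch of the
kind of offset scaling involved. The class and method names are illustrative
only, not the actual hsqldb code: stored row positions stay 32-bit ints, but
they are interpreted as multiples of 8 bytes, so 2^31 position units cover
2^31 * 8 bytes = 16GB.

    // Sketch only - names are illustrative, not the actual hsqldb classes.
    // Stored row positions remain 32-bit ints but are interpreted as
    // multiples of an 8-byte block, so the addressable range grows from
    // 2^31 bytes (2GB) to 2^31 * 8 bytes (16GB).
    public class ScaledPointer {

        static final int SCALE = 8;    // bytes per stored position unit

        // stored int position -> real file offset
        static long toFileOffset(int storedPosition) {
            return (long) storedPosition * SCALE;
        }

        // real file offset -> storable int position; rows must start on
        // 8-byte boundaries for this to be lossless
        static int toStoredPosition(long fileOffset) {
            return (int) (fileOffset / SCALE);
        }
    }

The trade-off in a scheme like this is that each row has to start on an
8-byte boundary, so a few bytes per row can be lost to padding.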

The questions are:

1. What are the random access file size limits with different JVM /
operating system combinations? We need this information to prevent the file
from growing to an unsupported size. AFAIK there is a 4GB limit with NT
and a 2GB limit on ext2, but I won't rely on what I think I know and need
informed comments on this. (A rough sketch of the kind of check I have in
mind follows after question 2.)

2. We need volunteers to run tests with large files, which will take a long
time to complete. The more the merrier. I will provide the tests. Who is
prepared to run some tests?
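
For illustration, a probe along these lines could check whether a platform
can seek and write past the 2GB and 4GB marks. This is only a sketch of the
idea, not the tests I will provide; the file name is arbitrary and the real
tests need to write and verify full-size files.

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;

    // Seeks just past the 2GB and 4GB marks, writes a byte and reads it
    // back. A platform limit shows up as an IOException or a mismatch.
    // Seeking past the end creates a sparse file on most file systems,
    // so this does not prove the disk can hold a full-size file.
    public class LargeFileProbe {

        public static void main(String[] args) throws IOException {
            File             file  = new File("probe.data");
            long[]           marks = { (1L << 31) + 16, (1L << 32) + 16 };
            RandomAccessFile raf   = new RandomAccessFile(file, "rw");

            try {
                for (int i = 0; i < marks.length; i++) {
                    raf.seek(marks[i]);
                    raf.writeByte(0x5A);
                    raf.seek(marks[i]);

                    int value = raf.readByte() & 0xff;

                    System.out.println("offset " + marks[i] + " -> "
                            + (value == 0x5A ? "ok" : "read mismatch"));
                }
            } finally {
                raf.close();
                file.delete();
            }
        }
    }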

If there are enough volunteers, I will run the initial tests and commit the
new code to CVS.

Fred Toussi


