On Fri, Dec 01, 2006 at 08:35:24AM +0100, kamil wrote:
> I want to preallocate disk space for a database. I have only one table with ~1 
> million entries, each entry takes about 30 bytes. Entries are added/removed 
> but there is some maximum number of items which can be put into the table 
> at the same time. Is it possible to allocate a fixed disk space for such a 
> database? How large should it be? If yes, then is there a better way to 
> create large files than massive inserts/deletes?

It's not necessarily the case that pre-allocating space does what I
suspect you want.  Consider a filesystem like ZFS.  Writing 1GB of
garbage to a file that you will eventually overwrite does not guarantee
that you'll have 1GB of space to write into that file.  That's because
ZFS uses a copy-on-write approach: every overwrite allocates a new
block, and snapshots can hold references to the old blocks that would
otherwise be released.  And SQLite might use CoW too someday, for all
I know, so that pre-creating 10^6 rows wouldn't necessarily guarantee
that you have room for 10^6 UPDATEs no matter what filesystem you're
using.

What you want is a way to get a guarantee from the OS that there will be
some amount of disk space that you can use to grow some file.  You can't
get that portably.
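The closest non-portable approximation I know of is posix_fallocate(),
which asks the kernel to reserve space for a file up front.  A minimal
sketch (my own illustration, not something from SQLite; the file name
and size are made up, and whether the reservation is a hard guarantee
still depends on the filesystem, which is exactly the point above):

```python
import os

def preallocate(path, nbytes):
    """Ask the OS to reserve nbytes of space for path.

    Uses posix_fallocate(), which is available on Linux but not
    everywhere (e.g. not on macOS), and which CoW filesystems may
    not honor as a hard guarantee.  Raises OSError if unsupported.
    """
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        # Reserve the byte range [0, nbytes); also extends the
        # file size to at least nbytes on success.
        os.posix_fallocate(fd, 0, nbytes)
        return os.fstat(fd).st_size
    finally:
        os.close(fd)

# ~30 bytes x 10^6 rows, ignoring SQLite's page and index overhead.
size = preallocate("prealloc.db", 30 * 10**6)
os.remove("prealloc.db")
```

Even where this succeeds, it only reserves space for one file of a
known size; it is not the general "guarantee me room to grow" facility
you'd really want, and there's no portable way to get that.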

Nico
-- 

-----------------------------------------------------------------------------
To unsubscribe, send email to [EMAIL PROTECTED]
-----------------------------------------------------------------------------