An old COBOL system we had did this. It never allocated less than 64
blocks of disk space. It did work.
A lot of modern file systems (e.g. EXT2 and EXT3) do this anyway by
reserving space after your file for later use. So if you are using a
file system with plenty of free space, file expansion will (mostly) be
a contiguous extension of the existing data.
Apart from file fragmentation, there is also table space fragmentation.
A sequential read through an index on a table may not be a sequential
read along a disk cylinder, which results in low performance. I don't
know whether VACUUM helps or hinders this effect.
From experience I know that dumping an entire DB as SQL, destroying
the database, then parsing it back in can result in significant read
performance gains where the database is not cached by the OS file
cache. I would *guess* that where the database is cached, none of this
will make much difference. :)
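Roughly the idea, as an untested sketch using Python's sqlite3 module
(the paths and the function name are only placeholders):

    import sqlite3

    def dump_and_reload(src_path, dst_path):
        # Dump the whole database as SQL text, then replay it into a
        # fresh file. The new file is written out sequentially, so
        # reads tend to be more contiguous afterwards; the old file can
        # then be replaced with the new one.
        src = sqlite3.connect(src_path)
        dst = sqlite3.connect(dst_path)
        dst.executescript("\n".join(src.iterdump()))
        src.close()
        dst.close()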
Just my two pence worth...
Cory Nelson wrote:
I think his issue is that the database is changing size too often. He
wants it to automatically expand in larger chunks so there is less
fragmentation on the disk.
Good idea, assuming it's settable via pragma.
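Until something like that exists, one rough workaround is to fill and
then drop a throwaway table, so the freed pages stay on the database's
free list and later inserts reuse them instead of extending the file a
little at a time. An untested sketch using Python's sqlite3 module (the
table name and sizes are made up; it assumes auto_vacuum is off, which
is the default):

    import sqlite3

    def preallocate(path, n_rows=10000, row_bytes=1024):
        # Grow the file up front by filling a scratch table with blobs,
        # then drop it: with auto_vacuum off the freed pages stay in the
        # file and are reused later, so growth happens in one big step.
        conn = sqlite3.connect(path)
        conn.execute("CREATE TABLE IF NOT EXISTS _filler (data BLOB)")
        for _ in range(n_rows):
            conn.execute("INSERT INTO _filler VALUES (?)",
                         (b"\0" * row_bytes,))
        conn.commit()   # the file has now grown to its full size
        conn.execute("DROP TABLE _filler")
        conn.commit()   # pages go on the free list, file keeps its size
        conn.close()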
On 9/13/05, Jay Sprenkle <[EMAIL PROTECTED]> wrote:
On 9/13/05, GreatNews <[EMAIL PROTECTED]> wrote:
Hi D. Richard Hipp,
I'm developing a desktop rss reader using your excellent sqlite engine.
One issue my users found is that the sqlite database can get heavily
fragmented over time. I'm wondering if it's a viable suggestion that
sqlite pre-allocates disk space when creating the database, and grows
the db file by bigger chunks (e.g. grow by 20% or so in size each
time)?
Why not do a vacuum every 10th time you exit the program (or something
similar)?
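For example (an untested sketch using Python's sqlite3 module; the
interval and run counter are placeholders for whatever the application
already tracks):

    import sqlite3

    VACUUM_EVERY = 10  # made-up interval between rebuilds

    def close_db(conn, run_count):
        # On every 10th shutdown, rebuild the database file so it is
        # packed into contiguous, freshly written pages.
        conn.commit()  # VACUUM cannot run inside an open transaction
        if run_count % VACUUM_EVERY == 0:
            conn.execute("VACUUM")
        conn.close()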
---
The Castles of Dereth Calendar: a tour of the art and architecture of
Asheron's Call
http://www.lulu.com/content/77264
--
Ben Clewett
+44(0)1923 460000
Project Manager
Road Tech Computer Systems Ltd
http://www.roadrunner.uk.com