On 12.08.2010 13:16, Martin.Engelschalk wrote:
> On 12.08.2010 13:04, TeDe wrote:
>> On 12.08.2010 12:16, Martin.Engelschalk wrote:
>>> On 12.08.2010 12:08, TeDe wrote:
>>>>    Hello,
>>>>
>>>> I want to import a big subset of data from one database to a new one. I
>>>> attach the two databases together and use
>>>>
>>>> insert into customers select * from source.customers where name LIKE 'x%'
>>>>
>>>> I can approximately calculate how big the new database will grow. Is
>>>> there a way to tell SQLite to reserve an initial space or number of pages
>>>> instead of letting the database file grow again and again? I'm looking
>>>> for a way to speed up the import.
>>>>
>>> Hello Thomas,
>>>
>>> I create a dummy table with a blob field and fill it with a very large
>>> empty blob. Then I drop the table. The empty pages remain behind and can
>>> then be used by the following inserts.
>>>
>> Hello Martin,
>>
>> that sounds like a good idea. Do you use it to have enough space for
>> later operations, or because you want to speed up the inserts?
>> How big is the space you reserve this way, and how much faster is it? I
>> presume you have to allocate quite big BLOBs.
>>
>> Best regards,
>>
>> Thomas
> Hello Thomas,
>
> My primary goal was not a speedy insert but to avoid fragmentation of 
> the resulting database file, which slows down later access to the file. 
> So this is not exactly on the topic of your post.
> I did not measure the changes in speed of the insert. However, later 
> selects, which in my case use practically all the data in the database, 
> speed up on the order of 20%.
> I have to admit that this does not seem like much. However, my customer 
> for some reason did not like the fragmentation and insisted on a solution.
Is that because of the lower fragmentation?

Anyway, I consider 20% very good for the little effort you have to make,
especially when you can re-use the code. If you have a well designed
database and a well written application, it could become a hard job to
squeeze out another 20% if you need them.
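
For anyone reading along, Martin's pre-allocation trick combined with the ATTACH-based copy from the original question can be sketched in Python's sqlite3 module. This is a minimal illustration, not code from the thread: the file names, the `_filler` table, and the 1 MB size are made-up placeholders, and it assumes the default SQLite configuration (auto_vacuum off), where pages of a dropped table go onto the free list and are reused by later inserts.

```python
import os
import sqlite3
import tempfile

def preallocate(con, size_bytes):
    # Insert one large zero-filled blob (SQLite's zeroblob() function),
    # forcing SQLite to allocate pages in the file, then drop the table.
    # With auto_vacuum off, the freed pages stay in the file and are
    # reused by subsequent inserts instead of growing the file again.
    con.execute("CREATE TABLE _filler (b BLOB)")
    con.execute("INSERT INTO _filler VALUES (zeroblob(?))", (size_bytes,))
    con.commit()
    con.execute("DROP TABLE _filler")
    con.commit()

tmp = tempfile.mkdtemp()
src_path = os.path.join(tmp, "source.db")
dst_path = os.path.join(tmp, "target.db")

# Build a small example source database.
src = sqlite3.connect(src_path)
src.execute("CREATE TABLE customers (name TEXT)")
src.executemany("INSERT INTO customers VALUES (?)",
                [("xavier",), ("xena",), ("bob",)])
src.commit()
src.close()

# Pre-allocate pages in the target, then attach the source and bulk-copy.
dst = sqlite3.connect(dst_path)
dst.execute("CREATE TABLE customers (name TEXT)")
preallocate(dst, 1024 * 1024)  # reserve roughly 1 MB of pages
dst.execute("ATTACH DATABASE ? AS source", (src_path,))
dst.execute("INSERT INTO customers "
            "SELECT * FROM source.customers WHERE name LIKE 'x%'")
dst.commit()
```

The size passed to preallocate() would be the estimate mentioned in the original question, i.e. the approximate final size of the imported subset.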

Thomas
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users