> I can see where he is coming from. By reserving the appropriate number of
> pages up front the import does not have to wait for disk IO or CPU cycles if
> it runs out of pages.
I wouldn't be so sure about that. Did anybody make any measurements?

1) I don't know where you think CPU cycles are saved, but you definitely
spend some extra CPU cycles maintaining the free-list of pages, which would
never exist if the database grew page-by-page.

2) Even if you use Martin's technique of creating some big blob, why do you
think that SQLite will grow the database file by the necessary number of
pages at once instead of page-by-page? And is there anything in SQLite's
code that writes several sequential pages in one OS call instead of writing
them page-by-page?

3) Even if you don't use Martin's technique and do somehow let the database
grow in one system call, why do you think that letting the file grow by one
big chunk uses less IO and fewer CPU cycles inside the OS file system layer
than letting it grow piece-by-piece, or should I say sector-by-sector?

I can agree that making the file grow in one big piece instead of many
small ones seems to compact most of the IO into one call instead of many.
But will it be any faster? I doubt it. And bear in mind that all your
efforts can be immediately trashed by another process reading some big
file(s) that consumes the whole OS file cache, so the OS will have to
re-read your database file later when you actually need it. That way, I
guess, the overall number of IO operations on the system will only
increase...

Pavel

On Thu, Aug 12, 2010 at 6:34 AM, Timothy A. Sawyer
<tsaw...@mybowlingdiary.com> wrote:
> I can see where he is coming from. By reserving the appropriate number of
> pages up front the import does not have to wait for disk IO or CPU cycles if
> it runs out of pages.
>
> ------Original Message------
> From: Pavel Ivanov
> Sender: sqlite-users-boun...@sqlite.org
> To: General Discussion of SQLite Database
> ReplyTo: General Discussion of SQLite Database
> Subject: Re: [sqlite] Reserve database pages
> Sent: Aug 12, 2010 06:20
>
>> I can approximately calculate how big the new database will grow.
>> Is there a way to tell SQLite to reserve an initial space or number of
>> pages instead of letting the database file grow again and again? I'm
>> looking for a way to speed up the import.
>
> Why do you think that this kind of function will speed up your import?
>
> Pavel
>
> On Thu, Aug 12, 2010 at 6:08 AM, TeDe <tede_1...@gmx.de> wrote:
>> Hello,
>>
>> I want to import a big subset of data from one database to a new one. I
>> attach the two databases together and use
>>
>> insert into customers select * from source.customers where name LIKE 'x%'
>>
>> I can approximately calculate how big the new database will grow. Is
>> there a way to tell SQLite to reserve an initial space or number of
>> pages instead of letting the database file grow again and again? I'm
>> looking for a way to speed up the import.
>>
>> Thanks in advance,
>>
>> Thomas
>
> ** Sent from my mobile device with the tiny keys **
> Timothy A. Sawyer, CISSP
> Managing Director
> MBD Consulting, LLC
> 55 Madison Av., Suite 400
> Morristown, NJ 07960
> Phone: (973) 285-3261 | Fax: (973) 538-0503
> Web: http://www.mybowlingdiary.com
> Email: tsaw...@mybowlingdiary.com

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
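[For context: the "big blob" technique Pavel refers to can be sketched as
below. This is a minimal, hedged sketch using Python's stdlib sqlite3 module;
the table name `pad` and the 10 MB size are made up for illustration. The idea
is that inserting a large zeroblob() forces the file to grow once, and
deleting it leaves the freed pages on SQLite's free-list (with auto_vacuum
off), so later inserts reuse them instead of extending the file.]

```python
import os
import sqlite3
import tempfile

# Hypothetical demo of the "big blob" pre-allocation trick.
db_path = os.path.join(tempfile.mkdtemp(), "reserved.db")
conn = sqlite3.connect(db_path)

# auto_vacuum must be off (the default) or the DELETE below would shrink
# the file again; it must be set before the first table is created.
conn.execute("PRAGMA auto_vacuum = NONE")

conn.execute("CREATE TABLE pad (filler BLOB)")
# zeroblob(N) is a built-in SQLite function producing N zero bytes.
conn.execute("INSERT INTO pad VALUES (zeroblob(10 * 1024 * 1024))")  # ~10 MB
conn.commit()
size_after_pad = os.path.getsize(db_path)

conn.execute("DELETE FROM pad")
conn.commit()
size_after_delete = os.path.getsize(db_path)
conn.close()

# The file does not shrink after the DELETE: those pages are now "reserved"
# on the free-list and will be reused by subsequent inserts.
print(size_after_pad, size_after_delete)
```

Whether reusing free-list pages is actually faster than letting the file
grow page-by-page is exactly the question Pavel raises above.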