> Do not be tempted by the incremental vacuum feature. Incremental
> vacuum will reduce the database size as content is deleted, but it
> will not reduce fragmentation. In fact, incremental vacuum will
> likely increase fragmentation. Incremental vacuum is just a variation
> on auto_vacuum. It is designed for flash memory with zero seek latency.
>
> D. Richard Hipp
> d...@hwaci.com

Thanks for the reply, but I am confused again. Is incremental vacuum different from the VACUUM command? It seems that VACUUM would defragment the database, according to the description here:

http://www.sqlite.org/lang_vacuum.html

"The VACUUM command cleans the main database by copying its contents to a temporary database file and reloading the original database file from the copy. This eliminates free pages, aligns table data to be contiguous, and otherwise cleans up the database file structure."

If incremental vacuum and VACUUM are the same, then I am still uncertain what to do about my original problem. Any ideas on why the sum function is slow on my existing table but fast on a copy of the table? Also, after calling VACUUM, sum is fast on the original table.

Here's my original question:

I am trying to use the sum function on a column in a table with ~450K rows:

Select sum(Col4) from Table1

where Table1 looks like this:

Create TABLE Table1
(
  Col1 INTEGER NOT NULL,
  Col2 INTEGER NOT NULL,
  Col3 INTEGER NOT NULL,
  Col4 BIGINT NOT NULL,
  Col5 BIGINT NOT NULL,
  Col6 BLOB NOT NULL,
  Col7 CHAR(1) DEFAULT '0',
  Col8 NUMERIC(2) NOT NULL,
  Col9 NUMERIC(2) NOT NULL,
  Col10 INTEGER NOT NULL,
  Col11 INTEGER NOT NULL,
  CONSTRAINT FK_1 FOREIGN KEY (Col1) REFERENCES Table2 (Col1)
    ON DELETE CASCADE ON UPDATE CASCADE,
  CONSTRAINT PK_1 PRIMARY KEY (Col10, Col11, Col1, Col3 DESC)
);

It takes over two minutes to execute against the original table. I created an exact copy of the table with the same indices and constraints and inserted all the data from the original table into it. Summing that column on the copied table takes only a few seconds. I am guessing the copied table is faster because its data is arranged contiguously, but that is just a guess. Can anyone shed some light on this?
Making a copy of the table is not an option, so is there anything I can do to get better performance from the original table?

Thanks

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
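[Editor's note: the distinction Dr. Hipp draws can also be sketched in code. Incremental vacuum hands free pages back to the filesystem without rewriting the pages that remain, so their on-disk layout (and any fragmentation) is preserved. A hedged sketch, assuming a fresh database file where auto_vacuum can still be set; the path and table name are made up:]

```python
import os
import sqlite3
import tempfile

# auto_vacuum must be chosen before the first table is created
# (or applied later by running a full VACUUM).
path = os.path.join(tempfile.mkdtemp(), "incr.db")
con = sqlite3.connect(path)
con.execute("PRAGMA auto_vacuum = INCREMENTAL")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload BLOB)")
con.executemany("INSERT INTO t (payload) VALUES (?)",
                [(b"x" * 1024,) for _ in range(1000)])
con.commit()

con.execute("DELETE FROM t WHERE id <= 500")
con.commit()

size_before = os.path.getsize(path)
# Release all free pages back to the OS; fetchall() steps the pragma
# to completion. The surviving pages are not rewritten, which is why
# this shrinks the file but does not defragment it.
con.execute("PRAGMA incremental_vacuum").fetchall()
size_after = os.path.getsize(path)

print(size_before, size_after)  # size_after < size_before
con.close()
```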