Thanks, Simon, for your detailed answer.
I will try the suggestions you and Michael supplied and see if there is
any significant improvement.
Tim
On 3/20/2012 11:13 AM, Simon Slavin wrote:
On 20 Mar 2012, at 2:21pm, Tim Morton <[email protected]> wrote:
My app reads dozens of SQLite databases ranging in size from 1MB to 100MB with
a simple table structure like,
"create table dictionary(id INTEGER PRIMARY KEY, topics, definition)"
On startup the app reads all the databases to extract the "topics" column data. With the
DB configured as above extracting the topic data is a lengthy process. It seems the whole DB file
is being read to just access the relatively small "topics" column.
If I add a second table with just the topics data,
"create table dictionary_topics(id INTEGER PRIMARY KEY, topics)"
access is quicker. However, if I make another DB file with just the topics
column, access is much quicker. I don't really want to use two files to access
a single DB.
Also, the DB is only read from; not written to by the app.
Thank you for your detailed description which helps a lot in considering your
situation.
My questions,
Does table creation order affect the speed of access?
These things depend mostly on your hardware. Windows deals much better with
defragmented files than with fragmented ones. A lot of its speed comes from
assumptions such as: if you're reading sector s, you're shortly going to want
sector s+1. To speed up your situation, after your database file has been
written, execute a VACUUM command in SQL, then (if you're running Windows)
defragment your hard disk.
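For what it's worth, VACUUM doesn't need the shell; it can be issued from any SQLite connection. A minimal Python sketch (the temp file stands in for one of the real dictionary .db files, and the sample rows are invented):

```python
import os
import sqlite3
import tempfile

# Hypothetical stand-in for one of the app's dictionary database files.
path = os.path.join(tempfile.mkdtemp(), "dictionary.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE dictionary(id INTEGER PRIMARY KEY, topics, definition)")
conn.executemany(
    "INSERT INTO dictionary(topics, definition) VALUES (?, ?)",
    [("alpha", "first letter"), ("beta", "second letter")],
)
conn.commit()

# VACUUM rewrites the whole database file, packing its pages into a
# compact, contiguous layout (the in-file equivalent of defragmenting).
conn.execute("VACUUM")
conn.close()
```

VACUUM must run outside a transaction, which is why the commit comes first.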
If the above does not give you fast enough access, then it may be possible to
speed things up still further by writing one TABLE then the other. You can do
this when you originally make the file, or you can use the sqlite shell tool to
.dump the file to SQL commands and .read the SQL commands back in to form
another database file. After doing this, once again, if you're running Windows
do another defragment. I don't expect that to make much difference but it
might be worth trying. You should also look at the PRAGMA command Michael
recommended.
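The .dump/.read round trip can also be scripted without the shell tool; a rough Python equivalent using the sqlite3 module's `iterdump` (the in-memory databases and sample row are stand-ins for real files):

```python
import sqlite3

# Source database: stand-in for the existing, possibly fragmented file.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE dictionary(id INTEGER PRIMARY KEY, topics, definition)")
src.execute("INSERT INTO dictionary(topics, definition) VALUES ('alpha', 'first')")
src.commit()

# Equivalent of .dump: serialize the whole database to SQL statements.
sql_script = "\n".join(src.iterdump())
src.close()

# Equivalent of .read: replay the SQL into a fresh database, which
# writes each table's pages out one table at a time, contiguously.
dst = sqlite3.connect(":memory:")  # use a new .db path in practice
dst.executescript(sql_script)
```

Rebuilding into a brand-new file is what groups each table's pages together, just like the .dump/.read suggestion above.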
Is there a way to read only the part of the file that has the queried
table/column, thus saving time and hard drive grinding?
Is it possible to read a small table/column from a huge file nearly as quickly
as the small table alone from a small file? And if so, how?
Your TABLE definition is as efficient as it can be. Your most efficient
reading code in each situation is
SELECT id,topics FROM dictionary
Any other optimization can't be done inside the SQL command.
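For completeness, one schema-level option the reply doesn't cover: because `id` is the INTEGER PRIMARY KEY (the rowid), a plain index on `topics` stores (topics, rowid) pairs, so a query touching only those two columns can be answered from the index b-tree without reading the pages holding the long definitions. A sketch (index name, sample rows, and the ORDER BY, added to make the planner's choice deterministic, are all illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stands in for one of the real .db files
conn.execute("CREATE TABLE dictionary(id INTEGER PRIMARY KEY, topics, definition)")
conn.executemany(
    "INSERT INTO dictionary(topics, definition) VALUES (?, ?)",
    [("apple", "x" * 1000), ("banana", "y" * 1000)],  # long definitions
)

# The index holds (topics, rowid) entries, so SELECT id, topics can be
# satisfied entirely from the index: a "covering index" for that query.
conn.execute("CREATE INDEX idx_topics ON dictionary(topics)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id, topics FROM dictionary ORDER BY topics"
).fetchall()
topics = conn.execute(
    "SELECT id, topics FROM dictionary ORDER BY topics"
).fetchall()
```

EXPLAIN QUERY PLAN reports a COVERING INDEX scan here, confirming the definition pages are never touched; the index does cost extra file size and insert time.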
Simon.
_______________________________________________
sqlite-users mailing list
[email protected]
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users