On 28/05/14 02:26, Hadashi, Rinat wrote:
> I have 13 tables, of which 2 are huge, 2 are medium and the rest are
> very small. My huge tables have 3 columns: numeric, numeric and varchar
> with millions of rows. I keep an index on the numeric columns.
> 
> Does that bring up any column ordering suggestion?

Showing the schema and slowest query will help.  Also what is the average
size of the varchar values?  What operating system and filesystem are you
using?

SQLite stores each row with its columns encoded sequentially, one after
another.  If, for example, your varchar values were around 32kb each, then
reading two rows would require seeking/reading about every 32kb, which is
far less efficient than if they were 10 bytes each, in which case multiple
rows come back with each read.
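Since the advice hinges on how big the varchar values actually are, a
quick query can measure them.  A minimal sketch (the table name
`big_table` and its columns are hypothetical, just matching the
description of two numerics plus a varchar):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for your database file

# Hypothetical schema matching the description: two numerics and a varchar,
# with an index on the numeric columns.
con.execute("CREATE TABLE big_table (a NUMERIC, b NUMERIC, val VARCHAR)")
con.execute("CREATE INDEX big_idx ON big_table (a, b)")
con.executemany("INSERT INTO big_table VALUES (?, ?, ?)",
                [(i, i * 2, "x" * 10) for i in range(1000)])

# Average size of the varchar column, in bytes
avg = con.execute("SELECT AVG(LENGTH(val)) FROM big_table").fetchone()[0]
print(avg)  # 10.0
```

Run the same AVG(LENGTH(...)) query against your real table to find out
which of the two cases above you are in.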

You haven't mentioned what you have tried already.  Good starters are
running VACUUM after populating the database and determining the optimal
page size.  The latter will depend on your queries - e.g. a 64kb page size
results in 64kb of i/o even if only one byte is needed from a page.  If
you use NTFS compression then it operates on units of 64kb, so a 64kb page
size would be optimal.
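As a concrete sketch of the page size point (an in-memory database here
as a stand-in for a real file; 64kb is just the NTFS example above, not a
recommendation):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# The page size must be set before the database content is created;
# on an existing database a pending page_size change only takes effect
# when VACUUM rebuilds the file.
con.execute("PRAGMA page_size = 65536")
con.execute("VACUUM")

size = con.execute("PRAGMA page_size").fetchone()[0]
print(size)  # 65536
```

On a real file you would set the pragma, load (or VACUUM) the data, and
then re-run your benchmark queries to see whether the change helped.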

I recommend you build a deterministic, repeatable, representative set of
data and queries.  That way you can try different settings such as page
size, file system compression and operating system tuning (if applicable)
and compare the results.

There isn't some secret magic wand that will suddenly make things faster -
instead you need to measure and tweak in multiple places.

Roger

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users