On Fri, Jul 18, 2014 at 6:44 PM, David Canterbrie <[email protected]>
wrote:

> Hello,
>
> I've been tasked with trying to understand how much of a performance hit
> one would take if one had to scan a table in its entirety versus reading
> the same data stored as newline-delimited records (or something of that
> sort) in a flat file.
>
> The hypothesis we're trying to test is that reading sequentially from
> SQLite (without indices) should be comparable to reading from a file that
> holds the same data, within +/- 1-2%.
>

Hard to say.  It depends on a lot of factors.

In a test of reading BLOBs out of an SQLite database (seen at
http://www.sqlite.org/intern-v-extern-blob.html) we find that SQLite can be
up to 2.5x faster or 4x slower than direct file I/O depending on the BLOB
size and the page size of the database file.
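
If you want a rough apples-to-apples number for your own data, the simplest
thing is to time a full-table scan against a sequential read of the same
values from a flat file.  Below is a minimal sketch in Python using the
standard sqlite3 module; it assumes a database test.db with a single table
t(x TEXT) and a flat file data.txt holding the same values one per line.
Those names and the schema are placeholders, and the result will vary with
record size, page size, and whether the files are already in the OS cache.

    import sqlite3, time

    DB_PATH = "test.db"    # placeholder database with a table t(x TEXT)
    TXT_PATH = "data.txt"  # placeholder flat file, one value per line

    def scan_sqlite():
        # Full-table scan; no index is consulted for SELECT x FROM t.
        con = sqlite3.connect(DB_PATH)
        t0 = time.perf_counter()
        total = 0
        for (x,) in con.execute("SELECT x FROM t"):
            total += len(x)
        con.close()
        return total, time.perf_counter() - t0

    def scan_file():
        # Sequential read of the newline-delimited flat file.
        t0 = time.perf_counter()
        total = 0
        with open(TXT_PATH) as f:
            for line in f:
                total += len(line.rstrip("\n"))
        return total, time.perf_counter() - t0

    if __name__ == "__main__":
        for name, fn in (("sqlite scan", scan_sqlite), ("file scan", scan_file)):
            total, dt = fn()
            print("%s: %d bytes in %.3f seconds" % (name, total, dt))

Run each case more than once so the timings reflect a warm cache; the
cold-cache and warm-cache numbers often differ by far more than 1-2%.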


>
> My first question: does that sound reasonable, and has anyone ever done
> such a test?
>
> Secondly, is this even a valid test? Would someone want to store
> non-indexed data in an SQLite table?
>
> The next test is where it gets interesting: if we were to add an index,
> how would full-scan performance degrade? Does SQLite have an option to
> maintain the index in a separate file, so that all the data can be stored
> sequentially?
>
> Dave
>



-- 
D. Richard Hipp
[email protected]
_______________________________________________
sqlite-users mailing list
[email protected]
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
