There is a problem with accessing a file in a directory that contains a huge
number of files: filesystem directory indices do not handle this well. I
tested 100 million 1 KB files stored in SQLite, and the results were better
than reading the same files from a set of directories on the filesystem. But
for files of about 1 MB and larger, SQLite's performance is not good.

Is there a reason why reading big BLOBs from SQLite may be slow? BLOB
performance can limit the performance of FTS and of other custom
storage/index implementations (SpatiaLite, etc.), and I think that is the
more important issue, especially when we need the FTS index to act as a
fast hash index.

2011/9/21 Richard Hipp <d...@sqlite.org>:
> If you are storing large BLOBs in SQLite, can you read them faster if they
> are stored directly in the database file, or can you get to them quicker if
> you store just a filename in the database and read the BLOB content from a
> separate file?
>
> We did some experiments to try to answer this question, and the results
> seemed interesting enough to share with the community at large.  Bottom
> line:  On Linux workstations, it is faster to store BLOBs in the database if
> they are less than about 100KB in size, and faster to store them in a
> separate file if they are larger than about 100KB.  This is on Ubuntu with
> EXT4 and a fast SATA disk - your mileage may vary with different operating
> systems, filesystems, and hardware.
>
> The complete report is here:
> http://www.sqlite.org/intern-v-extern-blob.html
>
> --
> D. Richard Hipp
> d...@sqlite.org
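
In concrete terms, the two access paths being compared look roughly like
this. A sketch only; the docs(id, content) and docs(id, filename) schemas
are my own illustrative assumptions, not the schema used in the report:

#include <sqlite3.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Internal path: the BLOB lives in the database file itself. */
static void *read_internal(sqlite3 *db, sqlite3_int64 id, int *pLen)
{
    sqlite3_stmt *stmt;
    void *copy = 0;
    *pLen = 0;
    if (sqlite3_prepare_v2(db, "SELECT content FROM docs WHERE id=?",
                           -1, &stmt, 0) != SQLITE_OK) return 0;
    sqlite3_bind_int64(stmt, 1, id);
    if (sqlite3_step(stmt) == SQLITE_ROW) {
        *pLen = sqlite3_column_bytes(stmt, 0);
        copy = malloc(*pLen > 0 ? *pLen : 1);
        if (copy && *pLen > 0)
            memcpy(copy, sqlite3_column_blob(stmt, 0), *pLen);
    }
    sqlite3_finalize(stmt);
    return copy;
}

/* External path: the database stores only a filename and the
** content is read from a separate file. */
static void *read_external(sqlite3 *db, sqlite3_int64 id, int *pLen)
{
    sqlite3_stmt *stmt;
    void *copy = 0;
    *pLen = 0;
    if (sqlite3_prepare_v2(db, "SELECT filename FROM docs WHERE id=?",
                           -1, &stmt, 0) != SQLITE_OK) return 0;
    sqlite3_bind_int64(stmt, 1, id);
    if (sqlite3_step(stmt) == SQLITE_ROW) {
        FILE *f = fopen((const char*)sqlite3_column_text(stmt, 0), "rb");
        if (f) {
            fseek(f, 0, SEEK_END);
            *pLen = (int)ftell(f);
            rewind(f);
            copy = malloc(*pLen > 0 ? *pLen : 1);
            if (copy && fread(copy, 1, *pLen, f) != (size_t)*pLen) {
                free(copy);
                copy = 0;
            }
            fclose(f);
        }
    }
    sqlite3_finalize(stmt);
    return copy;
}

Presumably the crossover near 100 KB reflects the trade-off between the
per-object open/close cost of the external path and the page-at-a-time
overflow-chain reads of the internal path.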



-- 
Best regards, Alexey Pechnikov.
http://pechnikov.tel/