On 26 Jun 2012, at 2:07am, Bo Peng <ben....@gmail.com> wrote:

> My application needs to run a lot of queries against a large sqlite
> database (>100G) with many small tables (>10,000). The performance of
> the queries is acceptable if the database is on an SSD drive, but can
> be 50 times or more slower on a regular or network drive.

Most of the problem is that by keeping the data in separate tables, you are 
ensuring that you almost never get cache hits.  Do these tables need to be 
separate, or can you merge them on disk?  Do they have the same columns?
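If they share a schema, folding them into one table keyed by the original 
table name usually helps.  A rough sketch (untested; the column names are 
invented, substitute your real ones):

    -- Fold per-item tables t1, t2, ... into a single table.
    -- "source" records which original table each row came from.
    CREATE TABLE merged (
        source TEXT NOT NULL,   -- original table name, e.g. 't1'
        a INTEGER,              -- placeholder columns
        b TEXT
    );
    INSERT INTO merged SELECT 't1', a, b FROM t1;
    INSERT INTO merged SELECT 't2', a, b FROM t2;
    CREATE INDEX merged_source ON merged(source);

Queries that used to hit one small table filter on "source" instead, and 
one big indexed table makes far better use of the page cache than 10,000 
tiny ones.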

> Because some users have slow disks and plenty of RAM (e.g. on a
> high-mem cluster system), I would like to create an in-memory database
> to cache frequently accessed tables. For example, if the user tells me
> he can allocate 20G RAM to cache the tables, I will create a :memory:
> database and use it to cache tables until the database reaches 20G in
> size. The question is: How do I know the size of an in-memory database
> so that I can keep it under the specified size?

I believe you can multiply the page count by the page size:

PRAGMA page_count
PRAGMA page_size

Does anyone know whether the form "PRAGMA memory.page_count" is what works 
here?
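
If the cache database is ATTACHed rather than opened as main, the prefix is 
whatever schema name you gave it in ATTACH, not a fixed keyword.  A sketch 
of how I would expect it to work (untested; "memdb" is just an example 
name):

    ATTACH DATABASE ':memory:' AS memdb;
    -- ... create and populate the cache tables in memdb ...
    PRAGMA memdb.page_count;   -- pages currently in the database
    PRAGMA memdb.page_size;    -- bytes per page
    -- approximate size in bytes = page_count * page_size

Check that product periodically and stop caching new tables once it passes 
the user's limit.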

Simon.