On Fri, 2014-10-24 at 19:09 -0400, Ross Altman wrote:
> I'm currently working with a pretty gigantic database (116 GB at the
> moment, and growing). Performing a simple SELECT routine with one filter
> takes between 7 and 15 minutes, which is starting to become a problem. The
> command I'm using is the following:
> 
> SELECT NVERTS FROM ToricCY WHERE H11=2;

Questions:

1) Are you querying a local database file, or pulling the data in over a
network connection?

2) Is there an index defined for the column H11?
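
If the answer to (2) is no, that is very likely your problem: without an
index on H11, SQLite has to scan the entire table for every query, and on
116 GB that alone explains the minutes-long runtimes. Something along
these lines should help (the index name is just a suggestion):

    -- One-time cost; afterwards WHERE H11=2 becomes an index lookup
    CREATE INDEX IF NOT EXISTS idx_toriccy_h11 ON ToricCY(H11);

    -- Check that the plan now says "SEARCH ... USING INDEX", not "SCAN"
    EXPLAIN QUERY PLAN SELECT NVERTS FROM ToricCY WHERE H11=2;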

> Pretty basic. I just want the NVERTS column of the table ToricCY where
> another column labeled H11 is 2, nothing fancy. Because of the huge size of
> the database, I would expect it to take a while, but I was wondering if you
> guys have any recommendations on how to speed it up (if possible).
> 
> Also, because this problem will only get worse as the database grows, I'm
> looking for alternative approaches to storing this data. Does anyone have
> experience working with databases this big?


116 GB seems to be at odds with a database system named "SQLite". :) But
most popular client/server RDBMSs should be able to handle it without
trouble. MySQL, for example, supports table partitioning, which can have
an enormous impact on query performance. With SQLite, you can approximate
partitioning by splitting the database into several smaller ones and
using the ATTACH command to run queries over several databases at once.
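
A rough sketch of that approach (the file names are made up; it assumes
each piece contains a ToricCY table with the same schema, split by some
range of H11 or of row ids):

    -- Open two partial databases alongside the main connection
    ATTACH DATABASE 'toric_part1.db' AS p1;
    ATTACH DATABASE 'toric_part2.db' AS p2;

    -- Query all pieces in one statement and combine the results
    SELECT NVERTS FROM p1.ToricCY WHERE H11=2
    UNION ALL
    SELECT NVERTS FROM p2.ToricCY WHERE H11=2;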

Hope this helps!
