Make sure H11 has an index on it. Also ensure that ToricCY doesn't have blobs attached to it: throw the blobs into a different table, use a 1:1 relationship to link them, and then only pull the blobs when needed.
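Something like this, as a rough sketch -- H11 and ToricCY come from your query, but "id" and "PDATA" are made-up names for your key and blob column, so substitute your own:

    -- Index the filter column so WHERE H11=2 can seek instead of scanning:
    CREATE INDEX idx_toriccy_h11 ON ToricCY(H11);

    -- Blobs get their own table, one row per ToricCY row (1:1):
    CREATE TABLE ToricCY_blobs (
        id    INTEGER PRIMARY KEY,  -- matches ToricCY's rowid
        PDATA BLOB                  -- stand-in name for your blob column
    );

    -- Then fetch a blob only when you actually want it:
    SELECT PDATA FROM ToricCY_blobs WHERE id = ?;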
SQLite apparently will load an entire row of data out of the database, even if the query doesn't need the additional data. So if you have a huge blob, it'll read in that entire blob and then toss it to /dev/null.

You might also want to run an ANALYZE on the database. That MIGHT take a while (at 116 gig, yeah, I'd say so -- make sure you have PLENTY of drive space where the DB lives), so if it's being used in a live environment, you might need to take your package offline for maintenance. At this point, throwing an index on the database WILL take some time as well.

Moving the blobs from one table to another is also going to take a long while. Not only will you need to copy the data from one table to another (so you have two copies of the data for a while), but SQLite doesn't support an ALTER TABLE DROP COLUMN type of statement, so you'll have to reconstruct the table (or use an IDE that'll do that work for you). That means copying your required data from the old table into a new table, then deleting the old table. There's a rough sketch of that rebuild at the bottom of this message, below the quote.

On Fri, Oct 24, 2014 at 7:09 PM, Ross Altman <knowbody...@gmail.com> wrote:

> Hi guys,
>
> I'm currently working with a pretty gigantic database (116 Gb at the
> moment, and growing). Performing a simple SELECT routine with one filter
> takes between 7 and 15 minutes, which is starting to become a problem. The
> command I'm using is the following:
>
> SELECT NVERTS FROM ToricCY WHERE H11=2;
>
> Pretty basic. I just want the NVERTS column of the table ToricCY where
> another column labeled H11 is 2, nothing fancy. Because of the huge size of
> the database, I would expect it to take a while, but I was wondering if you
> guys have any recommendations on how to speed it up (if possible).
>
> Also, because this problem will only get worse as the database grows, I'm
> looking for alternative approaches to storing this data. Does anyone have
> experience working with databases this big?
>
> Thanks!
>
> Best,
> Ross
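P.S. Here's the rebuild sketch promised above. Same caveat as before: other than H11 and NVERTS, the names here (id, PDATA, ToricCY_new) are made up, so adjust them to your real schema -- and take a backup before trying any of this.

    BEGIN;

    -- New copy of the main table, minus the blob column:
    CREATE TABLE ToricCY_new (
        id     INTEGER PRIMARY KEY,
        H11    INTEGER,
        NVERTS INTEGER
        -- ...plus whatever other non-blob columns you have
    );

    INSERT INTO ToricCY_new (id, H11, NVERTS)
        SELECT rowid, H11, NVERTS FROM ToricCY;

    -- Park the blobs in the 1:1 side table from the earlier sketch:
    INSERT INTO ToricCY_blobs (id, PDATA)
        SELECT rowid, PDATA FROM ToricCY;

    -- Swap the new table in:
    DROP TABLE ToricCY;
    ALTER TABLE ToricCY_new RENAME TO ToricCY;

    -- Any index on the old table died with it, so recreate it:
    CREATE INDEX idx_toriccy_h11 ON ToricCY(H11);

    COMMIT;

    -- Refresh the query planner's statistics once the dust settles:
    ANALYZE;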