>
> On 10/29/15, Jason H <jhihn at gmx.com> wrote:
> > If I were to try to work around...
> 
> Before we go any further, have you actually measured a performance
> problem?  Or are you optimizing without prior knowledge of where your
> application is spending time?

Currently, I have a SQLite database of around 10 GB, and a single query against 
it takes 25 minutes with no other activity (the database is effectively 
read-only; it is never written while being queried). I've created the best 
indexes I can. To get this lower, I'm looking at Hadoop, or using this as an 
excuse to re-architect something in SQLite. Hadoop has many non-ideal 
attributes. I want to stick with SQLite because that's what we know, and my 
research indicates it's also still the fastest at reading. 
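
(For context, here is roughly how I've been checking whether the planner 
actually uses those indexes, via EXPLAIN QUERY PLAN. This is a minimal 
sketch; the table and column names are placeholders, not our real schema:

    EXPLAIN QUERY PLAN
    SELECT o.id, c.name
      FROM orders AS o
      JOIN customers AS c ON c.id = o.customer_id
     WHERE o.created_at >= '2015-01-01';

    -- Output lines saying "SCAN TABLE ..." indicate a full table scan;
    -- "SEARCH TABLE ... USING INDEX ..." means an index is being used.
    -- Running ANALYZE first gives the planner up-to-date statistics.
)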

For this specific database, we join through about 12 tables on an average query 
(519 tables total, 319 of them code tables). Most tables have over two dozen 
columns, some over 256, with a maximum of 384; the longest row is 16k in total.
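
Given rows that wide, one idea I'm weighing is covering indexes over just the 
join keys plus the few columns a query actually selects, so SQLite can answer 
from the index without ever reading the 16k rows. A hypothetical sketch, again 
with made-up names rather than our real schema:

    -- Covers the join key plus the selected columns; the query plan can
    -- then report "USING COVERING INDEX" and skip the wide rows entirely.
    CREATE INDEX idx_orders_customer_status
        ON orders(customer_id, status, total);
    ANALYZE;  -- refresh statistics so the planner considers the new index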


I'm open to ideas, but I was planning to use this as an excuse to build 
something of a general tool.
