On Thu, Sep 13, 2012 at 9:50 AM, Maya Opperman <[email protected]> wrote:
>> Get the statistics on the various runs of the sub-procedure - reads,
>> writes, fetches, and marks.
>
> No writes. 47 000 indexed reads.

Let's try that one again. I'd like both reads and fetches, with statistics
for a run of the subquery that's fast and one that's slow. Did you mean
that computing one account balance involved forty-seven thousand indexed
reads? That's some account!

>> There may be other statistics available now describing your cache hit
>> rate. You should also check the depth of indexes (gstat).
>
> I'm not 100% sure what you mean here...

Gstat will tell you how deep your indexes are. Firebird indexes are very
broad-based triangles, with a single page at the top, lots of pages on the
next level down, and enormous numbers of pages on the level below that.
The taller (or deeper) the triangle, the slower it is. The height and
width are determined by page size and key size. An index with four or more
levels is an indication that you should increase the page size for your
database. Run gstat and look for the indexes used by this sub-procedure.

I'm sure you mentioned it somewhere, but what is the database page size?
And the cache size?

>> Which type of garbage collection are you using - separate thread,
>> cooperative, or blended?
>
> Using 2.5 Classic, so after googling, I take it it must be cooperative.
> (By the way, I haven't changed any config settings when installing
> either.)

Just looking for something that might push a lot of pages out of cache.
Probably not garbage collection.

>> How many tables and indexes are you touching in computing an account
>> balance?
>
> Just one.
> Er, no, actually two.
> Having another look now at the main procedure, it's actually calculating
> three balances using table A, and then one using table B.
> Except the order is:
> Table A's balance
> Table B's balance
> Table A's other balance
> Table A's third balance with different criteria.
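[For anyone following along: one way to capture the reads, writes, fetches,
and marks Ann asks about is the monitoring tables available since
Firebird 2.1. This is a minimal sketch, not from the original thread; run
it from the same connection immediately before and after executing the
sub-procedure, and compare the deltas between the two snapshots.]

```sql
-- Page I/O counters for the current attachment (Firebird 2.1+ MON$ tables).
-- The difference between a "before" and "after" snapshot gives the
-- reads/writes/fetches/marks caused by the sub-procedure run.
SELECT a.MON$ATTACHMENT_ID,
       io.MON$PAGE_READS,
       io.MON$PAGE_WRITES,
       io.MON$PAGE_FETCHES,
       io.MON$PAGE_MARKS
  FROM MON$ATTACHMENTS a
  JOIN MON$IO_STATS io ON io.MON$STAT_ID = a.MON$STAT_ID
 WHERE a.MON$ATTACHMENT_ID = CURRENT_CONNECTION;
```

[Index depth comes from `gstat -i <database>`: each index is listed with a
`Depth:` figure, and a depth of four or more supports Ann's advice to
increase the page size.]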
>> I changed the order to work with the three balances from table A first,
>> then table B, and voila - the whole thing went from well over a minute
>> down to 12 seconds ;-) I am guessing what's happening is that there is
>> now a lot less swapping out of data between buffer and disk.

That's good.

>> I'm not sure if that is a sign something is wrong with my setup, or
>> whether I just need to be careful of that kind of thing when writing
>> stored procedures?

Preserving locality of reference is always a good thing. I would also try
a larger cache size and possibly a larger page size, but would first look
for differences in the number of reads on fast and slow runs of the
sub-procedure, and at the index depth.

Good luck,

Ann
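[The reordering Maya describes can be sketched in PSQL. The table, column,
and criteria names below are invented for illustration - the original post
never shows the schema. The point is only the ordering: the three table A
aggregates run back-to-back, so the data and index pages fetched for the
first balance are still in the page cache for the next two, and table B is
touched once at the end.]

```sql
-- Illustrative sketch only: names and WHERE criteria are hypothetical.
-- Grouping the three TABLE_A aggregates preserves locality of reference.
EXECUTE BLOCK
RETURNS (BAL_A1 NUMERIC(18,2), BAL_A2 NUMERIC(18,2),
         BAL_A3 NUMERIC(18,2), BAL_B  NUMERIC(18,2))
AS
DECLARE VARIABLE ACC INTEGER = 1;  -- hypothetical account id
BEGIN
  SELECT SUM(AMOUNT) FROM TABLE_A
   WHERE ACCOUNT_ID = :ACC                  INTO :BAL_A1;  -- table A, pass 1
  SELECT SUM(AMOUNT) FROM TABLE_A
   WHERE ACCOUNT_ID = :ACC AND POSTED = 1   INTO :BAL_A2;  -- pass 2: pages still cached
  SELECT SUM(AMOUNT) FROM TABLE_A
   WHERE ACCOUNT_ID = :ACC AND CLEARED = 0  INTO :BAL_A3;  -- pass 3: same
  SELECT SUM(AMOUNT) FROM TABLE_B
   WHERE ACCOUNT_ID = :ACC                  INTO :BAL_B;   -- table B last, touched once
  SUSPEND;
END
```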
