On Friday, February 22, 2013, Carlo Stonebanks wrote:

> Hi Jeff, thanks for the reply.
>
> <<
>
> What is going on during the interregnum?  Whatever it is, it seems to be
> driving the log_2013_01_session_idx index out of the cache, but not the
> log_2013_01 table.  (Or perhaps the table visit is getting the benefit of
> effective_io_concurrency?)
> …
>
> Rebuilding the index might help, as it would put all the leaf pages
> holding values for session_id=27 adjacent to each other, so they would read
> from disk faster.  But with a name like "session_id", I don't know how long
> such clustering would last.
>
> >>
>
>
> Technically, nothing should be happening. We used to keep one massive
> audit log, and it was impossible to manage due to its size. We then changed
> to a strategy where every month a new audit log would be spawned, and since
> log_2013_01 represents January, the log should be closed and nothing should
> have changed (it is technically possible that a long-running process would
> spill over into February, but not by this much). So, assuming that it’s
> stable, it should be a very good candidate for reindexing, no?
>

Yes, assuming the problem is reading the index data from disk, that sounds
like a good candidate for reindexing (and maybe clustering as well).
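
For what it's worth, a minimal sketch of both operations, using the table
and index names already in this thread (note that CLUSTER takes an ACCESS
EXCLUSIVE lock and rewrites the table, so it is best done in a quiet window):

-- Rebuild just the index, repacking its leaf pages compactly.
REINDEX INDEX log_2013_01_session_idx;

-- Or reorder the table itself to match the index, so heap rows for a
-- given session_id also end up adjacent on disk.
CLUSTER log_2013_01 USING log_2013_01_session_idx;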

>
>
> Our effective_io_concurrency is 1, and last I heard the PG host was a
> Linux 4-drive RAID10, so I don’t know if there is any benefit to raising
> this number – and if there were any benefit, it would go to the Bitmap
> Scan, while the problem is in building the bitmap data before that step.
>
>
>
> << the bitmap itself doesn't get cached.  But the data needed to
> construct the bitmap does get cached.  It gets cached by the generic
> caching methods of PG and the OS, not through something specific to
> bitmaps. >>
>
>
>
> This has always been a problem for me. I spend hours trying different
> strategies and think I’ve solved the problem, when in fact it seems like a
> cache has spun up, and then something else expires it and the problem is
> back. Is there a way around this problem? Can I force the expiration of a
> cache?
>
You can clear the PG cache by restarting the instance.  To clear the OS
cache as well, you can do this on Linux:

<stop postgres>
sync
# the redirection must run as root, hence sh -c rather than a bare sudo echo
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
<start postgres>
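
If you want to confirm what is actually sitting in shared_buffers before and
after (this won't show the OS cache), the pg_buffercache extension can report
it; a quick sketch, assuming the extension is available on your server:

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Count how many shared_buffers pages the table and index each occupy.
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
  AND c.relname IN ('log_2013_01', 'log_2013_01_session_idx')
GROUP BY c.relname;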


But I think it would be better just not to execute the same query
repeatedly.  For example, each time you execute it during testing, pick a
different session_id rather than using 27 repeatedly.  (It might also be a
good idea to change up the hard-coded in-list values you have, but with the
plans you are currently seeing that isn't important, as those are being used
in a filter, not a look-up.)
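
One way to script that is with a prepared statement, so only the parameter
changes between runs; a rough sketch (logscan is just a hypothetical name,
and the SELECT is a stand-in for your real query):

-- Stand-in for the real query; only the session_id varies per run.
PREPARE logscan (int) AS
  SELECT count(*) FROM log_2013_01 WHERE session_id = $1;

EXECUTE logscan (27);
EXECUTE logscan (42);  -- use a fresh, previously untested id each run so
                       -- earlier runs haven't already warmed the cache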

Cheers,

Jeff
