I have a DB table with 25M rows of ~3K each (i.e. ~75GB) which, together with
the multiple indexes I use (an additional 15-20GB), will not fit entirely in
memory (64GB on the machine). A typical query locates ~300 rows through an
index, optionally filters them down to ~50-300 rows using other indexes, and
finally fetches the matching rows. Response times vary between 20ms on a warm
DB and 20 secs on a cold DB. I have two related questions:

1. At any given time how can I check what portion (%) of specific tables and
indexes is cached in memory?
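If this is PostgreSQL (an assumption on my part; the DBMS isn't named above), the contrib extension pg_buffercache exposes the contents of shared_buffers and can answer this directly. A sketch, using hypothetical table/index names and assuming the default 8KB block size:

```sql
-- Assumes PostgreSQL with the contrib extension pg_buffercache installed.
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Percentage of each relation currently held in shared_buffers
-- ('big_table' and 'big_table_idx' are hypothetical names):
SELECT c.relname,
       count(*) * 8192                            AS cached_bytes,
       round(100.0 * count(*) * 8192
                   / pg_relation_size(c.oid), 1)  AS pct_cached
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
  AND c.relname IN ('big_table', 'big_table_idx')
GROUP BY c.relname, c.oid;
```

Note that this only shows PostgreSQL's own buffer cache; pages held in the OS page cache (which on a 64GB box may be most of them) are not visible here.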

2. What is the best way to warm up the cache before opening the DB to
queries? E.g. "select *" forces a sequential scan (~15 minutes on a cold DB),
but response times following it are still poor. Is there a built-in way to
do this instead of via queries?
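Again assuming PostgreSQL (not stated above): recent versions ship a contrib extension, pg_prewarm (PostgreSQL 9.4+), built for exactly this. Unlike a "select *" sequential scan, it loads the relation's blocks directly into shared_buffers, and it works on indexes too. A sketch with hypothetical names:

```sql
-- Assumes PostgreSQL 9.4+ with the contrib extension pg_prewarm installed.
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Load the heap and the hot index into shared_buffers
-- ('big_table' and 'big_table_idx' are hypothetical names);
-- each call returns the number of blocks read:
SELECT pg_prewarm('big_table');
SELECT pg_prewarm('big_table_idx');
```

One reason a plain "select *" warms the cache poorly is that a large sequential scan deliberately recycles a small ring of buffers rather than evicting the whole cache, and it never touches the indexes at all.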

Thanks, feel free to also reply by email (i...@shauldar.com)

-- Shaul
