Hello!
Sorry if this has been discussed before, it is just hard to find in the
archives using the words "or" or "in" :-o
I use postgres-8.0 beta4 for Windows.
I broke my problem down to a very simple table with two columns,
"primary_key" and "secondary_key". You will find the CREATEs and INSERTs below.
If you're running Linux with a 2.6.x kernel, you can try playing with the:
/proc/sys/vm/swappiness
setting.
My understanding is that:
echo "0" > /proc/sys/vm/swappiness
will try to keep all in-use application memory from being swapped out
when other processes query the disk a lot.
Although, since [...]
John A Meinel wrote:
Martin Foster wrote:
Simon Riggs wrote:
On Tue, 2004-11-02 at 23:52, Martin Foster wrote:
[...]
I've seen this behavior before when restarting the web server during
heavy loads. Apache goes from zero connections to a solid 120,
causing PostgreSQL to spawn that many children [...]
Martin Foster wrote:
Simon Riggs wrote:
On Tue, 2004-11-02 at 23:52, Martin Foster wrote:
[...]
I've seen this behavior before when restarting the web server during
heavy loads. Apache goes from zero connections to a solid 120,
causing PostgreSQL to spawn that many children in a short order of [...]
Here is a fresh run with 'explain analyze' run before and after the
VACUUM statement:
-- begin
% dropdb pkk
DROP DATABASE
% createdb pkk
CREATE DATABASE
% psql pkk < pkk_db.sql
ERROR: function pkk_offer_has_pending_purch(integer) does not exist
ERROR: function pkk_offer_has_pending_purch2(integer) does not exist [...]
Simon Riggs wrote:
On Tue, 2004-11-02 at 23:52, Martin Foster wrote:
Is there a way to restrict how much load a PostgreSQL server can take
before dropping queries in order to safeguard the server? I was
looking at the login.conf (5) man page and while it allows me to limit
by processor time [...]
Thanks - this is what I was afraid of, but I may have to do this.
Is there a good way to monitor what's in the cache?
On Wed, Nov 03, 2004 at 01:19:43PM -0700, [EMAIL PROTECTED] wrote:
> The caching appears to disappear overnight. The environment is not in
> production yet so I'm the only one on it.
Are you vacuuming at night? It grovels through the entire database,
and may bust your query out of the cache. [...]
<[EMAIL PROTECTED]> writes:
> The caching appears to disappear overnight.
You've probably got cron jobs that run late at night and blow out your
kernel disk cache by accessing a whole lot of non-Postgres stuff.
(A nightly disk backup is one obvious candidate.) The most likely
solution is to run [...]
The caching appears to disappear overnight. The environment is not in
production yet so I'm the only one on it.
Is there a time limit on the length of time in cache? I believe there is
sufficient RAM, but maybe I need to look again.
patrick ~ <[EMAIL PROTECTED]> writes:
> that if I 'createdb' and populate it with the "sanitized" data the
> query in question is quite fast; 618 rows returned in 864.522 ms.
> This was puzzling. Next I noticed that after a VACUUM the very same
> query would slow down to a crawl; 618 rows returned [...]
<[EMAIL PROTECTED]> writes:
> I am working with some pretty convoluted queries that work very slowly the
> first time they're called but perform fine on the second call. I am fairly
> certain that these differences are due to the caching. Can someone point me
> in a direction that would allow me to [...]
uh, you can always load a table in cache by doing a seq scan on it...
like select count(1) from table or something... this doesn't work for
indexes of course, but you can always look in the system catalogs, find
the filename for the index, then just open() it from an external program.
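For instance, something like this should do it (table and index names
here are hypothetical):

-- Warm the buffer/OS cache with a full sequential scan:
SELECT count(1) FROM mytable;

-- For an index, find its on-disk file via the catalogs; the file lives
-- under $PGDATA/base/<database oid>/<relfilenode>, and an external
-- program can simply read it to pull it into the OS cache:
SELECT relfilenode FROM pg_class WHERE relname = 'mytable_pkey';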
On Wed, Nov 03, 2004 at 12:12:43PM -0700, [EMAIL PROTECTED] wrote:
> That's correct - I'd like to be able to keep particular indexes in RAM
> available all the time
If these are queries that run frequently, then the relevant cache
will probably remain populated[1]. If they _don't_ run frequently, [...]
On Tue, 2004-11-02 at 23:52, Martin Foster wrote:
> Is there a way to restrict how much load a PostgreSQL server can take
> before dropping queries in order to safeguard the server? I was
> looking at the login.conf (5) man page and while it allows me to limit
> by processor time this seems [...]
Given that the plan doesn't change after an analyze, my guess would be that the first query is hitting cached data; then
you vacuum, and that chews through all the cache with its own data, pushing the good data out of the cache so it has to
be re-fetched from disk.
If you run the select a 2nd time [...]
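The pattern is easy to reproduce; a minimal sketch (table and query are
hypothetical, and the comments restate the explanation above):

EXPLAIN ANALYZE SELECT * FROM mytable WHERE id = 1;  -- first run: fast, data is cached
VACUUM;                                              -- grovels through the whole DB, evicting cached pages
EXPLAIN ANALYZE SELECT * FROM mytable WHERE id = 1;  -- now slow: pages re-fetched from disk
EXPLAIN ANALYZE SELECT * FROM mytable WHERE id = 1;  -- fast again: cache repopulated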
That's correct - I'd like to be able to keep particular indexes in RAM
available all the time.
The best way to get all the stuff needed by a query into RAM is to run
the query. Is it more that you want to 'pin' the data in RAM so it
doesn't get overwritten by other queries?
Greetings pgsql-performance :)
Yesterday I posted to the pgsql-sql list about an issue with VACUUM
while trying to track down an issue with the performance of a SQL SELECT
statement involving a stored function. It was suggested that I bring
the discussion over to -performance.
Instead of reposting [...]
The best way to get all the stuff needed by a query into
RAM is to run the query. Is it more that you want to 'pin' the data in RAM
so it doesn't get overwritten by other
queries?
I am working with some pretty convoluted queries that work
very slowly the first time they’re called but perform fine on the second
call. I am fairly certain that these differences are due to the caching. Can
someone point me in a direction that would allow me to pre-cache the critical
indexes [...]
On Tue, Nov 02, 2004 at 11:52:12PM +, Martin Foster wrote:
> Is there a way to restrict how much load a PostgreSQL server can take
> before dropping queries in order to safeguard the server? I was [...]
Well, you could limit the number of concurrent connections, and set
the query timeout to a [...]
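A sketch of both knobs (the values are illustrative only):

# in postgresql.conf: cap the number of concurrent backends
max_connections = 120

-- per-session query timeout, in milliseconds (0 disables it)
SET statement_timeout = 30000;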