I have a query which really should be lightning fast (LIMIT 1 from an
index), but which isn't. I've checked the pg_locks view and found no
locks on the table. The database is not under heavy load at the moment,
but the query still consumes a lot of CPU.
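For reference, a minimal sketch of that kind of check ('events' and
'created' are placeholder names, not the real schema):

    -- any ungranted locks on the table?
    SELECT l.pid, l.mode, l.granted
    FROM pg_locks l
    JOIN pg_class c ON c.oid = l.relation
    WHERE c.relname = 'events' AND NOT l.granted;

    -- and where the time actually goes
    EXPLAIN ANALYZE
    SELECT * FROM events ORDER BY created DESC LIMIT 1;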
On Wed, Sep 27, 2006 at 02:17:23PM -0800, Matthew Schumacher wrote:
> Jim C. Nasby wrote:
> >
> > It can cause a race if another process could be performing those same
> > inserts or updates at the same time.
>
> There are inserts and updates running all of the time, but never on
> the same data.
Jim C. Nasby wrote:
>
> It can cause a race if another process could be performing those same
> inserts or updates at the same time.
There are inserts and updates running all of the time, but never on the
same data. I'm not sure how I can get around this since the queries are
coming from my radius server.
On Wed, Sep 27, 2006 at 01:33:09PM -0800, Matthew Schumacher wrote:
> Jim,
>
> Thanks for the help. I went and looked at that example and I don't see
> how it's different than the "INSERT into radutmp_tab" I'm already doing.
> Both raise an exception, the only difference is that I'm not doing
> anything with it.
Jim,
Thanks for the help. I went and looked at that example and I don't see
how it's different than the "INSERT into radutmp_tab" I'm already doing.
Both raise an exception, the only difference is that I'm not doing
anything with it. Perhaps you are talking about the "IF (NOT FOUND)" I
put after the update.
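For reference, the UPDATE/INSERT exception example in the 8.1 manual
that Jim is pointing at looks roughly like this (the db table is the
manual's toy schema, not the radius schema):

    CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

    CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
    $$
    BEGIN
        LOOP
            -- first try to update the key
            UPDATE db SET b = data WHERE a = key;
            IF found THEN
                RETURN;
            END IF;
            -- not there, so try to insert it; a concurrent insert of
            -- the same key raises unique_violation
            BEGIN
                INSERT INTO db (a, b) VALUES (key, data);
                RETURN;
            EXCEPTION WHEN unique_violation THEN
                NULL;  -- do nothing, loop back and retry the UPDATE
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

The difference that matters: the exception handler loops back and
retries the UPDATE instead of discarding the error, which is what
closes the race.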
Periodically taking longer is probably a case of some other process in
the database holding a lock you need, or otherwise bogging the system
down, especially if you're always running acctmessage from the same
connection (because the query plans shouldn't be changing then). I'd
suggest looking at what else is running when it slows down.
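One way to see that, assuming stats_command_string is enabled (these
are the pre-8.2 column names):

    -- everything that is running a query right now
    SELECT procpid, usename, query_start, current_query
    FROM pg_stat_activity
    WHERE current_query <> '<IDLE>'
    ORDER BY query_start;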
In response to Edoardo Ceccarelli <[EMAIL PROTECTED]>:
> Rod Taylor wrote:
> > On Wed, 2006-09-27 at 18:08 +0200, Edoardo Ceccarelli wrote:
> >
> >> I have read that autovacuum cannot check to see pg load before
> >> launching
> >> vacuum but is there any patch about it? that would sort out the
> >> problem in a good and simple way.
On Wed, Sep 27, 2006 at 10:51:26AM -0500, Dave Dutcher wrote:
> To make the planner prefer an index scan over a seq scan, I would first
> check the statistics again, and then you can try setting enable_seqscan to
> false (enable_seqscan is meant more for testing than production) or, you
> could try lowering random_page_cost.
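The testing step is just a session setting; a sketch (the SELECT is a
stand-in for the real query):

    SET enable_seqscan = off;  -- testing only, not for production
    EXPLAIN ANALYZE
    SELECT * FROM bigtable WHERE id = 42;  -- placeholder query
    SET enable_seqscan = on;   -- put it back afterwards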
I found a way to survive yet some more weeks :-)
One of the queries we've had the most problems with today is in
principle something like:
select A.*, sum(B.*) from A join B on ... where A.created > x and ...
group by A.* order by A.created desc limit 32
There are on average two rows in B for every row in A.
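One rewrite that often helps with this shape is to apply the ORDER
BY/LIMIT first and aggregate only the 32 survivors; a sketch with
made-up names (id, a_id, amount):

    SELECT a.*,
           (SELECT sum(b.amount)
            FROM B b
            WHERE b.a_id = a.id) AS total
    FROM A a
    WHERE a.created > '2006-09-01'
    ORDER BY a.created DESC
    LIMIT 32;

With an index on A.created the outer scan can stop after 32 rows, so
the sum runs 32 times instead of once per matching row.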
List,
I posted a little about this a while back to the general list, but never
really got anywhere with it, so I'll try again, this time with a little
more detail, and hopefully someone can send me in the right direction.
Here is the problem: I have a procedure that is called 100k times a day.
Csaba Nagy wrote:
> On Wed, 2006-09-27 at 18:08, Edoardo Ceccarelli wrote:
> > How can I configure the vacuum to run after the daily batch insert/update?
> Check out this:
> http://www.postgresql.org/docs/8.1/static/catalog-pg-autovacuum.html
> By inserting the right row you can disable autovacuum for your big
> tables and then vacuum them manually after the batch job.
On Wed, 2006-09-27 at 18:08 +0200, Edoardo Ceccarelli wrote:
>
> I have read that autovacuum cannot check to see pg load before
> launching
> vacuum but is there any patch about it? that would sort out the
> problem
> in a good and simple way.
In some cases the solution to high load is to vacuum more often, not
less.
[Edoardo Ceccarelli - Wed at 06:49:23PM +0200]
> ...another thing is, how could autovacuum check for machine load, this
> is something I cannot imagine right now...
One solution I made for our application is to check the
pg_stat_activity view. It requires some config to get the stats
available.
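Presumably something along these lines; it needs stats_command_string =
on (the config mentioned above), and the threshold is up to you:

    -- how busy are we right now?  skip the vacuum if this is high
    SELECT count(*) AS active_queries
    FROM pg_stat_activity
    WHERE current_query NOT IN ('<IDLE>', '<IDLE> in transaction');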
Rod Taylor wrote:
> On Wed, 2006-09-27 at 18:08 +0200, Edoardo Ceccarelli wrote:
> > I have read that autovacuum cannot check to see pg load before
> > launching vacuum but is there any patch about it? that would sort out
> > the problem in a good and simple way.
> In some cases the solution to high load is to vacuum more often, not
> less.
Bill Moran wrote:
> In response to Edoardo Ceccarelli <[EMAIL PROTECTED]>:
> > I have read that autovacuum cannot check to see pg load before
> > launching vacuum but is there any patch about it? that would sort out
> > the problem in a good and simple way.
> > Otherwise, which kind of set of parameters should I use?
In response to Edoardo Ceccarelli <[EMAIL PROTECTED]>:
> Hello,
>
> we are running a 7.3 postgres db with only one big table (avg. 500,000
> records) and 7 indexes for a search engine.
> we have 2 of these databases and we can switch from one to another.
> Last week we decided to give 8.1 a try on one of them and everything
> went fine; the db is faster (about 2 or 3 times).
On Wed, 2006-09-27 at 18:08, Edoardo Ceccarelli wrote:
> How can I configure the vacuum to run after the daily batch insert/update?
Check out this:
http://www.postgresql.org/docs/8.1/static/catalog-pg-autovacuum.html
By inserting the right row you can disable autovacuum for your big
tables and then vacuum them manually after the batch job.
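Concretely, something like this on 8.1 ('bigtable' is a placeholder;
-1 means "use the default" for the thresholds; superuser required):

    INSERT INTO pg_autovacuum
        (vacrelid, enabled,
         vac_base_thresh, vac_scale_factor,
         anl_base_thresh, anl_scale_factor,
         vac_cost_delay, vac_cost_limit)
    VALUES
        ('bigtable'::regclass, false, -1, -1, -1, -1, -1, -1);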
[Edoardo Ceccarelli - Wed at 06:08:30PM +0200]
> We also activated the autovacuum feature to give it a try and that's
> where our problems started.
(...)
> How can I configure the vacuum to run after the daily batch insert/update?
I think you shouldn't use autovacuum in your case.
We haven't dared enable it on our systems yet.
On Wednesday 27 September 2006 09:08, Edoardo Ceccarelli <[EMAIL PROTECTED]>
wrote:
>
> How can I configure the vacuum to run after the daily batch
> insert/update?
>
If you really only want vacuum to run at that time, you should disable
autovacuum and continue running the vacuum manually. You might also
want to schedule it from cron right after the batch job.
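The scheduled command itself is a one-liner, run right after the
nightly batch (table name hypothetical), e.g. from cron via psql:

    VACUUM ANALYZE bigtable;

Doing ANALYZE at the same time keeps the planner statistics in step
with the freshly loaded data.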
Hello,
we are running a 7.3 postgres db with only one big table (avg. 500,000
records) and 7 indexes for a search engine.
we have 2 of these databases and we can switch from one to another.
Last week we decided to give 8.1 a try on one of them and everything
went fine; the db is faster (about 2 or 3 times).
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]] On Behalf Of Tim Truman
>
> Hi,
>
> I have the following query which has been running very slowly
> and after a
> lot of testing/trial and error I found an execution plan that
> ran the query
> in a fraction of the time.
[Scott Marlowe - Wed at 10:31:35AM -0500]
> And remember, you can always change any of those settings in session for
> just this one query to force the planner to make the right decision.
sure ... I could identify the most problematic queries, and hack up the
software application to modify the configuration for just those queries.
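In-session it is only a couple of statements, and with SET LOCAL the
override expires at COMMIT (the SELECT is a placeholder for the problem
query):

    BEGIN;
    SET LOCAL enable_mergejoin = off;  -- this transaction only
    EXPLAIN ANALYZE
    SELECT * FROM bigtable WHERE id = 42;  -- placeholder query
    COMMIT;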
On Wed, 2006-09-27 at 10:26, Tobias Brox wrote:
> [Scott Marlowe - Wed at 10:19:24AM -0500]
> > So, by decreasing them, you should move away from nested loops then,
> > right? Has that not worked for some reason?
>
> I want to move to nested loops, they are empirically faster in many of
> our queries, and that makes sense since we've got quite big tables
[Scott Marlowe - Wed at 10:19:24AM -0500]
> So, by decreasing them, you should move away from nested loops then,
> right? Has that not worked for some reason?
I want to move to nested loops, they are empirically faster in many of
our queries, and that makes sense since we've got quite big tables
On Wed, 2006-09-27 at 17:05 +0200, Tobias Brox wrote:
> [Scott Marlowe - Wed at 09:58:30AM -0500]
> > Have you tried changing the cpu_* cost options to see how they affect
> > merge versus nested loop?
>
> As said in the original post, increasing any of them shifts the planner
> towards nested loops instead of merge joins.
[Scott Marlowe - Wed at 09:58:30AM -0500]
> Have you tried changing the cpu_* cost options to see how they affect
> merge versus nested loop?
As said in the original post, increasing any of them shifts the planner
towards nested loops instead of merge joins. I didn't check which one of
the cost constants was the decisive one.
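For anyone repeating the experiment, these are the knobs in question
(8.1 defaults in the comments; the raised values are only illustrative):

    SET cpu_tuple_cost = 0.05;          -- default 0.01
    SET cpu_index_tuple_cost = 0.005;   -- default 0.001
    SET cpu_operator_cost = 0.01;       -- default 0.0025
    -- per the observation above, raising these shifts the plans
    -- toward nested loops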
On Wed, 2006-09-27 at 11:48 +0200, Tobias Brox wrote:
> [Tom Lane - Tue at 06:09:56PM -0400]
> > If your tables are small enough to fit (mostly) in memory, then the
> > planner tends to overestimate the cost of a nestloop because it fails to
> > account for caching effects across multiple scans of the inner table.
Tim Truman wrote:
> Query:
> SELECT count(*) AS count
> FROM (
>     SELECT *
>     FROM transaction t, merchant m
>     WHERE t.merchant_id = m.id
>       AND m.id = 198
>       AND t.transaction_date >= '20050101'
>       AND t.transaction_date <= '20060925'
> ) AS sub;
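Since m.id is pinned to 198, the join to merchant only verifies that
the merchant exists; the planner mostly needs a fast path into
transaction. If no matching composite index exists yet, one thing to
try (index name hypothetical):

    CREATE INDEX transaction_merch_date_idx
        ON transaction (merchant_id, transaction_date);

    EXPLAIN ANALYZE
    SELECT count(*)
    FROM transaction
    WHERE merchant_id = 198
      AND transaction_date BETWEEN '20050101' AND '20060925';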
[Tom Lane - Tue at 06:09:56PM -0400]
> If your tables are small enough to fit (mostly) in memory, then the
> planner tends to overestimate the cost of a nestloop because it fails to
> account for caching effects across multiple scans of the inner table.
> This is addressed in 8.2, but in earlier versions about all you can do
> is compensate via the cost settings.
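On pre-8.2 versions a common way to compensate for Tom's point is to
tell the planner that the data is mostly cached; the exact values are
illustrative only:

    SET effective_cache_size = 100000;  -- in 8 kB pages, roughly 800 MB
    SET random_page_cost = 2;           -- default is 4; lower it when
                                        -- the working set fits in RAM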