Hi Tom,
It would be really hard for us to change the underlying tables and the executed query. Is there any other way for us to avoid the really bad query plan (e.g. a hint for the planner)?
Regards,
Robert Hell
-----Original Message-----
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent:
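PostgreSQL has no hint syntax, but planner settings can be changed per transaction to steer it away from a bad plan without touching the tables or the query text. A minimal sketch, assuming (hypothetically) that the bad plan involves a nested-loop join:

```sql
-- PostgreSQL has no optimizer hints, but planner GUCs can be set per
-- transaction with SET LOCAL; they revert automatically at COMMIT/ROLLBACK.
BEGIN;
SET LOCAL enable_nestloop = off;    -- assumption: the bad plan picks a nested loop
SET LOCAL join_collapse_limit = 1;  -- keep the join order exactly as written
-- ... run the problematic query here ...
COMMIT;
```

Which enable_* setting to disable depends on what EXPLAIN shows for the bad plan; this is a workaround, not a substitute for fixing statistics or cost settings.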
Hi.
I have a transaction that has been running on the database for around 20 hours and it still isn't done. During the last few hours it has reached the point where it really hurts the performance of other queries.
Given the pg_stat_activity output there seem to be no interfering locks, but the overall CPU usage of all
On Thu, Sep 24, 2009 at 9:27 AM, jes...@krogh.cc wrote:
Hi.
I have a transaction that has been running on the database for around 20 hours and it still isn't done. During the last few hours it has reached the point where it really hurts the performance of other queries.
Given the pg_stat_activity output there seems
On Thu, Sep 24, 2009 at 2:27 AM, jes...@krogh.cc wrote:
Hi.
I have a transaction that has been running on the database for around 20 hours and it still isn't done. During the last few hours it has reached the point where it really hurts the performance of other queries.
What is your transaction doing during this
Euler Taveira de Oliveira wrote:
solAris23 wrote:
I am trying to index a field in my database of about 16K rows, but I'm getting this error.
Why do you want to index such a big field? BTW, it'll be worthless.
Index row requires 9324 bytes, maximum size is 8191
That
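A common workaround for the "index row requires ... maximum size is 8191" error is to index a fixed-size hash of the value instead of the value itself; equality lookups then match on the hash. A sketch with hypothetical names (big_table, big_text):

```sql
-- Hypothetical schema: big_table.big_text holds values too large for a
-- btree index row. Index an md5 hash of the value instead.
CREATE INDEX big_text_md5_idx ON big_table (md5(big_text));

-- Queries must use the same expression for the index to be considered;
-- re-checking the raw value guards against hash collisions.
SELECT *
FROM big_table
WHERE md5(big_text) = md5('needle')
  AND big_text = 'needle';
```

This supports only equality lookups, not range scans or pattern matches on the original value.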
Hello postgres wizards,
We recently upgraded from 8.1.5 to 8.4.
We have a query (slow_query.sql) which took about 9s on 8.1.5.
On 8.4, the same query takes 17.7 minutes.
The code which generated this query is written to support the calculation of arbitrary arithmetic expressions across
On Wed, Sep 23, 2009 at 10:35 PM, Tom Lane t...@sss.pgh.pa.us wrote:
One thing that is hobbling the performance on 8.4 is that you have work_mem set to only 1MB.
Other things you might try include increasing join_collapse_limit to 12 or so, and reducing random_page_cost.
Another thing to
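The settings suggested above can be tried at the session level before touching postgresql.conf. A sketch; the values are starting points for experimentation, not recommendations for every workload:

```sql
-- Session-level experiment with the three settings mentioned above.
SET work_mem = '32MB';          -- up from the 1MB that was hobbling 8.4 here
SET join_collapse_limit = 12;   -- let the planner reorder larger join trees
SET random_page_cost = 2.0;     -- default is 4.0; lower favors index scans
-- Then re-run EXPLAIN ANALYZE on the slow query to compare plans.
```

Once a combination helps, it can be made permanent in postgresql.conf or per database/role with ALTER DATABASE ... SET / ALTER ROLE ... SET.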
Hi
Today we observed a very high load on the server. I am pasting the top output below.
TOP
top - 12:45:23 up 79 days, 14:42, 1 user, load average: 45.84, 33.13, 25.84
Tasks: 394 total, 48 running, 346 sleeping, 0 stopped, 0 zombie
Cpu(s): 49.2%us, 0.8%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.1%si,
For 'idle in transaction' issues, you have to fix your code. I faced this issue a couple of months back. How good is your exception handling? Are you rolling back/committing your transactions when exceptions are thrown during the course of DB operations?
Honestly, I wouldn't go for these scripts
Shiva Raman wrote:
Hi
Today we observed a very high load on the server. I am pasting the top output below.
TOP
top - 12:45:23 up 79 days, 14:42, 1 user, load average: 45.84, 33.13, 25.84
Tasks: 394 total, 48 running, 346 sleeping, 0 stopped, 0 zombie
Cpu(s): 49.2%us, 0.8%sy, 0.0%ni, 0.0%id,
Bill Kirtley b...@actbluetech.com writes:
For what it's worth:
select xmin,* from pg_index where indexrelid =
'users_key_index'::regclass;
xmin | indexrelid | indrelid | indnatts | indisunique | indisprimary
| indisclustered | indisvalid | indcheckxmin | indisready | indkey |
Andy Colson wrote:
Shiva Raman wrote:
Hi
Today we observed a very high load on the server. I am pasting the top output below.
TOP
top - 12:45:23 up 79 days, 14:42, 1 user, load average: 45.84, 33.13, 25.84
Tasks: 394 total, 48 running, 346 sleeping, 0 stopped, 0 zombie
Cpu(s): 49.2%us, 0.8%sy,
Bill Kirtley b...@actbluetech.com writes:
select xmin,* from pg_index where indexrelid =
'index_users_on_email'::regclass;
xmin | indexrelid | indrelid | indnatts | indisunique |
indisprimary | indisclustered | indisvalid | indcheckxmin | indisready
| indkey | indclass | indoption
From: Shiva Raman
Subject: Re: [PERFORM] High CPU load on Postgres Server during Peak times
Andy Colson wrote:
Eww. I think that's bad. A connection that has a transaction open will cause lots of row versions, which use up RAM and make it slower to step through the table (even with
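Such open-transaction connections can be spotted directly in pg_stat_activity. A sketch using the column names of the 8.3/8.4 era discussed in this thread (procpid, current_query):

```sql
-- List connections sitting "idle in transaction", oldest first.
-- On 8.3/8.4 the state appears in current_query; the backend PID
-- column is named procpid on these versions.
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY xact_start;
```

A long-lived xact_start here is the usual culprit: VACUUM cannot reclaim row versions newer than the oldest open transaction.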
Hi,
I have one table, my_test, with one index created on one column.
I have turned off sequential scans.
Now whenever I do a refresh on this table or press F5, it increases the sequential scan count and the sequential tuple read count.
Please help me understand what exactly is
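One likely explanation worth checking: enable_seqscan = off does not forbid sequential scans, it only makes the planner cost them very highly; a query whose predicate cannot use the index will still sequentially scan and bump the counters. A sketch, with a hypothetical column name:

```sql
-- enable_seqscan = off only penalizes sequential scans; the planner
-- still uses one when no index can satisfy the query.
SET enable_seqscan = off;
EXPLAIN SELECT * FROM my_test WHERE some_col = 42;  -- hypothetical column
-- If the plan still shows "Seq Scan on my_test", the predicate cannot
-- use the index, and the seq_scan / seq_tup_read statistics will grow.
```

A SELECT * with no WHERE clause (which a GUI refresh typically issues) always sequentially scans, regardless of this setting.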
Dave Dutcher wrote:
From: Shiva Raman
Subject: Re: [PERFORM] High CPU load on Postgres Server during Peak times
not explicitly committed.
We have started updating the code on this.
You need a COMMIT for every BEGIN. If you just run a SELECT statement without first beginning a
Dave Dutcher wrote:
You need a COMMIT for every BEGIN. If you just run a SELECT statement
without first beginning a transaction, then you should not end up with a
connection that is Idle in Transaction. If you are beginning a transaction,
doing a select, and then not committing, then yes that
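The distinction described above can be shown in two short snippets (the table name is hypothetical):

```sql
-- Every explicit BEGIN must be paired with COMMIT (or ROLLBACK on error);
-- a BEGIN followed by a SELECT and nothing else leaves the connection
-- "idle in transaction".
BEGIN;
SELECT count(*) FROM accounts;  -- hypothetical table
COMMIT;

-- A bare SELECT outside BEGIN runs in its own implicit transaction,
-- which ends immediately, so it cannot leave the session idle in transaction.
SELECT count(*) FROM accounts;
```

In application code the same rule means every code path, including exception handlers, must reach a commit or rollback before the connection goes back to the pool.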
Is there any practical limit to the number of shared buffers PG 8.3.7
can handle before more becomes counter-productive? I remember the
buffer management algorithm used to get unhappy with too many buffers
and past a certain point performance dropped with extra memory
pitched at Postgres.
My
Dan Sugalski d...@sidhe.org writes:
Is there any practical limit to the number of shared buffers PG 8.3.7 can handle before more becomes counter-productive?
Probably, but I've not heard any definitive measurements showing an upper limit. The traditional wisdom of limiting it to 1GB or so dates
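For reference, the traditional-wisdom sizing discussed above would look roughly like this in postgresql.conf; the exact numbers are illustrative assumptions, not measured recommendations:

```
# postgresql.conf sketch for the 8.3-era advice discussed here
# (illustrative values for a machine with ~12GB RAM):
shared_buffers = 1GB          # traditional cap discussed above
effective_cache_size = 8GB    # planner hint about OS cache; not an allocation
```

shared_buffers requires a server restart to change; effective_cache_size does not allocate memory, it only informs the planner's cost estimates.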