I'm facing a very strange problem.
Recently our database has been running very slowly when executing DELETE/SELECT
statements, but only for a few tables.
The largest of these tables has only 50K rows of data.
When I run the statements from pgAdmin they are slow, but not as slow as when
run from the webapp.
When I run the
Hi, Kah,
[EMAIL PROTECTED] wrote:
I have already vacuumed those tables with the FULL option, but it is still the same.
What could be the possible causes of this problem?
How can I solve it?
CPU - Intel Xeon 2.40 GHz
Memory - 1.5G
Postgresql version: 7.2.2
First, you should consider upgrading your
I have a small database. However, the following query takes 38 (!) seconds to
run.
How can I speed it up (preferably without changing table structures, but
possibly by creating indexes)?
Andrus.
set search_path to public,firma1;
explain analyze select bilkaib.summa from BILKAIB join KONTO CRKONTO ON
Andrus [EMAIL PROTECTED] writes:
I have a small database. However, the following query takes 38 (!) seconds to
run.
How can I speed it up (preferably without changing table structures, but
possibly by creating indexes)?
ANALYZE would probably help.
- Seq Scan on konto dbkonto
On May 6, 2006, at 10:53 AM, mcelroy, tim wrote:
development side than DBA, by the way), but there is no better way to learn
and understand than actual day-to-day working experience.
Yeah, I prefer my surgeons to work this way too. Training is for the
birds.
- Seq Scan on konto dbkonto (cost=0.00..23.30
rows=1
width=44) (actual time=0.017..1.390 rows=219 loops=1)
Filter: (iseloom = 'A'::bpchar)
Any time you see a row estimate that far off for a simple single-column
condition, it means your
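The usual fix for a badly wrong row estimate is to refresh the planner's statistics with ANALYZE. A minimal sketch, using the table and column names from the thread (the exact estimates will of course vary):

```sql
-- Before: the planner guessed rows=1 where the actual count was 219.
EXPLAIN ANALYZE SELECT * FROM konto WHERE iseloom = 'A';

-- Refresh the per-column statistics so the planner sees the real distribution.
ANALYZE konto;

-- Re-run; the row estimate should now be close to the actual row count.
EXPLAIN ANALYZE SELECT * FROM konto WHERE iseloom = 'A';
```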
Andrus [EMAIL PROTECTED] writes:
I see "autovacuum: processing database mydb" messages in the log file, and I have
stats_start_collector = on
stats_row_level = on
in the config file. Why were the statistics out of date?
The default autovac thresholds are not very aggressive; this table was
probably not large enough to get selected for analysis.
On Mon, May 08, 2006 at 11:21:16AM +0200, Markus Schaber wrote:
Hi, Kah,
[EMAIL PROTECTED] wrote:
I have already vacuumed those tables with the FULL option, but it is still the same.
What could be the possible causes of this problem?
How can I solve it?
CPU - Intel Xeon 2.40 GHz
Memory -
On Mon, May 08, 2006 at 08:03:38PM +0300, Andrus wrote:
The default autovac thresholds are not very aggressive; this table was
probably not large enough to get selected for analysis.
Tom,
thank you.
Excellent.
BTW, you might want to cut all the autovac thresholds in half; that's
what I typically do.
Why does this query take so long? (PostgreSQL 8.0.3, FC4)
Hopefully I have provided enough information below.
LOG: statement: SELECT * FROM x WHERE f IN
($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,\
On Mon, May 08, 2006 at 11:06:42AM -0400, Vivek Khera wrote:
On May 6, 2006, at 10:53 AM, mcelroy, tim wrote:
development side than DBA, by the way), but there is no better way to learn
and understand than actual day-to-day working experience.
Yeah, I prefer my surgeons to work this way too.
What's EXPLAIN ANALYZE show?
On Mon, May 08, 2006 at 01:29:28PM -0400, Jeffrey Tenny wrote:
Why does this query take so long? (PostgreSQL 8.0.3, FC4)
Hopefully I have provided enough information below.
LOG: statement: SELECT * FROM x WHERE f IN
BTW, you might want to cut all the autovac thresholds in half; that's
what I typically do.
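In 8.1, where autovacuum is integrated, these thresholds are plain postgresql.conf settings. Halving the shipped defaults, as suggested above, would look roughly like this (the values shown are my understanding of the 8.1 stock defaults cut in half; check the documentation for your exact version):

```
# postgresql.conf -- halved autovacuum thresholds (8.1-style settings)
autovacuum = on
autovacuum_vacuum_threshold = 500      # default 1000
autovacuum_analyze_threshold = 250     # default 500
autovacuum_vacuum_scale_factor = 0.2   # default 0.4
autovacuum_analyze_scale_factor = 0.1  # default 0.2
```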
I added an ANALYZE command to my procedure, which creates and loads data into
the postgres database from another DBMS. This runs only once, after installing
my application. I hope this is sufficient.
If default
Doing a SELECT with a large list of variables inside an IN runs slowly
on every database we've tested. We've tested mostly in Oracle and
PostgreSQL, and both get very slow very quickly (actually Oracle refuses
to process the query at all after it gets too many bind parameters).
In our case, we
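One common workaround for huge IN lists (not from the thread; a sketch in Python, assuming a DB-API-style driver with %s placeholders, and hypothetical helper names) is to split the list into fixed-size chunks and issue one bounded query per chunk:

```python
def chunked(values, size=100):
    # Split a long parameter list into consecutive chunks of at most `size` items.
    return [values[i:i + size] for i in range(0, len(values), size)]

def in_clause_queries(table, column, values, chunk_size=100):
    # Build one parameterized "... WHERE col IN (%s, ...)" query per chunk,
    # so no single statement carries hundreds of bind parameters.
    queries = []
    for chunk in chunked(values, chunk_size):
        placeholders = ", ".join(["%s"] * len(chunk))
        sql = f"SELECT * FROM {table} WHERE {column} IN ({placeholders})"
        queries.append((sql, chunk))
    return queries
```

Each (sql, params) pair would then be executed in turn and the results concatenated; this keeps every statement small enough that the planner (and Oracle's bind-parameter limit) never sees the full 650-value list at once.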
On Mon, May 08, 2006 at 08:36:42PM +0300, Andrus wrote:
BTW, you might want to cut all the autovac thresholds in half; that's
what I typically do.
I added an ANALYZE command to my procedure, which creates and loads data into
the postgres database from another DBMS. This runs only once after
Jim C. Nasby [EMAIL PROTECTED] writes:
On Mon, May 08, 2006 at 10:42:21AM -0700, Mark Lewis wrote:
Doing a SELECT with a large list of variables inside an IN runs slowly
on every database we've tested. We've tested mostly in Oracle and
PostgreSQL, and both get very slow very quickly (actually
The only reason for being so conservative that I'm aware of was that it
was a best guess. Everyone I've talked to cuts the defaults down by at
least a factor of 2, sometimes even more.
Can we ask Tom to change the default values to be 2 times smaller in 8.1.4?
BTW, these parameters are
On Monday 08 May 2006 14:10, Andrus wrote:
The only reason for being so conservative that I'm aware of was that it
was a best guess. Everyone I've talked to cuts the defaults down by at
least a factor of 2, sometimes even more.
Can we ask Tom to change the default values to be 2 times
On May 8, 2006, at 1:30 PM, Jim C. Nasby wrote:
Yeah, I prefer my surgeons to work this way too. training is for the
birds.
I think you read too quickly past the part where Tim said he's taking a
week-long training class.
s/training/apprenticeship/g;
Well, since I don't know the exact parameter values, just substituting
1-650 for $1-$650, I get:
Index Scan using testindex2 on x (cost=0.00..34964.52 rows=1503
width=16) (actual time=0.201..968.252 rows=677 loops=1)
Filter: ((f = 1) OR (f = 2) OR (f = 3) OR (f = 4) ...
So index usage
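A filter built from hundreds of ORed equality tests is also expensive to evaluate per row. One rewrite worth trying (my suggestion, not from the thread; behavior depends on the server version) is to pass the values as a single array and test membership:

```sql
-- Instead of:  WHERE f = 1 OR f = 2 OR ... OR f = 650
-- pass one array and test membership against it:
SELECT * FROM x WHERE f = ANY (ARRAY[1, 2, 3, 4]);  -- ... up to 650 values

-- The classic alternative is to load the values into a (temporary) table
-- and join, letting the planner drive the lookup from the index on x.f.
```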
Jan de Visser [EMAIL PROTECTED] writes:
On Monday 08 May 2006 14:10, Andrus wrote:
I created the empty table konto and loaded more than 219 records into it during
database creation.
So it seems that even though the table grew from zero to more than 219 records,
it was still not processed.
That's
Mark Lewis wrote:
Doing a SELECT with a large list of variables inside an IN runs slowly
on every database we've tested. We've tested mostly in Oracle and
PostgreSQL, and both get very slow very quickly (actually Oracle refuses
to process the query at all after it gets too many bind
Tom Lane wrote:
Jan de Visser [EMAIL PROTECTED] writes:
On Monday 08 May 2006 14:10, Andrus wrote:
I created the empty table konto and loaded more than 219 records into it during
database creation.
So it seems that even though the table grew from zero to more than 219 records,
it was still not
Jeffrey Tenny [EMAIL PROTECTED] writes:
Well, since I don't know the exact parameter values, just substituting
1-650 for $1-$650, I get:
Index Scan using testindex2 on x (cost=0.00..34964.52 rows=1503
width=16) (actual time=0.201..968.252 rows=677 loops=1)
Filter: ((f = 1) OR (f =
The original set of indexes were:
Indexes:
x_c_idx btree (c)
x_f_idx btree (f)
testindex2 btree (f, c)
I dropped the multicolumn index 'testindex2', and a new explain analyze
looks like this:
Sort (cost=35730.71..35768.28 rows=1503 width=16) (actual
time=962.555..964.467
Jeffrey Tenny [EMAIL PROTECTED] writes:
I dropped the multicolumn index 'testindex2', and a new explain analyze
looks like this:
Sort (cost=35730.71..35768.28 rows=1503 width=16) (actual
time=962.555..964.467 rows=677 loops=1)
Sort Key: f, c
- Seq Scan on x
On Mon, May 08, 2006 at 03:38:23PM -0400, Vivek Khera wrote:
On May 8, 2006, at 1:30 PM, Jim C. Nasby wrote:
Yeah, I prefer my surgeons to work this way too. training is for the
birds.
I think you read too quickly past the part where Tim said he's taking a
week-long training class.
I tried disabling seqscans and got what sounds like the desired plan:
Sort (cost=54900.62..54940.29 rows=1587 width=16) (actual time=20.208..22.138
rows=677 loops=1)
Sort Key: f, c
- Index Scan using x_f_idx, x_f_idx, ...
(cost=0.00..54056.96 rows=1587 width=16) (actual
Re my question here: what would be the JDBC-proper technique?
My app is all JDBC.
Jeffrey Tenny wrote:
1) Is there a way to enable that for a single query in a multi-query
transaction?
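For the single-query case, one option (an assumption on my part, not confirmed in the thread) is SET LOCAL, which scopes a settings change to the current transaction; from JDBC it is just another statement executed on the same connection:

```sql
BEGIN;
SET LOCAL enable_seqscan = off;   -- reverts automatically at COMMIT/ROLLBACK
SELECT * FROM x WHERE f IN (1, 2, 3);
COMMIT;
-- After COMMIT, enable_seqscan is back to its previous value.
```

For one query in the middle of a longer transaction, the same idea works with plain SET: turn enable_seqscan off immediately before the query and back on immediately after.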
Jeffrey Tenny [EMAIL PROTECTED] writes:
I tried disabling seqscans and got what sounds like the desired plan:
Sort (cost=54900.62..54940.29 rows=1587 width=16) (actual
time=20.208..22.138 rows=677 loops=1)
Sort Key: f, c
- Index Scan using x_f_idx, x_f_idx, ...
Tom Lane wrote:
Jeffrey Tenny [EMAIL PROTECTED] writes:
I tried disabling seqscans and got what sounds like the desired plan:
Sort (cost=54900.62..54940.29 rows=1587 width=16) (actual
time=20.208..22.138 rows=677 loops=1)
Sort Key: f, c
- Index Scan using x_f_idx, x_f_idx, ...
Jeffrey Tenny [EMAIL PROTECTED] writes:
The server was already running with random_page_cost=2 today for all tests,
because of
the mods I've made to improve other problem queries in the past (my settings
noted below, and
before in another msg on this topic).
So to nail this particular
On Mon, 08 May 2006 19:37:37 -0400, Tom Lane [EMAIL PROTECTED] wrote:
Jeffrey Tenny [EMAIL PROTECTED] writes:
The server was already running with random_page_cost=2 today for all tests,
because of
the mods I've made to improve other problem queries in the past (my
settings noted below,
Hi,
I have a query that generates two different plans when there's only a
change in the category_id used in the query.
The first query has category_id = 1001573 and returns 3117 rows from
the category_product table.
The second query has category_id = 1001397 and returns 27889 rows
from
Actually, I am now working on upgrading PostgreSQL to 8.1, but it is not yet
finished.
Yesterday I re-created the affected tables' indices; it does improve
performance, but the query still needs 2-5 minutes to execute.
Is this 'normal' for a table with 40K rows?
Anyway, thanks for
Hello,
I have tried the query SELECT * FROM wan ORDER BY stime DESC OFFSET 0 LIMIT 50; and it is working great.
EXPLAIN ANALYSE of the above query is:
pdb=# EXPLAIN ANALYZE select * from wan order by stime desc limit 50;
NOTICE: QUERY PLAN:
Limit (cost=0.00..12.10 rows=50 width=95) (actual