I'm running PostgreSQL 7.4.2 on an older Linux box (450 MHz AMD K6-III, 450 MB RAM)
running kernel 2.6.5. My client is a Java/JDBC program on Windows.
I'm having trouble seeing where the bottleneck in my performance is.
The client uses about 30% CPU. The server uses 70% CPU plus 1.5% I/O
wait. The I/O wai
Pailloncy Jean-Gérard <[EMAIL PROTECTED]> writes:
>> In 7.4 a VACUUM should be sufficient ... or at least, if it isn't
> After VACUUM:
> Better, but..
... but not much :-(. Okay, could we see VACUUM VERBOSE results for
this table?
regards, tom lane
[ Ah, I just got to your message with the VACUUM VERBOSE results ... ]
Pailloncy Jean-Gérard <[EMAIL PROTECTED]> writes:
> dps=# VACUUM VERBOSE url;
> INFO: index "url_next_index_time" now contains 349519 row versions in
> 3534 pages
> DETAIL: 229515 index row versions were removed.
"Jeremy Dunn" <[EMAIL PROTECTED]> writes:
> Agreed. However, given that count(*) is a question that can be answered
> _solely_ using the index (without reference to the actual data blocks),
As Bruno noted, that is not the case in Postgres; we must visit the
table rows anyway.
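If only an approximate count is needed, the planner's own statistics can answer without touching the table at all. A minimal sketch against the xxx table from this thread (accuracy depends on how recently VACUUM or ANALYZE last ran):

```sql
-- reltuples is the planner's row estimate, refreshed by VACUUM/ANALYZE;
-- no table or index scan is needed to read it.
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'xxx';
```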
> When I just tried
On Mon, Apr 12, 2004 at 15:05:02 -0400,
Jeremy Dunn <[EMAIL PROTECTED]> wrote:
>
> Agreed. However, given that count(*) is a question that can be answered
> _solely_ using the index (without reference to the actual data blocks),
> I'd expect that the break-even point would be considerably higher.
Hi,
In 7.4 a VACUUM should be sufficient ... or at least, if it isn't
After VACUUM:
dps=# explain analyze select next_index_time from url order by
next_index_time desc limit 1;
QUERY PLAN
--
> "Jeremy Dunn" <[EMAIL PROTECTED]> writes:
> > The question: why does the planner consider a sequential scan to be
> > better for these top 10 values?
>
> At some point a seqscan *will* be better. In the limit, if
> the key being sought is common enough to occur on every page
> of the table,
Hi,
How to optimize the last query ? (~ 2000 times slower than the first
one)
I suppose there is some odd distribution of data in the index ?
Looks to me like a whole lot of dead rows at the left end of the index.
Have you VACUUMed this table lately?
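The check being suggested here can be run directly; a sketch using the url table from this thread:

```sql
-- Reclaim dead row versions and report per-index detail; adding ANALYZE
-- also refreshes the statistics the planner relies on.
VACUUM VERBOSE ANALYZE url;
```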
From pg_autovacuum:
[2004-04-10 05:45:39 AM] Pe
"Jeremy Dunn" <[EMAIL PROTECTED]> writes:
> The question: why does the planner consider a sequential scan to be
> better for these top 10 values?
At some point a seqscan *will* be better. In the limit, if the key
being sought is common enough to occur on every page of the table,
it's certain that the seqscan wins, since every page has to be read anyway.
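One way to see where that break-even sits for a particular value is to compare the two plans directly. A session-local sketch for testing only, using the table and value from Jeremy's message:

```sql
-- Plan the planner picks on its own:
EXPLAIN ANALYZE SELECT count(*) FROM xxx WHERE cid = 6223341;

SET enable_seqscan = off;  -- force the index plan for comparison
EXPLAIN ANALYZE SELECT count(*) FROM xxx WHERE cid = 6223341;
SET enable_seqscan = on;   -- restore the default afterwards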
On Mon, 12 Apr 2004, Jeremy Dunn wrote:
>explain analyze select count(*) from xxx where cid=6223341;
>Aggregate (cost=74384.19..74384.19 rows=1 width=0) (actual
> time=11614.89..11614.89 rows=1 loops=1)
> -> Index Scan using xxx_cid on emailrcpts (cost=0.00..74329.26
> rows=21974
Sorry I should have written that we do VACUUM VERBOSE ANALYZE every
night.
- Jeremy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bill Moran
Sent: Monday, April 12, 2004 12:09 PM
To: [EMAIL PROTECTED]
Cc: Postgresql Performance
Subject: Re: [PERFORM] in
Quick bit of input, since you didn't mention it.
How often do you run ANALYZE? I found it interesting that a database I
was doing tests on sped up by a factor of 20 after ANALYZE. If your
data changes a lot, you should probably schedule ANALYZE to run with
VACUUM.
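On 7.4 that can be as simple as a nightly cron entry driving vacuumdb; the database name and log path below are hypothetical:

```
# crontab sketch: vacuum and analyze every night at 02:30
30 2 * * *  vacuumdb --analyze --quiet mydb >> /var/log/pg_maint.log 2>&1
```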
Jeremy Dunn wrote:
I've searched the
archives and can't find an answer to this seemingly simple question.
Apologies if it's too common.
The table in
question has ~1.3M rows. It has 85 columns, 5 of which have single-column
indexes.
The column in
question (CID) has 183 distinct values. Fo
The index is
Indexes:
"forecastelement_rwv_idx" btree (region_id, wx_element, valid_time)
-Original Message-
From: Shea,Dan [CIS] [mailto:[EMAIL PROTECTED]
Sent: Monday, April 12, 2004 10:39 AM
To: Postgres Performance
Subject: [PERFORM] Deleting certain duplicates
We have a large database which recently increased dramatically due to a
change in our insert program allowing all entries.
PWFPM_DEV=# select relname,relfilenode,reltuples from pg_class where relname
= 'forecastelement';
relname | relfilenode | reltuples
-+-+--
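One common 7.4-era idiom for this kind of cleanup keeps one row per key and deletes the rest. This is only a sketch: it assumes the table was created WITH OIDS (the default in 7.4), it guesses the duplicate key from the forecastelement_rwv_idx definition quoted in this thread, and the NOT IN form can be slow on a table this large:

```sql
-- Keep the lowest-OID row for each (region_id, wx_element, valid_time)
-- key and delete every other copy.
DELETE FROM forecastelement
WHERE oid NOT IN (
    SELECT min(oid)
    FROM forecastelement
    GROUP BY region_id, wx_element, valid_time
);
```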
Pailloncy Jean-Gérard <[EMAIL PROTECTED]> writes:
> How to optimize the last query ? (~ 2000 times slower than the first
> one)
> I suppose there is some odd distribution of data in the index ?
Looks to me like a whole lot of dead rows at the left end of the index.
Have you VACUUMed this table lately?
On Mon, Apr 12, 2004 at 17:24:17 +0530,
Rajesh Kumar Mallah <[EMAIL PROTECTED]> wrote:
>
> Greetings,
>
> Is there any performance penalty of having too many columns in
> a table in terms of read and write speeds.
>
> In order to keep operational queries simple (avoid joins) we plan to
> add columns in the main customer dimension table.
Greetings,
Is there any performance penalty of having too many columns in
a table in terms of read and write speeds.
In order to keep operational queries simple (avoid joins) we plan to
add columns in the main customer dimension table.
Adding more columns also means an increase in concurrency in the
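If read/write speed on such a wide table does become a problem, one hedge is vertical partitioning: keep the frequently touched columns in the main dimension table and push the rest into a 1:1 side table. All names below are hypothetical:

```sql
CREATE TABLE customer (
    customer_id integer PRIMARY KEY,
    name        text
    -- ...other frequently accessed columns...
);

CREATE TABLE customer_extra (
    customer_id integer PRIMARY KEY REFERENCES customer (customer_id),
    notes       text
    -- ...rarely read columns, fetched via a join only when needed...
);
```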
Hi,
I tested the following query many times.
dps=# explain analyze select next_index_time from url order by
next_index_time desc limit 1;
QUERY PLAN
---