On Fri, 2005-11-04 at 13:21 -0500, Bruce Momjian wrote:
David Fetter wrote:
On Fri, Nov 04, 2005 at 01:01:20PM -0500, Tom Lane wrote:
I'm inclined to treat this as an outright bug, not just a minor
performance issue, because it implies that a sufficiently long psql
script would
SELECT v_barcode, count(v_barcode) FROM lead GROUP BY v_barcode HAVING
count(*) > 1;
This is a pretty good example of the place where 8.1 seems to be quite
broken. I understand that this query will want to do a full table scan
(even though v_barcode is indexed). And the table is largish, at
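For readers following along, the query above is the classic duplicate-finder pattern: group on a column and keep only the groups that occur more than once. A minimal sketch of the same pattern, using Python's sqlite3 as a stand-in for PostgreSQL (the table contents here are invented for illustration):

```python
import sqlite3

# In-memory stand-in for the "lead" table; only the barcode column matters here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lead (v_barcode TEXT)")
conn.executemany("INSERT INTO lead VALUES (?)",
                 [("A",), ("A",), ("B",), ("C",), ("C",), ("C",)])

# Same duplicate-finder shape as the query in the post: group by the column
# and keep only the groups with more than one row.
rows = conn.execute(
    "SELECT v_barcode, COUNT(v_barcode) FROM lead "
    "GROUP BY v_barcode HAVING COUNT(*) > 1"
).fetchall()
print(sorted(rows))  # [('A', 2), ('C', 3)]
```

Without a usable index on the grouped column, either engine must read every row, which is why this kind of query degrades into a full table scan on a large table.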
Hi,
I am experiencing very long update queries and I want to know if it is
reasonable to expect them to perform better.
The query below is running for more than 1.5 hours (5500 seconds) now,
while the rest of the system does nothing (I don't even type or move a
mouse...).
- Is that to be
PostgreSQL [EMAIL PROTECTED] writes:
This is a pretty good example of the place where 8.1 seems to be quite
broken.
That's a bit of a large claim on the basis of one data point.
Did you remember to re-ANALYZE after loading the table into the
new database?
regards, tom
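Tom's question matters because the planner chooses plans from gathered statistics, and a freshly loaded table has stale or missing stats until ANALYZE runs. A rough sketch of the idea, using sqlite3's ANALYZE (the mechanics differ from PostgreSQL's, but the principle of explicitly gathered planner statistics is the same; table and index names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t (val) VALUES (?)",
                 [(str(i),) for i in range(1000)])
conn.execute("CREATE INDEX t_val ON t (val)")

# Until ANALYZE runs, the planner has no gathered statistics for this data.
# (In PostgreSQL the equivalent after a bulk load is: ANALYZE t;)
conn.execute("ANALYZE")

# sqlite records the gathered statistics in sqlite_stat1.
stats = conn.execute("SELECT tbl, idx FROM sqlite_stat1").fetchall()
print(stats)
```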
Joost Kraaijeveld [EMAIL PROTECTED] writes:
I am experiencing very long update queries and I want to know if it is
reasonable to expect them to perform better.
Does that table have any triggers that would fire on the update?
regards, tom lane
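The trigger question is worth asking because a per-row trigger multiplies the cost of a bulk UPDATE: the trigger body runs once for every row touched. A minimal illustration with sqlite3 standing in for PostgreSQL (table and trigger names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
CREATE TABLE audit (order_id INTEGER, note TEXT);
-- A per-row trigger: every updated row pays this extra insert.
CREATE TRIGGER orders_audit AFTER UPDATE ON orders
BEGIN
    INSERT INTO audit VALUES (NEW.id, 'status changed');
END;
""")
conn.executemany("INSERT INTO orders (status) VALUES (?)", [("new",)] * 5)

conn.execute("UPDATE orders SET status = 'done'")
fired = conn.execute("SELECT COUNT(*) FROM audit").fetchone()[0]
print(fired)  # the trigger fired once per updated row: 5
```

On a multi-million-row UPDATE, that hidden per-row work (or a cascading foreign-key check, which behaves similarly) can easily dominate the runtime.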
Hello,
I have some strange performance problems with querying a table. It has
5282864 rows and contains the following columns: id,
no, id_words, position, senpos and sentence; all are integer non null.
Index on :
* no
* no,id_words
* id_words
* (senpos, sentence, no)
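One thing worth checking with a composite index such as (no, id_words) is that it only helps queries that constrain its leftmost column(s); a filter on the second column alone cannot use it for a lookup. A hedged sketch of that behaviour via sqlite3's EXPLAIN QUERY PLAN (plan text varies by version, and the schema here is a simplified stand-in for the poster's table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE words ("no" INTEGER, id_words INTEGER, position INTEGER)')
conn.execute('CREATE INDEX words_no_idwords ON words ("no", id_words)')

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the plan description.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# A composite index serves queries on its leftmost column...
p1 = plan('SELECT * FROM words WHERE "no" = 1')
print(p1)  # uses words_no_idwords

# ...but a filter on only the second column falls back to a scan.
p2 = plan("SELECT * FROM words WHERE id_words = 1")
print(p2)
```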
PostgreSQL [EMAIL PROTECTED] writes:
...
As I post this, the query is approaching an hour of run time. I've listed
an explain of the query and my non-default conf parameters below. Please
advise on anything I should change or try, or on any information I can
provide that could help
On Sun, 2005-11-06 at 12:17 -0500, Tom Lane wrote:
Does that table have any triggers that would fire on the update?
Alas, no triggers, constraints, foreign keys, or indexes (have I forgotten
something?)
All queries are slow. E.g. (after vacuum):
select objectid from prototype.orders
Explain analyse
Greg,
Increasing memory actually slows down the current sort performance.
We're working on a fix for this now in bizgres.
Luke
--
Sent from my BlackBerry Wireless Device
-Original Message-
From: [EMAIL PROTECTED] [EMAIL PROTECTED]
To: PostgreSQL [EMAIL
Hi Tom,
On Sun, 2005-11-06 at 15:26 -0500, Tom Lane wrote:
I'm confused --- where's the 82sec figure coming from, exactly?
From actually executing the query.
From PgAdmin:
-- Executing query:
select objectid from prototype.orders
Total query runtime: 78918 ms.
Data retrieval runtime: 188822 ms.
Now *I* am confused. What does PgAdmin do more than giving the query to
the database?
It builds it into the data grid GUI object.
Chris
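The distinction PgAdmin is reporting can be reproduced from any client: executing the query and pulling the rows back (then rendering them into a grid) are separate costs. A rough illustration of the split, timed with sqlite3 as a stand-in for a real server round-trip (table name invented):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (objectid INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?)", [(i,) for i in range(100_000)])

cur = conn.cursor()
t0 = time.perf_counter()
cur.execute("SELECT objectid FROM orders")   # "query runtime" starts here
t1 = time.perf_counter()
rows = cur.fetchall()                        # retrieving rows is separate work
t2 = time.perf_counter()

print(f"execute: {t1 - t0:.4f}s, fetch: {t2 - t1:.4f}s, rows: {len(rows)}")
```

In a GUI client the "data retrieval" figure also includes building the display widget, which is why it can dwarf the server-side runtime.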
On Mon, 2005-11-07 at 12:37 +0800, Christopher Kings-Lynne wrote:
Now *I* am confused. What does PgAdmin do more than giving the query to
the database?
It builds it into the data grid GUI object.
Is that not the difference between the total query runtime and the data
retrieval runtime (see
Hi Christopher,
On Mon, 2005-11-07 at 12:37 +0800, Christopher Kings-Lynne wrote:
Now *I* am confused. What does PgAdmin do more than giving the query to
the database?
It builds it into the data grid GUI object.
But my initial question was about a query that does not produce data at
all
It affects my application since the
database server starts to slow down, and hence functions return very slowly.
Any more ideas about this, everyone?
Please.
From:
[EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alex Turner
Sent: Friday, October 21, 2005
3:42 PM
On Sun, 6 Nov 2005, PostgreSQL wrote:
SELECT v_barcode, count(v_barcode) FROM lead GROUP BY v_barcode HAVING
count(*) > 1;
This is a dual Opteron box with 16 Gb memory and a 3ware SATA raid
running 64-bit SUSE. Something seems badly wrong.
GroupAggregate (cost=9899282.83..10285434.26
Here are the configuration of our database server:
port = 5432
max_connections = 300
superuser_reserved_connections = 10
authentication_timeout = 60
shared_buffers = 48000
sort_mem = 32168
fsync = false
Do you think this is enough? Or
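One caution about these settings: sort_mem (work_mem in later releases) is allocated per sort operation, per backend, not shared, so it has to be sized against max_connections. A back-of-the-envelope check on the posted values (units as in postgresql.conf; the worst case assumes one concurrent sort per backend):

```python
# Figures taken from the configuration posted above.
max_connections = 300
sort_mem_kb = 32168          # per sort operation, per backend
shared_buffers = 48000       # in 8 kB pages

shared_mb = shared_buffers * 8 / 1024
worst_case_sort_mb = max_connections * sort_mem_kb / 1024

print(f"shared_buffers: {shared_mb:.0f} MB")
print(f"sorts if every backend sorts at once: {worst_case_sort_mb:.0f} MB")
```

Roughly 375 MB of shared buffers is modest, but 300 backends each entitled to a ~31 MB sort could in principle demand over 9 GB, which is a common way such settings push a box into swap.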
Does creating temporary
tables in a function and NOT dropping them affect the performance of the
database?
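The usual answer is that an undropped temporary table lives until the session ends, so a function that keeps creating them accumulates tables (and, in PostgreSQL, the associated catalog rows) for the life of the connection. A small sketch of the accumulation, with sqlite3 standing in for a database session and invented table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def make_temp(name):
    # Simulates a function that creates a temp table and never drops it.
    conn.execute(f"CREATE TEMP TABLE {name} (x INTEGER)")

for i in range(3):
    make_temp(f"scratch_{i}")

# The tables pile up for the life of the session/connection.
leftover = conn.execute(
    "SELECT name FROM sqlite_temp_master WHERE type = 'table'"
).fetchall()
print(leftover)
```

Under connection pooling, where sessions are long-lived, this kind of leak can degrade performance well after the function that caused it has returned.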