Re: [PERFORM] Performance PG 8.0 on dual opteron / 4GB / 3ware

2005-11-07 Thread Dave Page
-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Joost Kraaijeveld Sent: 07 November 2005 04:26 To: Tom Lane Cc: Pgsql-Performance Subject: Re: [PERFORM] Performance PG 8.0 on dual opteron / 4GB / 3ware Hi Tom, On Sun, 2005-11-06 at

Re: [PERFORM] Performance PG 8.0 on dual opteron / 4GB / 3ware

2005-11-07 Thread Joost Kraaijeveld
Hi Dave, On Mon, 2005-11-07 at 08:51 +, Dave Page wrote: On Sun, 2005-11-06 at 15:26 -0500, Tom Lane wrote: I'm confused --- where's the 82sec figure coming from, exactly? From actually executing the query. From PgAdmin: -- Executing query: select objectid from

Re: [PERFORM] Performance PG 8.0 on dual opteron / 4GB / 3ware

2005-11-07 Thread Dave Page
-Original Message- From: Joost Kraaijeveld [mailto:[EMAIL PROTECTED] Sent: 07 November 2005 09:03 To: Dave Page Cc: Tom Lane; Pgsql-Performance Subject: RE: [PERFORM] Performance PG 8.0 on dual opteron / 4GB / 3ware Nothing - it just uses libpq's PQexec function. The speed

Re: [PERFORM] Performance problem with pg8.0

2005-11-07 Thread Richard Huxton
Jeroen van Iddekinge wrote: Hello, I have some strange performance problems with querying a table. It has 5,282,864 rows and contains the following columns: id, no, id_words, position, senpos and sentence, all integer non null. Index on : * no * no,id_words * id_words * senpos,

Re: [PERFORM] Temporary Table

2005-11-07 Thread Alvaro Herrera
Christian Paul B. Cosinas wrote: Does creating a temporary table in a function and NOT dropping it affect the performance of the database? The system will drop it automatically, so it shouldn't affect performance. What _could_ be affecting you if you execute that function a lot is accumulated bloat in
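A sketch of the pattern under discussion (function and table names are invented for illustration): a function that creates a temp table on every call adds and removes rows in pg_class, pg_attribute and pg_depend each time, so heavy use bloats those catalogs even though the temp table itself is cleaned up automatically.

```sql
-- Hypothetical example: each call churns rows in the system catalogs.
CREATE OR REPLACE FUNCTION make_scratch() RETURNS void AS $$
BEGIN
    -- ON COMMIT DROP keeps repeat calls in one session from colliding,
    -- but the catalog entries are still created and deleted every call.
    CREATE TEMP TABLE scratch (id integer, val text) ON COMMIT DROP;
    -- ... work with the table ...
END;
$$ LANGUAGE plpgsql;

-- Periodic maintenance to reclaim the accumulated catalog bloat:
VACUUM FULL pg_class;
VACUUM FULL pg_attribute;
VACUUM FULL pg_depend;
```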

Re: [PERFORM] Performance PG 8.0 on dual opteron / 4GB / 3ware Raid5 / Debian??

2005-11-07 Thread Alex Turner
Where are the pg_xlog and data directories with respect to each other? From this iostat output it looks like they might be on the same partition, which is not ideal, and it's actually surprising that throughput is this good. You need to separate the pg_xlog and data directories to get any kind of reasonable

Re: [PERFORM] 8.1 iss

2005-11-07 Thread PostgreSQL
My most humble apologies to the pg development team (pg_lets?). I took Greg Stark's advice and set: shared_buffers = 1 # was 5 work_mem = 1048576# 1Gb - was 16384 Also, I noticed that the EXPLAIN ANALYZE consistently thought reads would take longer than they actually did, so I

Re: [PERFORM] Performance PG 8.0 on dual opteron / 4GB / 3ware

2005-11-07 Thread Andreas Pflug
Dave Page wrote: Now *I* am confused. What does PgAdmin do more than giving the query to the database? Nothing - it just uses libpq's PQexec function. The speed issue in pgAdmin is rendering the results in the grid, which can be slow on some OSes due to inefficiencies in some grid

[PERFORM] Index + mismatching datatypes [WAS: index on custom function; explain]

2005-11-07 Thread Enrico Weigelt
* Yann Michel [EMAIL PROTECTED] wrote: TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match I've got a similar problem: I have to match different datatypes, ie. bigint vs. integer vs. oid. Of

Re: [PERFORM] Index + mismatching datatypes [WAS: index on custom

2005-11-07 Thread Neil Conway
On Mon, 2005-07-11 at 19:07 +0100, Enrico Weigelt wrote: I've got a similar problem: I have to match different datatypes, ie. bigint vs. integer vs. oid. Of course I tried to use casted index (aka ON (foo::oid)), but it didn't work. Don't include the cast in the index definition, include
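An illustrative sketch of the advice (table and column names are made up, and the completion of Neil's truncated sentence is an assumption): rather than building the index on a cast expression, keep a plain index on the column and cast the comparison value in the query so the datatypes match.

```sql
-- Hypothetical table with a bigint key; pre-8.0 planners would skip the
-- index if the column were compared against a plain int4 literal.
CREATE TABLE foo (id bigint PRIMARY KEY);

-- Not this:  CREATE INDEX foo_oid_idx ON foo ((id::oid));
-- Instead, cast the value being compared so it matches the column type:
SELECT * FROM foo WHERE id = 42::bigint;
```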

[PERFORM] Figuring out which command failed

2005-11-07 Thread Ralph Mason
Hi, I have a transaction that has multiple separate command in it (nothing unusual there). However sometimes one of the sql statements will fail and so the whole transaction fails. In some cases I could fix the failing statement if only I knew which one it was. Can anyone think of any
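One way to localize the failure, offered here as a sketch rather than a reply from the thread: PostgreSQL 8.0 supports savepoints, so each statement can be fenced off individually instead of letting one error abort the whole transaction. Table names below are invented for illustration.

```sql
BEGIN;

SAVEPOINT s1;
INSERT INTO orders VALUES (1, 'widget');
RELEASE SAVEPOINT s1;

SAVEPOINT s2;
INSERT INTO order_lines VALUES (1, 99);
-- If this statement fails, ROLLBACK TO SAVEPOINT s2 undoes only it,
-- and the client knows exactly which command was at fault:
--   ROLLBACK TO SAVEPOINT s2;
RELEASE SAVEPOINT s2;

COMMIT;
```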

Re: [PERFORM] Temporary Table

2005-11-07 Thread Ralph Mason
Alvaro Herrera wrote: Christian Paul B. Cosinas wrote: Does Creating Temporary table in a function and NOT dropping them affects the performance of the database? The system will drop it automatically, so it shouldn't affect. What _could_ be affecting you if you execute that function

Re: [PERFORM] Temporary Table

2005-11-07 Thread Christian Paul B. Cosinas
In what directory on my Linux server will I find these 3 tables? -Original Message- From: Alvaro Nunes Melo [mailto:[EMAIL PROTECTED] Sent: Wednesday, October 26, 2005 10:49 AM To: Christian Paul B. Cosinas Subject: Re: [PERFORM] Temporary Table Christian Paul B. Cosinas wrote: I am

Re: [PERFORM] Temporary Table

2005-11-07 Thread Christian Paul B. Cosinas
I try to run these commands on my Linux server: VACUUM FULL pg_class; VACUUM FULL pg_attribute; VACUUM FULL pg_depend; But it gives me the following error: -bash: VACUUM: command not found

Re: [PERFORM] Temporary Table

2005-11-07 Thread Joshua D. Drake
Christian Paul B. Cosinas wrote: I try to run this command in my linux server. VACUUM FULL pg_class; VACUUM FULL pg_attribute; VACUUM FULL pg_depend; But it give me the following error: -bash: VACUUM: command not found That needs to be run from psql ...

Re: [PERFORM] Temporary Table

2005-11-07 Thread Christopher Kings-Lynne
Ummm...they're SQL commands. Run them in PostgreSQL, not on the unix command line... Christian Paul B. Cosinas wrote: I try to run this command in my linux server. VACUUM FULL pg_class; VACUUM FULL pg_attribute; VACUUM FULL pg_depend; But it give me the following error: -bash:

Re: [PERFORM] Temporary Table

2005-11-07 Thread Christopher Kings-Lynne
In what directory in my linux server will I find these 3 tables? Directory? They're tables in your database...
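To make the point concrete (a sketch, not part of the original reply): pg_class, pg_attribute and pg_depend are system catalogs that live inside each database, and they are queried like any other table rather than found on disk by name.

```sql
-- The "3 tables" show up in the catalog itself:
SELECT relname, relkind
FROM pg_class
WHERE relname IN ('pg_class', 'pg_attribute', 'pg_depend');
```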

Re: [PERFORM] Temporary Table

2005-11-07 Thread Christian Paul B. Cosinas
I see. But how can I put this in the cron of my Linux server? I really don't have an idea :) What I want to do is to loop around all the databases in my server and execute the vacuum of these 3 tables in each database. -Original Message- From: Joshua D. Drake [mailto:[EMAIL PROTECTED]

Re: [PERFORM] Temporary Table

2005-11-07 Thread Christopher Kings-Lynne
Or you could just run the 'vacuumdb' utility... Put something like this in cron: # Vacuum full local pgsql database 30 * * * * postgres vacuumdb -a -q -z You really should read the manual. Chris Christian Paul B. Cosinas wrote: I see. But How Can I put this in the Cron of my Linux

Re: [PERFORM] Temporary Table

2005-11-07 Thread Jeff Frost
You can use the vacuumdb external command. Here's an example: vacuumdb --full --analyze --table mytablename mydbname On Tue, 8 Nov 2005, Christian Paul B. Cosinas wrote: But How Can I put this in the Cron of my Linux Server? I really don't have an idea :) What I want to do is to loop

[PERFORM] Expensive function and the optimizer

2005-11-07 Thread Craig A. James
I have a function, call it myfunc(), that is REALLY expensive computationally. Think of it like this: if you call this function, it's going to telephone the Microsoft Help line and wait in their support queue to get the answer. OK, it's not that bad, but it's so bad that the optimizer should

Re: [PERFORM] Temporary Table

2005-11-07 Thread Andrew McMillan
On Tue, 2005-11-08 at 10:22 +, Christian Paul B. Cosinas wrote: I see. But How Can I put this in the Cron of my Linux Server? I really don't have an idea :) What I want to do is to loop around all the databases in my server and execute the vacuum of these 3 tables in each tables. I

Re: [PERFORM] Expensive function and the optimizer

2005-11-07 Thread Tom Lane
Craig A. James [EMAIL PROTECTED] writes: Is there some way to explain this cost to the optimizer in a permanent way, Nope, sorry. One thing you could do in the particular case at hand is to rejigger the WHERE clause involving the function so that it requires values from both tables and
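A sketch of the trick Tom describes (table, column, and function names are invented, and the exact rewrite is an assumption based on his truncated reply): if myfunc() only references one table, the planner may evaluate it early, before the join has filtered most rows away; making the clause depend on both tables forces it to be evaluated after the join.

```sql
-- Before: the expensive function references only table a, so the planner
-- is free to apply it to every row of a before joining:
SELECT *
FROM a JOIN b ON a.id = b.a_id
WHERE myfunc(a.x) = 'target';

-- After: adding a dummy, value-preserving reference to b (0 * b.a_id
-- contributes nothing to the result) creates a dependency on both tables,
-- so the clause can only be checked once the join has produced a row:
SELECT *
FROM a JOIN b ON a.id = b.a_id
WHERE myfunc(a.x + 0 * b.a_id) = 'target';
```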