-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Joost Kraaijeveld
Sent: 07 November 2005 04:26
To: Tom Lane
Cc: Pgsql-Performance
Subject: Re: [PERFORM] Performance PG 8.0 on dual opteron /
4GB / 3ware
Hi Tom,
On Sun, 2005-11-06 at
Hi Dave,
On Mon, 2005-11-07 at 08:51 +, Dave Page wrote:
On Sun, 2005-11-06 at 15:26 -0500, Tom Lane wrote:
I'm confused --- where's the 82sec figure coming from, exactly?
From actually executing the query.
From PgAdmin:
-- Executing query:
select objectid from
-Original Message-
From: Joost Kraaijeveld [mailto:[EMAIL PROTECTED]
Sent: 07 November 2005 09:03
To: Dave Page
Cc: Tom Lane; Pgsql-Performance
Subject: RE: [PERFORM] Performance PG 8.0 on dual opteron /
4GB / 3ware
Nothing - it just uses libpq's PQexec function. The speed
Jeroen van Iddekinge wrote:
Hello,
I have some strange performance problems with querying a table. It has
5,282,864 rows and contains the following columns: id, no, id_words,
position, senpos and sentence; all are integer non-null.
Indexes on:
* no
* no, id_words
* id_words
* senpos
Christian Paul B. Cosinas wrote:
Does creating temporary tables in a function and NOT dropping them affect
the performance of the database?
The system will drop them automatically, so they shouldn't affect performance.
What _could_ be affecting you if you execute that function a lot, is
accumulated bloat in
Where are the pg_xlog and data directories with respect to each other?
From this iostat output it looks like they might be on the same partition,
which is not ideal, and it is actually surprising that throughput is this
good. You need to separate the pg_xlog and data directories to get any
kind of reasonable
My most humble apologies to the pg development team (pg_lets?).
I took Greg Stark's advice and set:
shared_buffers = 1 # was 5
work_mem = 1048576    # 1GB - was 16384
Also, I noticed that the EXPLAIN ANALYZE consistently thought reads would
take longer than they actually did, so I
Dave Page wrote:
Now *I* am confused. What does PgAdmin do more than giving
the query to
the database?
Nothing - it just uses libpq's PQexec function. The speed issue in
pgAdmin is rendering the results in the grid, which can be slow on some
OSes due to inefficiencies in some grid
* Yann Michel [EMAIL PROTECTED] wrote:
TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match
I've got a similar problem: I have to match different datatypes,
i.e. bigint vs. integer vs. oid.
Of course I tried to use a casted index (aka ON (foo::oid)), but
it didn't work.
On Mon, 2005-07-11 at 19:07 +0100, Enrico Weigelt wrote:
I've got a similar problem: I have to match different datatypes,
i.e. bigint vs. integer vs. oid.
Of course I tried to use casted index (aka ON (foo::oid)), but
it didn't work.
Don't include the cast in the index definition; include the cast in the
query instead, so the comparison matches the indexed column's type.
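Following that advice, a minimal sketch (the table and column names here are hypothetical, not from the thread): index the column with its stored type, and put the cast on the other side of the comparison so the planner can still use the index.

```sql
-- Hypothetical schema: foo.id is oid, bar.foo_ref is bigint.
-- Index the column as stored, with no cast in the definition:
CREATE INDEX foo_id_idx ON foo (id);

-- Put the cast in the query instead, so both sides of the join
-- condition have the indexed type (oid), assuming the cast is
-- legal for your values:
SELECT foo.*, bar.*
FROM foo
JOIN bar ON foo.id = bar.foo_ref::oid;
```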
Hi,
I have a transaction that has multiple separate commands in it (nothing
unusual there).
However, sometimes one of the SQL statements will fail and so the whole
transaction fails.
In some cases I could fix the failing statement if only I knew which one
it was. Can anyone think of any
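One option, since PostgreSQL 8.0 supports savepoints: wrap the statements you suspect might fail, so an error aborts only that part of the transaction and the error message tells you which statement it was. A minimal sketch with hypothetical tables:

```sql
BEGIN;
INSERT INTO orders (id, customer) VALUES (1, 'acme');

SAVEPOINT step2;
-- If this statement fails, only the work done since the savepoint is
-- lost; the rest of the transaction can still commit:
INSERT INTO order_lines (order_id, qty) VALUES (1, -5);
-- On error: ROLLBACK TO SAVEPOINT step2; then fix or skip and continue.

COMMIT;
```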
Alvaro Herrera wrote:
Christian Paul B. Cosinas wrote:
Does creating temporary tables in a function and NOT dropping them affect
the performance of the database?
The system will drop them automatically, so they shouldn't affect performance.
What _could_ be affecting you if you execute that function
In what directory on my Linux server will I find these 3 tables?
-Original Message-
From: Alvaro Nunes Melo [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 26, 2005 10:49 AM
To: Christian Paul B. Cosinas
Subject: Re: [PERFORM] Temporary Table
Christian Paul B. Cosinas wrote:
I am
I tried to run these commands on my Linux server.
VACUUM FULL pg_class;
VACUUM FULL pg_attribute;
VACUUM FULL pg_depend;
But it gives me the following error:
-bash: VACUUM: command not found
Christian Paul B. Cosinas wrote:
I tried to run these commands on my Linux server.
VACUUM FULL pg_class;
VACUUM FULL pg_attribute;
VACUUM FULL pg_depend;
But it gives me the following error:
-bash: VACUUM: command not found
That needs to be run from psql ...
Ummm... they're SQL commands. Run them in PostgreSQL, not on the Unix
command line...
Christian Paul B. Cosinas wrote:
I tried to run these commands on my Linux server.
VACUUM FULL pg_class;
VACUUM FULL pg_attribute;
VACUUM FULL pg_depend;
But it gives me the following error:
-bash:
In what directory on my Linux server will I find these 3 tables?
Directory? They're tables in your database...
I see.
But how can I put this in the cron of my Linux server?
I really don't have an idea :)
What I want to do is to loop around all the databases on my server and
execute the vacuum of these 3 tables in each database.
-Original Message-
From: Joshua D. Drake [mailto:[EMAIL PROTECTED]
Or you could just run the 'vacuumdb' utility...
Put something like this in cron:
# Vacuum full local pgsql database
30 * * * * postgres vacuumdb -a -q -z
You really should read the manual.
Chris
Christian Paul B. Cosinas wrote:
I see.
But how can I put this in the cron of my Linux
You can use the vacuumdb external command. Here's an example:
vacuumdb --full --analyze --table mytablename mydbname
On Tue, 8 Nov 2005, Christian Paul B. Cosinas wrote:
But how can I put this in the cron of my Linux server?
I really don't have an idea :)
What I want to do is to loop
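If you do want to script the loop yourself instead of relying on vacuumdb -a, the list of databases to iterate over can come from the catalog; a minimal sketch (run via psql from the cron job):

```sql
-- Databases that accept connections; template0 is excluded because
-- its datallowconn flag is false by default:
SELECT datname FROM pg_database WHERE datallowconn ORDER BY datname;
```

Each name could then be fed to something like psql -c 'VACUUM FULL pg_class; VACUUM FULL pg_attribute; VACUUM FULL pg_depend;' dbname (a hypothetical invocation, not from the thread).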
I have a function, call it myfunc(), that is REALLY expensive
computationally. Think of it like this: if you call this function, it's
going to telephone the Microsoft help line and wait in their support
queue to get the answer. OK, it's not that bad, but it's so bad that
the optimizer should
On Tue, 2005-11-08 at 10:22 +, Christian Paul B. Cosinas wrote:
I see.
But how can I put this in the cron of my Linux server?
I really don't have an idea :)
What I want to do is to loop around all the databases on my server and
execute the vacuum of these 3 tables in each database.
I
Craig A. James [EMAIL PROTECTED] writes:
Is there some way to explain this cost to the optimizer in a permanent
way,
Nope, sorry. One thing you could do in the particular case at hand is
to rejigger the WHERE clause involving the function so that it requires
values from both tables and
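A hedged sketch of that rejiggering (myfunc and the tables here are hypothetical): the point is to make the expensive clause mention columns from both tables so the planner cannot push it down into a single-table scan, and it is evaluated only for rows that survive the join.

```sql
-- Before: the clause mentions only table a, so the planner may apply
-- the expensive myfunc() to every row of a before the join:
SELECT a.*
FROM a JOIN b ON a.id = b.a_id
WHERE myfunc(a.x) > 0;

-- After: a no-op reference to b (b.y - b.y is 0, assuming b.y is a
-- non-null integer) makes the clause require both tables, so
-- myfunc() runs only on joined rows:
SELECT a.*
FROM a JOIN b ON a.id = b.a_id
WHERE myfunc(a.x) > 0 + (b.y - b.y);
```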