On Thu, 10 Mar 2005 10:24:46 +1000, David Brown [EMAIL PROTECTED]
wrote:
What concerns me is that this all depends on the correlation factor, and
I suspect that the planner is not giving enough weight to this.
The planner does the right thing for correlations very close to 1 (and
-1) and for
On Mon, 14 Mar 2005 21:23:29 -0500, Tom Lane [EMAIL PROTECTED] wrote:
I think that the "reduce random_page_cost" mantra
is not an indication that that parameter is wrong, but that the
cost models it feeds into need more work.
One of these areas is the cost interpolation depending on correlation.
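The cost interpolation Tom refers to can be sketched as follows. PostgreSQL's cost_index() interpolates between the best-case I/O cost (index order fully correlated with heap order) and the worst-case cost (no correlation) using the *square* of the index correlation as the weight, so the benefit falls off quickly as correlation moves away from +/-1. The cost numbers below are made up for illustration.

```python
def index_io_cost(min_io: float, max_io: float, correlation: float) -> float:
    """Interpolate between best-case (fully correlated) and worst-case
    (uncorrelated) index I/O cost, as PostgreSQL's cost_index() does.
    The weight is correlation squared, so a correlation of 0.5 only
    buys a quarter of the possible savings."""
    csq = correlation * correlation
    return max_io + csq * (min_io - max_io)

# Fully correlated index: pay only the cheap near-sequential cost.
print(index_io_cost(100.0, 10000.0, 1.0))   # 100.0
# Zero correlation: pay the full random-I/O cost.
print(index_io_cost(100.0, 10000.0, 0.0))   # 10000.0
# Halfway correlation recovers only a quarter of the savings.
print(index_io_cost(100.0, 10000.0, 0.5))   # 7525.0
```

This quadratic weighting is one reason mid-range correlations are costed so pessimistically, which feeds the discussion above.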
On Wed, 16 Mar 2005 22:19:13 -0500, Tom Lane [EMAIL PROTECTED] wrote:
calculate the correlation explicitly for each index
Maybe it's time to revisit an old proposal that failed to catch
anybody's attention during the 7.4 beta period:
http://archives.postgresql.org/pgsql-hackers/2003-08/msg00937.php
I'm not sure I'd store index correlation in a separate table today.
You've invented something better for functional
Subject: Melding
Hello all.
I have a couple of tables with a couple of hundred million records in
them. The tables contain a timestamp column.
I am almost always interested in getting data from a specific day or
month. Each day contains approx. 400,000 entries.
When I do such queries
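A common list answer to this kind of day-filtering question is to write the filter as a half-open range on the raw timestamp column, so that a plain B-tree index on the column can be used (wrapping the column in date_trunc('day', ts) = ... hides it from the index). The sketch below builds such a predicate; the table and column names (logtable, ts) are hypothetical.

```python
from datetime import date, timedelta

def day_range_predicate(d: date):
    """Return (sql, params) for an index-friendly one-day filter:
    ts >= day AND ts < day + 1, a half-open range on the bare column.
    Table/column names are illustrative only."""
    nxt = d + timedelta(days=1)
    sql = "SELECT * FROM logtable WHERE ts >= %s AND ts < %s"
    return sql, (d.isoformat(), nxt.isoformat())

sql, params = day_range_predicate(date(2005, 3, 17))
print(sql)     # SELECT * FROM logtable WHERE ts >= %s AND ts < %s
print(params)  # ('2005-03-17', '2005-03-18')
```

The same shape works for a month by stepping to the first day of the next month instead of the next day.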
Greetings everyone,
I am about to migrate to Postgres from MySQL. My DB isn't enormous
(< 1gb), consists mostly of just text, but is accessed quite heavily.
Because size isn't a huge issue, but performance is, I am willing to
normalize as necessary.
Currently I have a table Entries containing
On Thu, Mar 17, 2005 at 10:56:10AM -0500, Alexander Ranaldi wrote:
Most of my queries return rows based on UserID, and also only if
Private is FALSE. Would it be in the interest of best performance to
split this table into two tables: EntriesPrivate,
EntriesNotPrivate and remove the Private
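An alternative the list often suggests for this pattern, instead of splitting into two tables, is a partial index that covers only the rows most queries actually touch. The sketch below just assembles the DDL; the identifiers (entries, userid, private) are hypothetical stand-ins for the poster's schema.

```python
def partial_index_ddl(table: str, index: str) -> str:
    """Build a CREATE INDEX statement with a partial-index predicate.
    Queries filtering on userid AND private = FALSE can use this
    smaller index; rows with private = TRUE are simply not indexed.
    Identifiers are hypothetical."""
    return (f"CREATE INDEX {index} ON {table} (userid) "
            "WHERE private = FALSE")

print(partial_index_ddl("entries", "entries_public_user_idx"))
```

This keeps a single table (no duplicated schema or moved rows) while giving the common query path an index sized to the public subset.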
Alexander Ranaldi wrote:
Greetings everyone,
I am about to migrate to Postgres from MySQL. My DB isn't enormous
(< 1gb), consists mostly of just text, but is accessed quite heavily.
Because size isn't a huge issue, but performance is, I am willing to
normalize as necessary.
Currently I have a table
Hi,
Thanks for your reply.
I ran this test with no users connected, after a vacuum, with all
indexes recreated and the tables analyzed.
Well, produt.codpro is SERIAL
And movest.codpro is NUMBER(8)
Thanks
Rodrigo
-----Original Message-----
From: Michael Fuhr [mailto:[EMAIL PROTECTED]
Sent
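The type mismatch Rodrigo reports (SERIAL, i.e. integer, on one side and a numeric type on the other) is a classic reason a join fails to use an index on older PostgreSQL versions: the cross-type comparison doesn't match the index's operator. The usual fix is an explicit cast so both sides have the same type. The sketch below only assembles such a query; the table and column names echo the thread (produt.codpro, movest.codpro) but the query shape is illustrative.

```python
def join_with_cast(left_col: str, right_col: str) -> str:
    """Build a join that casts the numeric side down to integer so
    the comparison is integer = integer and can use produt's index.
    (Casting the integer side up to numeric would instead favor an
    index on movest.)  Names are illustrative."""
    return ("SELECT * FROM produt JOIN movest "
            f"ON produt.{left_col} = movest.{right_col}::integer")

print(join_with_cast("codpro", "codpro"))
```

The cleaner long-term fix is to give both columns the same declared type so no cast is needed at all.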
Ales,
are there any utilities for tuning PG database performance, e.g. to
see the top 10 SQL commands and so on?
Look up PQA on www.pgFoundry.org
--
Josh Berkus
Aglio Database Solutions
San Francisco
Josh Berkus josh@agliodbs.com writes:
Yes it is. I ran experiments back in the late 90s to derive it.
Check the archives.
Hm ... which list?
-hackers, no doubt. -performance didn't exist then.
regards, tom lane
Manfred Koizar [EMAIL PROTECTED] writes:
On Wed, 16 Mar 2005 22:19:13 -0500, Tom Lane [EMAIL PROTECTED] wrote:
calculate the correlation explicitly for each index
May be it's time to revisit an old proposal that has failed to catch
anybody's attention during the 7.4 beta period:
On Thu, Mar 17, 2005 at 09:54:29AM -0800, Josh Berkus wrote:
Yes it is. I ran experiments back in the late 90s to derive it.
Check the archives.
Hm ... which list?
These look like relevant threads:
http://archives.postgresql.org/pgsql-hackers/2000-01/msg00910.php