Hi all,
I've got a question about how to create an index and a primary key. Let's
say I have the following table:
CREATE TABLE blacklist
(
telephone VARCHAR(15),
customer_id INT4
CONSTRAINT fk_blacklist_customerid REFERENCES
customers(
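A minimal sketch of how such a table could declare both a primary key and a supporting index (assuming the pair telephone + customer_id uniquely identifies a row; the FK target column `customer_id` in `customers` is a guess, since the snippet is cut off there):

```sql
CREATE TABLE blacklist
(
    telephone    VARCHAR(15),
    customer_id  INT4
        CONSTRAINT fk_blacklist_customerid REFERENCES customers (customer_id),
    CONSTRAINT pk_blacklist PRIMARY KEY (telephone, customer_id)
);

-- The primary key already creates a unique index on (telephone, customer_id),
-- so an extra index is only useful for lookups that don't lead with telephone:
CREATE INDEX i_blacklist_customer_id ON blacklist (customer_id);
```

Note that a separate index on the leading primary-key column would be redundant; the PK's implicit index already covers it.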
Arnau wrote:
Hi all,
I've got a question about how to create an index and a primary key. Let's
say I have the following table:
The country_id column can have maybe 100 - 250 different values.
The customer_id column can have as many as several hundred values (fewer
than 1000).
The
On Wed, 2007-01-10 at 16:00 -0500, Steven Flatt wrote:
On 1/9/07, Simon Riggs [EMAIL PROTECTED] wrote:
If you are doing date range partitioning it should be fairly
simple to
load data into the latest table directly. That was the way I
originally
On Wed, 10 Jan 2007 15:30:16 -0600, Scott Marlowe [EMAIL PROTECTED] wrote:
[...]
And I don't think the mysql partition supports tablespaces either.
MySQL supports distributing partitions over multiple disks via the SUBPARTITION
clause [1].
I leave it to you whether their syntax is
On Fri, Jan 05, 2007 at 12:47:08PM +0100, Mikael Carneholm wrote:
Take a look at the set of partitioning functions I wrote shortly
after
the 8.1 release:
http://www.studenter.hb.se/~arch/files/part_functions.sql
You could probably work something out using those functions (as-is,
or
Well - whether or not MySQL's implementation of partitioning has some
deficiency, it sure is a lot easier to set up than PostgreSQL. And I
don't think there is any technical reason that setting up partitioning
on Postgres couldn't be very easy and still be robust.
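For context, the "harder to set up" side being compared here is the 8.x inheritance approach; a minimal sketch of one date-range partition (table, column, and range are hypothetical):

```sql
-- Parent table: queries go against this, data lives in the children.
CREATE TABLE measurement (logdate DATE, value INT4);

-- One child per range; the CHECK constraint enables constraint exclusion.
CREATE TABLE measurement_2007_01
    (CHECK (logdate >= DATE '2007-01-01' AND logdate < DATE '2007-02-01'))
    INHERITS (measurement);

-- Routing inserts to the right child takes a rule (or a trigger) per partition:
CREATE RULE measurement_insert_2007_01 AS
    ON INSERT TO measurement
    WHERE (logdate >= DATE '2007-01-01' AND logdate < DATE '2007-02-01')
    DO INSTEAD INSERT INTO measurement_2007_01 VALUES (NEW.*);
```

Every new range means repeating the child-table, constraint, and rule boilerplate, which is exactly the setup cost being contrasted with MySQL's single PARTITION BY clause.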
On Thu, 11 Jan 2007 13:59:20
Each partition can have its own disk, without using subpartitions.
CREATE TABLE th (id INT, name VARCHAR(30), adate DATE)
PARTITION BY LIST(YEAR(adate))
(
PARTITION p1999 VALUES IN (1995, 1999, 2003)
DATA DIRECTORY = '/var/appdata/95/data'
INDEX DIRECTORY = '/var/appdata/95/idx',
On Thu, 11 Jan 2007 08:18:39 -0600, Adam Rich [EMAIL PROTECTED] wrote:
Subpartitions are just a way to break (parent) partitions up into
smaller pieces. Those of course can be moved to other disks
just like the main partitions.
Ah, didn't know that (I just wondered why I need a
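A sketch of the subpartition-per-disk layout Adam describes, following the MySQL 5.1 syntax (table, column, and directory names are hypothetical):

```sql
CREATE TABLE ts (id INT, purchased DATE)
    PARTITION BY RANGE (YEAR(purchased))
    SUBPARTITION BY HASH (TO_DAYS(purchased))
    (
        PARTITION p0 VALUES LESS THAN (2000)
        (
            -- Each subpartition of p0 can live on its own disk:
            SUBPARTITION s0
                DATA DIRECTORY  = '/disk0/data'
                INDEX DIRECTORY = '/disk0/idx',
            SUBPARTITION s1
                DATA DIRECTORY  = '/disk1/data'
                INDEX DIRECTORY = '/disk1/idx'
        )
    );
```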
Hello all!
Running a vac on an 8.2 client connecting to an 8.2 database (following
example was for a 14 row static table) - vacuums will sit (for lack of a
better word) for anywhere from ten to twenty minutes before taking a
lock out on the table and doing work there. Once the vacuum does
Kim wrote:
snip
OS: Solaris 10
write transactions/hr: 1.5 million
size of pg_class: 535,226
number of relations: 108,694
That is a huge pg_class. I remember some discussion recently about
problems with 8.2 and the way it scans pg_class. I also believe it's
fixed in 8.2.1. Are you running
Hi,
I know that the problem with the following SQL is the LOG.CODCEP =
ENDE.CODCEP||CODLOG condition, but what can I
do to improve the performance?
Is there a type of index that could help or is there another way to build
this SQL?
Thank you in advance!
explain analyze
SELECT ENDE.* ,
Kim [EMAIL PROTECTED] writes:
Running a vac on an 8.2 client connecting to an 8.2 database (following
example was for a 14 row static table) - vacuums will sit (for lack of a
better word) for anywhere from ten to twenty minutes before taking a
lock out on the table and doing work there.
Kim [EMAIL PROTECTED] writes:
size of pg_class: 535,226
number of relations: 108,694
Oh, I shoulda read all the way to the bottom of your email :-(. What
version of PG were you running before? I would think that pretty much
any version of pgstat_vacuum_tabstats would have had a performance
Carlos H. Reimer wrote:
Hi,
I know that the problem with the following SQL is the LOG.CODCEP =
ENDE.CODCEP||CODLOG condition, but what can I
do to improve the performance?
I wouldn't say it's the join condition. There is a nested loop join on
500k+ rows.
Is it possible to put an index
Carlos H. Reimer [EMAIL PROTECTED] writes:
I know that the problem with the following SQL is the LOG.CODCEP =
ENDE.CODCEP||CODLOG condition, but what can I
do to improve the performance?
Seems the problem is not using an index for tt_log. Do you have an
index on tt_log.codcep? If so, maybe
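If the index on tt_log.codcep exists but isn't used, a likely culprit is a type mismatch: the || concatenation yields text, which may not match the indexed column's declared type. A hedged sketch of the kind of cast being suggested (the varchar target type is an assumption about the schema):

```sql
EXPLAIN ANALYZE
SELECT ende.*
FROM   tt_end ende
JOIN   tt_log log
       -- Casting the concatenation to the indexed column's type can let the
       -- planner use the btree index on tt_log.codcep for each probe:
  ON   log.codcep = (ende.codcep || ende.codlog)::varchar;
```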
Hey Tom,
We were running on 8.1.1 previous to upgrading to 8.2, and yes, we
definitely have a hefty pg_class. The inheritance model is heavily used
in our schema (the results of the group by you wanted to see are down
below). However, no significant problems were seen with vacs while we
Kim [EMAIL PROTECTED] writes:
We were running on 8.1.1 previous to upgrading to 8.2, and yes, we
definitely have a hefty pg_class. The inheritance model is heavily used
in our schema (the results of the group by you wanted to see are down
below). However, no significant problems were seen
Tom Lane wrote:
What I think we need to do about this is
(1) fix pgstat_vacuum_tabstats to have non-O(N^2) behavior; I'm thinking
of using a hash table for the OIDs instead of a linear list. Should be
a pretty small change; I'll work on it today.
(2) Reconsider whether last-vacuum-time
On Thu, 2007-01-11 at 14:45 -0500, Tom Lane wrote:
Kim [EMAIL PROTECTED] writes:
We were running on 8.1.1 previous to upgrading to 8.2, and yes, we
definitely have a hefty pg_class. The inheritance model is heavily used
in our schema (the results of the group by you wanted to see are
On Thu, Jan 11, 2007 at 12:15:50PM +, Simon Riggs wrote:
On Wed, 2007-01-10 at 16:00 -0500, Steven Flatt wrote:
On 1/9/07, Simon Riggs [EMAIL PROTECTED] wrote:
If you are doing date range partitioning it should be fairly
simple to
load data into the latest
Simon Riggs [EMAIL PROTECTED] writes:
It's not clear to me how this fix will alter the INSERT issue Kim
mentions.
I didn't say that it would; we have no information on the INSERT issue,
so I'm just concentrating on the problem that he did provide info on.
(BTW, I suppose the slow-\d issue is
Yes, I do have an index on tt_log.codcep.
Indexes I've got on both tables:
tt_end
Indexes:
pk_end PRIMARY KEY, btree (filcli, codcli, codfil, numend)
ak_end_numdoc UNIQUE, btree (numdoc)
i_fk_end_darc btree (codarc, tiparc)
i_fk_end_dend btree (tipend)
i_fk_end_dfil btree
I wrote:
(2) Reconsider whether last-vacuum-time should be sent to the collector
unconditionally.
Actually, now that I look, the collector already contains this logic:
/*
 * Don't create either the database or table entry if it doesn't already
 * exist. This avoids bloating the
I wrote:
What I think we need to do about this is
(1) fix pgstat_vacuum_tabstats to have non-O(N^2) behavior; I'm thinking
of using a hash table for the OIDs instead of a linear list. Should be
a pretty small change; I'll work on it today.
I've applied the attached patch to 8.2 to do the
Our pgstats.stat file is 40M on 8.2; on 8.1 it was 33M. Our schema size
hasn't grown *that* much in the two weeks since we upgraded
I'm not sure if this sheds any more light on the situation, but in
scanning down through the process output from truss, it looks like the
first section of
On Thu, 2007-01-11 at 16:11 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
It's not clear to me how this fix will alter the INSERT issue Kim
mentions.
I didn't say that it would; we have no information on the INSERT issue,
so I'm just concentrating on the problem that he did
Hi, Carlos,
Wouldn't it be better if you used INT in 'codcep' in both tables (as
CEP/ZIP numbers are [0-9]{8})? Casting as Tom Lane suggested is also a
good alternative, yet I think it'd be much better if you used int in
both columns.
Regards,
Cesar
Let's see the query:
SELECT ENDE.* ,
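The column-type change Cesar proposes could look like this (a sketch; table names are taken from the thread, and the USING casts assume every existing value is purely numeric):

```sql
-- Convert both codcep columns to int so the join compares matching types
-- without an expression or cast on either side:
ALTER TABLE tt_end ALTER COLUMN codcep TYPE INT4 USING codcep::INT4;
ALTER TABLE tt_log ALTER COLUMN codcep TYPE INT4 USING codcep::INT4;
```

If tt_log.codcep actually stores the concatenation codcep||codlog, the join condition would need rethinking as well, so this is only a sketch of the type change itself.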
Simon Riggs wrote:
Temp relations still make pg_class entries, don't they? Is that on the
TODO list to change?
Yeah, and pg_attribute entries as well, which may be more problematic
because there are a lot of them. Did we get rid of pg_attribute entries
for system attributes already?
Can we actually
Alvaro Herrera [EMAIL PROTECTED] writes:
Can we actually get rid of pg_class entries for temp tables? Maybe
creating a temp pg_class which would be local to each session? Heck,
it doesn't even have to be an actual table -- it just needs to be
somewhere from where we can load entries into the