Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Can we actually get rid of pg_class entries for temp tables. Maybe
> creating a "temp pg_class" which would be local to each session? Heck,
> it doesn't even have to be an actual table -- it just needs to be
> somewhere from where we can load entries i
Simon Riggs wrote:
> Temp relations still make pg_class entries, don't they? Is that on the
> TODO list to change?
Yeah, and pg_attribute entries as well, which may be more problematic
because there are a lot of them. Did we get rid of pg_attribute entries for
system attributes already?
Can we actually get rid of pg_class entries for temp tables? Maybe
creating a "temp pg_class" which would be local to each session?
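For what it's worth, a quick way to see the catalog traffic being discussed
(the table name is illustrative; counts assume nothing else shares the name):

CREATE TEMP TABLE t_demo (a int, b text);

SELECT count(*) FROM pg_class WHERE relname = 't_demo';   -- 1 row
SELECT count(*) FROM pg_attribute
WHERE attrelid = 't_demo'::regclass;   -- user columns plus the system
                                       -- attributes (attnum < 0)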
Hi, Carlos,
Wouldn't it be better to use INT for 'codcep' in both tables (since
CEP/ZIP numbers are [0-9]{8})? Casting, as Tom Lane suggested, is also a
good alternative, but I still think using int in both columns would be
much better.
Regards,
Cesar
Let's see the query:
SELECT ENDE.* , DEN
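A rough sketch of the type change Cesar describes (table and column names
are from the thread; note that the join's codcep||codlog expression would
then need an explicit cast, and leading zeros are lost in a plain integer):

ALTER TABLE tt_log ALTER COLUMN codcep TYPE integer USING codcep::integer;
ALTER TABLE tt_end ALTER COLUMN codcep TYPE integer USING codcep::integer;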
On Thu, 2007-01-11 at 16:11 -0500, Tom Lane wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > It's not clear to me how this fix will alter the INSERT issue Kim
> > mentions.
>
> I didn't say that it would; we have no information on the INSERT issue,
> so I'm just concentrating on the problem that he did provide info on.
Our pgstats.stat file is 40M for 8.2; on 8.1 it was 33M. Our schema size
hasn't grown *that* much in the two weeks since we upgraded
I'm not sure if this sheds any more light on the situation, but in
scanning down through the process output from truss, it looks like the
first section of output
I wrote:
> What I think we need to do about this is
> (1) fix pgstat_vacuum_tabstats to have non-O(N^2) behavior; I'm thinking
> of using a hash table for the OIDs instead of a linear list. Should be
> a pretty small change; I'll work on it today.
I've applied the attached patch to 8.2 to do the
I wrote:
> (2) Reconsider whether last-vacuum-time should be sent to the collector
> unconditionally.
Actually, now that I look, the collector already contains this logic:
/*
* Don't create either the database or table entry if it doesn't already
* exist. This avoids bloating the s
For 8.1, we did have stats_block_level and stats_row_level on, so that's
not it either :-/ However, I did go on to an alternate database of ours
on the same machine, using the same install, same postmaster - that
holds primarily static relations, and not many of those (16 relations
total). The r
On Thu, Jan 11, 2007 at 04:49:28PM -0300, Alvaro Herrera wrote:
> Tom Lane wrote:
>
> > What I think we need to do about this is
> >
> > (1) fix pgstat_vacuum_tabstats to have non-O(N^2) behavior; I'm thinking
> > of using a hash table for the OIDs instead of a linear list. Should be
> > a pretty small change; I'll work on it today.
Yes, I do have an index on tt_log.codcep.
Indexes I have on both tables:
tt_end
Indexes:
    "pk_end" PRIMARY KEY, btree (filcli, codcli, codfil, numend)
    "ak_end_numdoc" UNIQUE, btree (numdoc)
    "i_fk_end_darc" btree (codarc, tiparc)
    "i_fk_end_dend" btree (tipend)
    "i_fk_end_dfil" btree
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> It's not clear to me how this fix will alter the INSERT issue Kim
> mentions.
I didn't say that it would; we have no information on the INSERT issue,
so I'm just concentrating on the problem that he did provide info on.
(BTW, I suppose the slow-\d issue
On Thu, Jan 11, 2007 at 12:15:50PM +, Simon Riggs wrote:
> On Wed, 2007-01-10 at 16:00 -0500, Steven Flatt wrote:
> > On 1/9/07, Simon Riggs <[EMAIL PROTECTED]> wrote:
> > If you are doing date range partitioning it should be fairly simple to
> > load data into the latest table directly. That was the way I originally
On Thu, 2007-01-11 at 14:45 -0500, Tom Lane wrote:
> Kim <[EMAIL PROTECTED]> writes:
> > We were running on 8.1.1 previous to upgrading to 8.2, and yes, we
> > definitely have a hefty pg_class. The inheritance model is heavily used
> > in our schema (the results of the group by you wanted to see
Tom Lane wrote:
> What I think we need to do about this is
>
> (1) fix pgstat_vacuum_tabstats to have non-O(N^2) behavior; I'm thinking
> of using a hash table for the OIDs instead of a linear list. Should be
> a pretty small change; I'll work on it today.
>
> (2) Reconsider whether last-vacuum
Kim <[EMAIL PROTECTED]> writes:
> We were running on 8.1.1 previous to upgrading to 8.2, and yes, we
> definitely have a hefty pg_class. The inheritance model is heavily used
> in our schema (the results of the group by you wanted to see are down
> below). However, no significant problems were
Hey Tom,
We were running on 8.1.1 previous to upgrading to 8.2, and yes, we
definitely have a hefty pg_class. The inheritance model is heavily used
in our schema (the results of the group by you wanted to see are down
below). However, no significant problems were seen with vacs while we
were
"Carlos H. Reimer" <[EMAIL PROTECTED]> writes:
> I know that the problem with the following SQL is the "LOG.CODCEP =
> ENDE.CODCEP||CODLOG" condition, but what can I
> do to improve the performance?
Seems the problem is that it's not using an index for tt_log. Do you have an
index on tt_log.codcep? If so,
Carlos H. Reimer wrote:
Hi,
I know that the problem with the following SQL is the "LOG.CODCEP =
ENDE.CODCEP||CODLOG" condition, but what can I
do to improve the performance?
I wouldn't say it's the join condition. There is a nested loop join on
500k+ rows.
Is it possible to put an index
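One hedged option for the index idea: an expression index on the
concatenation used in the join (names from the thread; this assumes both
columns are character types, and whether the planner uses it depends on
which table ends up on the inner side of the nested loop):

CREATE INDEX i_end_codcep_codlog ON tt_end ((codcep || codlog));
ANALYZE tt_end;

If the types differ instead, an explicit cast in the join condition, as
suggested elsewhere in the thread, may be what lets the existing
tt_log.codcep index be used.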
Kim <[EMAIL PROTECTED]> writes:
> size of pg_class: 535,226
> number of relations: 108,694
Oh, I shoulda read all the way to the bottom of your email :-(. What
version of PG were you running before? I would think that pretty much
any version of pgstat_vacuum_tabstats would have had a performance
Kim <[EMAIL PROTECTED]> writes:
> Running a vac on an 8.2 client connecting to an 8.2 database (following
> example was for a 14 row static table) - vacuums will sit (for lack of a
> better word) for anywhere from ten to twenty minutes before taking a
> lock out on the table and doing work there
Hi,
I know that the problem with the following SQL is the "LOG.CODCEP =
ENDE.CODCEP||CODLOG" condition, but what can I
do to improve the performance?
Is there a type of index that could help or is there another way to build
this SQL?
Thank you in advance!
explain analyze
SELECT ENDE.* , DEND.DE
Kim wrote:
OS: Solaris 10
write transactions/hr: 1.5 million
size of pg_class: 535,226
number of relations: 108,694
That is a huge pg_class. I remember some discussion recently about
problems with 8.2 and the way it scans pg_class. I also believe it's
fixed in 8.2.1. Are you running that?
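Quick checks along those lines; plain SQL, nothing version-specific assumed:

SELECT version();
SELECT count(*) FROM pg_class;
SELECT relkind, count(*) FROM pg_class GROUP BY relkind ORDER BY 2 DESC;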
Hello all!
Running a vac on an 8.2 client connecting to an 8.2 database (following
example was for a 14 row static table) - vacuums will sit (for lack of a
better word) for anywhere from ten to twenty minutes before taking a
lock out on the table and doing work there. Once the vacuum does
complete
On Thu, 11 Jan 2007 08:18:39 -0600, "Adam Rich" <[EMAIL PROTECTED]> wrote:
>
> Subpartitions are just a way to break (parent) partitions up into
> smaller pieces. Those of course can be moved to other disks
> just like the main partitions.
Ah, didn't know that (I just wondered why I need a s
Each partition can have its own disk, without using subpartitions.
CREATE TABLE th (id INT, name VARCHAR(30), adate DATE)
PARTITION BY LIST(YEAR(adate))
(
  PARTITION p1999 VALUES IN (1995, 1999, 2003)
    DATA DIRECTORY = '/var/appdata/95/data'
    INDEX DIRECTORY = '/var/appdata/95/idx',
  PART
Well - whether or not MySQL's implementation of partitioning has some
deficiency, it sure is a lot easier to set up than PostgreSQL. And I
don't think there is any technical reason that setting up partitioning
on Postgres couldn't be very easy and still be robust.
On Thu, 11 Jan 2007 13:59:20 +01
> On Fri, Jan 05, 2007 at 12:47:08PM +0100, Mikael Carneholm wrote:
> >> Take a look at the set of partitioning functions I wrote shortly after
> >> the 8.1 release:
> >>
> >> http://www.studenter.hb.se/~arch/files/part_functions.sql
> >>
> >> You could probably work something out using those funct
On Wed, 10 Jan 2007 15:30:16 -0600, Scott Marlowe <[EMAIL PROTECTED]> wrote:
[...]
>
> And I don't think the mysql partition supports tablespaces either.
>
MySQL supports distributing partitions over multiple disks via the SUBPARTITION
clause [1].
I leave it to you, whether their syntax is
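For reference, the subpartition-per-disk syntax referred to above looks
roughly like this (adapted from the MySQL manual's example; whether
DATA DIRECTORY is honored depends on the storage engine):

CREATE TABLE ts (id INT, purchased DATE)
PARTITION BY RANGE( YEAR(purchased) )
SUBPARTITION BY HASH( TO_DAYS(purchased) )
(
  PARTITION p0 VALUES LESS THAN (1990)
  (
    SUBPARTITION s0 DATA DIRECTORY = '/disk0/data' INDEX DIRECTORY = '/disk0/idx',
    SUBPARTITION s1 DATA DIRECTORY = '/disk1/data' INDEX DIRECTORY = '/disk1/idx'
  ),
  PARTITION p1 VALUES LESS THAN MAXVALUE
  (
    SUBPARTITION s2,
    SUBPARTITION s3
  )
);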
On Wed, 2007-01-10 at 16:00 -0500, Steven Flatt wrote:
> On 1/9/07, Simon Riggs <[EMAIL PROTECTED]> wrote:
> If you are doing date range partitioning it should be fairly simple to
> load data into the latest table directly. That was the way I originally
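A minimal sketch of what "load into the latest table directly" can look
like with inheritance-based date partitioning (table names and the file
path are illustrative):

CREATE TABLE measurement (logdate date NOT NULL, value int);

CREATE TABLE measurement_2007_01 (
  CHECK (logdate >= DATE '2007-01-01' AND logdate < DATE '2007-02-01')
) INHERITS (measurement);

-- bulk-load straight into the newest child, bypassing any rule or trigger
-- that would otherwise redirect INSERTs on the parent
COPY measurement_2007_01 FROM '/tmp/measurement_2007_01.csv' WITH CSV;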
Arnau wrote:
Hi all,
I've got a question about how to create an index and a primary key. Let's
say I have the following table:
The country_id column can have maybe 100 - 250 different values.
The customer_id column can have as many as several hundred values (less
than 1000).
The telephone
Hi all,
I've got a question about how to create an index and a primary key. Let's
say I have the following table:
CREATE TABLE blacklist
(
  telephone      VARCHAR(15),
  customer_id    INT4
                 CONSTRAINT fk_blacklist_customerid REFERENCES
                 customers( customer_id )
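A hedged sketch of one way to finish the definition above, assuming
(telephone, customer_id) uniquely identifies a blacklist entry and that
lookups usually lead with the telephone number; constraint and index names
are illustrative:

ALTER TABLE blacklist
  ADD CONSTRAINT pk_blacklist PRIMARY KEY (telephone, customer_id);

-- only worth adding if there are frequent per-customer scans; the composite
-- primary key above already serves lookups that lead with telephone
CREATE INDEX i_blacklist_customer ON blacklist (customer_id);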