So, I'd like to have my cake and eat it too... :-)
I'd like to have my indexes built as rows are inserted into the
partition, to help with the drill-down...
-Original Message-
From: Bucky Jordan [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 26, 2006 5:26 PM
To: Marc Morin; Tom
Our application has
a number of inserters posting rows of network statistics into a database.
This is occurring continuously. The following is an example of a stats table
(simplified, but it maintains the key concepts).
CREATE TABLE stats
(
    logtime timestamptz,
    key     int,
    stat1   bigint,
    stat2
Would it be possible to change the primary key to
(logtime, key)? This could help keep the working window small.
No, the application accessing the data wants all the rows between start
and end time for a particular key value.
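For that access pattern (all rows between a start and end time for one key), a composite index leading with the key column lets each query become a single index range scan. This is a sketch, not from the thread; the index name and literal bounds are made up:

```sql
-- Hypothetical composite index: equality on key first, range on logtime second.
CREATE INDEX stats_key_logtime_idx ON stats (key, logtime);

-- The drill-down query then scans only its own slice of the index:
SELECT *
FROM stats
WHERE key = 42
  AND logtime >= '2006-09-25 00:00:00+00'
  AND logtime <  '2006-09-26 00:00:00+00';
```

Putting key first matters here: an index on (logtime, key) alone would force a scan of the whole time range for all keys.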
Secondly, the real working set is smaller, as the rows are
The bottom line here is likely to be you need more RAM :-(
Yup. Just trying to get a handle on what I can do if I need more than
16G of RAM... That's as much as I can put on the installed base of
servers, 100s of them.
I wonder whether there is a way to use table partitioning to
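Inheritance-based partitioning with per-partition CHECK constraints was the usual approach in the 8.x era, with constraint exclusion pruning partitions outside the queried time range. A sketch, with illustrative names and a one-day range that the thread does not confirm:

```sql
-- Parent table holds no rows; each child carries one day of stats.
CREATE TABLE stats_parent (
    logtime timestamptz,
    key     int,
    stat1   bigint
);

CREATE TABLE stats_20060926 (
    CHECK (logtime >= '2006-09-26' AND logtime < '2006-09-27')
) INHERITS (stats_parent);

-- Index each child at creation time, so the index is built while the
-- partition is still small and its pages are hot in cache.
CREATE INDEX stats_20060926_key_logtime_idx
    ON stats_20060926 (key, logtime);
```

With constraint_exclusion enabled, a query bounded on logtime touches only the matching children.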
Wondering if
UPDATE firma1.rid SET toode = NULL
WHERE toode IS NOT NULL
  AND NOT EXISTS (SELECT 1 FROM firma1.toode WHERE toode = rid.toode);
would be faster... The problem appears to be a seqscan inside a seqscan... No?
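If firma1.toode has no index on its toode column, each NOT EXISTS probe is itself a seqscan, which matches the "seqscan of seqscan" symptom. An index turns every probe into a cheap lookup. This assumes toode is the join column, which the fragment suggests but does not confirm:

```sql
-- Hypothetical supporting index for the anti-join probe.
CREATE INDEX toode_toode_idx ON firma1.toode (toode);

-- Refresh statistics so the planner notices the new index.
ANALYZE firma1.toode;
```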
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On
Explain analyze would be nice ;-)
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Andrus
Sent: Tuesday, April 04, 2006 3:37 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Query runs too long for indexed tables
I have relatively
We have large tables that hold statistics based on time. They are of the
form.
CREATE TABLE stats (
    id      serial primary key,
    logtime timestamptz,
    d1      int,
    s1      bigint
);
CREATE INDEX idx ON stats (logtime);
Some of these tables have new data inserted at a rate of 500,000+ rows /
Sent: Friday, March 10, 2006 1:31 PM
To: Marc Morin
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Trouble managing planner for
timestamptz columns
Marc Morin [EMAIL PROTECTED] writes:
We tend to analyze these tables every day or so, and this doesn't
always prove to be sufficient.
We've done a lot of testing on large DB's with a lot of inserts and
have a few comments.
The updates are treated as a large insert, as we all know, from pg's
point of view.
We've run into 2 classes of problems: excessive WAL checkpoints and the
effects of low correlation.
The WAL log writes full 8K
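To space out excessive checkpoints on an insert-heavy 8.x box, the usual knobs were the checkpoint segment count and timeout. The values below are illustrative only, not taken from the thread, and would need tuning to the actual write volume:

```
# postgresql.conf (PostgreSQL 8.x era settings)
checkpoint_segments = 64     # default 3; more WAL segments = fewer checkpoints
checkpoint_timeout  = 900    # seconds between forced checkpoints (default 300)
wal_buffers         = 128    # 8KB pages buffered before writing to the WAL
```

Fewer, larger checkpoints also reduce the full-page writes that follow each checkpoint, which is exactly the 8K-write overhead described above.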
To: Marc Morin
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] partitioning and locking problems
Hi, Marc,
Marc Morin wrote:
1- long running report is running on view
2- continuous inserts into the view, routed into a table via a rule
3- truncate or rule change occurs, taking
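The setup in step 2 typically looks like the following. The view and table names are illustrative; the real ones are not in the excerpt:

```sql
-- Inserts aimed at the view are redirected into the current partition table.
CREATE RULE stats_insert AS
    ON INSERT TO stats_view
    DO INSTEAD
    INSERT INTO stats_current VALUES (NEW.logtime, NEW.key, NEW.stat1);
```

Changing this rule to point at a new partition (step 3) requires an exclusive lock on the view, which is where it collides with the long-running report of step 1.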
[mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 01, 2006 12:50 AM
To: Marc Morin
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] partitioning and locking problems
Marc Morin [EMAIL PROTECTED] writes:
Would like to understand the implications of changing postgres'
code
We have a large database system designed around partitioning. Our
application is characterized with
- terabytes of data
- billions of rows in dozens of base tables (and 100s of partitions)
- 24x7 insert load of new data that cannot be stopped, data is time
sensitive.
- periodic reports that can
We have a large DB
with partitioned tables in postgres. We have had trouble with an
ORDER/LIMIT type query. The order and limit are not pushed down to the
sub-tables
CREATE TABLE base (
    foo int
);
CREATE TABLE bar_0 (
    extra int
) INHERITS (base);
ALTER TABLE bar ADD PRIMARY KEY
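Until the planner pushes ORDER/LIMIT into the children itself, the rewrite can be done by hand, limiting inside each partition and once more over the combined candidates. A sketch; bar_1 is an assumed sibling partition not shown in the post:

```sql
SELECT * FROM (
    (SELECT * FROM bar_0 ORDER BY foo LIMIT 10)
    UNION ALL
    (SELECT * FROM bar_1 ORDER BY foo LIMIT 10)
) AS candidates
ORDER BY foo
LIMIT 10;
```

Each inner branch can use a per-partition index on foo and stops after 10 rows, so the outer sort sees only (partitions x 10) candidates instead of every row.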
I have run into this type of query problem as well. I solved it in my
application by the following type of query.
SELECT tlid
FROM completechain AS o
WHERE NOT EXISTS (
    SELECT 1
    FROM completechain
    WHERE tlid = o.tlid AND ogc_fid != o.ogc_fid
);
Assumes of course that you
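The anti-join above does one probe into completechain per outer row, so its cost hinges on that probe being an index lookup rather than a scan. A supporting index, with column roles assumed from the query itself:

```sql
-- Hypothetical index: find any other row sharing the same tlid quickly.
CREATE INDEX completechain_tlid_ogcfid_idx
    ON completechain (tlid, ogc_fid);
```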