Decibel! wrote:
On Thu, Sep 06, 2007 at 11:26:46AM +0200, Willo van der Merwe wrote:
Richard Huxton wrote:
Willo van der Merwe wrote:
Hi guys,
I have the rare opportunity to spec the hardware for a new database
server. It's going to replace an older one, driving a social
Get yourself the ability to benchmark your application. This is
invaluable^W a requirement for any kind of performance tuning.
I'm pretty happy with the performance of the database at this stage.
Correct me if I'm wrong, but AFAIK a load of 3.5 on a quad is not
overloading it. It also
Decibel! wrote:
On Tue, Sep 11, 2007 at 09:49:37AM +0200, Ruben Rubio wrote:
[EMAIL PROTECTED] wrote:
Last time I had this problem I solved it by stopping the website, restarting
the database, vacuuming it, and running the website again. But I guess this is
El-Lotso wrote:
I'm on the verge of giving up... the schema seems simple and yet there are
so many issues with it. Perhaps it's the layout of the data, I don't
know. But based on the ordering/normalisation of the data and the one-to-many
relationship of some tables, this is giving the planner a
El-Lotso [EMAIL PROTECTED] writes:
I'm really at my wits end here.
Try to merge the multiple join keys into one, somehow. I'm not sure why
the planner is overestimating the selectivity of the combined join
conditions, but that's basically where your problem is coming from.
A truly brute-force
Jean-David Beyer wrote:
Gregory Stark wrote (in part):
The extra spindles speed up sequential i/o too, so the ratio between sequential
and random with prefetch would still be about 4.0. But the ratio between
sequential and random without prefetch would be even higher.
I never
On 9/12/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Decibel! wrote:
On Tue, Sep 11, 2007 at 09:49:37AM +0200, Ruben Rubio wrote:
[EMAIL PROTECTED] wrote:
Last time I had this problem I solved it by stopping the website, restarting
Scott Marlowe wrote:
I'm getting more and more motivated to rewrite the vacuum docs. I
think a rewrite from the ground up might be best... I keep seeing
people doing vacuum full on this list and I'm thinking it's as much
because of the way the docs represent vacuum full as anything. Is
that
On 9/12/07, Scott Marlowe [EMAIL PROTECTED] wrote:
On 9/12/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Try a REINDEX. VACUUM FULL is especially hard on the indexes, and it's
easy for them to seriously bloat.
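The REINDEX advice above can be sketched concretely. This is a minimal, hypothetical example (the table name "orders" is invented for illustration); note that REINDEX TABLE takes an exclusive lock on the table while it runs:

```sql
-- Check index sizes before rebuilding, to confirm bloat
-- (pg_relation_size is available in 8.1 and later):
SELECT relname, pg_relation_size(oid) AS bytes
FROM pg_class
WHERE relname LIKE 'orders%';

-- Rebuild every index on the table in one statement:
REINDEX TABLE orders;
```

Comparing the size query's output before and after the REINDEX shows how much space the bloated indexes were wasting.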
Reindex is done every day after VACUUM FULL VERBOSE ANALYZE. I also save
On 9/12/07, Mikko Partio [EMAIL PROTECTED] wrote:
On 9/12/07, Scott Marlowe [EMAIL PROTECTED] wrote:
On 9/12/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Try a REINDEX. VACUUM FULL is especially hard on the indexes, and it's
easy for them to seriously bloat.
Reindex is done
On Sep 12, 2007, at 9:07 PM, Scott Marlowe wrote:
On 9/12/07, Mikko Partio [EMAIL PROTECTED] wrote:
…
Aren't you mixing up REINDEX and CLUSTER?
…
Either one does what a vacuum full did / does, but generally does
it better.
On topic of REINDEX / VACUUM FULL versus a CLUSTER / VACUUM
On 9/12/07, Frank Schoep [EMAIL PROTECTED] wrote:
On Sep 12, 2007, at 9:07 PM, Scott Marlowe wrote:
On 9/12/07, Mikko Partio [EMAIL PROTECTED] wrote:
…
Aren't you mixing up REINDEX and CLUSTER?
…
Either one does what a vacuum full did / does, but generally does
it better.
On topic
Scott Marlowe wrote:
Aren't you mixing up REINDEX and CLUSTER?
I don't think so. reindex (which runs on tables and indexes, so the
name is a bit confusing, I admit) was originally a repair
operation that rewrote the whole relation and wasn't completely
transaction safe (way
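The two maintenance strategies being compared in this thread can be sketched side by side. The table and index names here are hypothetical:

```sql
-- Option 1: VACUUM FULL compacts the heap but tends to bloat the
-- indexes, so a REINDEX is usually wanted afterwards:
VACUUM FULL t;
REINDEX TABLE t;

-- Option 2: CLUSTER rewrites both the heap and the indexes in one
-- pass, ordering the heap rows by the chosen index:
CLUSTER some_index ON t;
```

Both approaches take exclusive locks and need enough free disk space for a full copy of the data being rewritten, so neither is free; CLUSTER simply gets the same end state in fewer steps.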
My PG server came to a screeching halt yesterday. Looking at top I saw a very
large number of startup waiting tasks. A pg_dump was running and one of my
scripts had issued a CREATE DATABASE command. It looks like the CREATE DATABASE
was exclusive but was having to wait for the pg_dump to
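A lock pile-up like this can be confirmed from the system catalogs. The sketch below lists sessions stuck waiting on ungranted locks; the column names follow the 8.x-era catalogs (procpid, current_query):

```sql
-- Which backends are waiting, and on what kind of lock:
SELECT a.procpid, a.current_query, l.locktype, l.granted
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;
```

Running this during the hang would show the CREATE DATABASE backend with an ungranted lock, pointing at the pg_dump session as the blocker.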
On Sep 12, 2007, at 2:19 PM, Frank Schoep wrote:
On Sep 12, 2007, at 9:07 PM, Scott Marlowe wrote:
On 9/12/07, Mikko Partio [EMAIL PROTECTED] wrote:
…
Aren't you mixing up REINDEX and CLUSTER?
…
Either one does what a vacuum full did / does, but generally does
it better.
On topic of
I'm designing a system that will be doing over a million inserts/deletes
on a single table every hour. Rather than using a single table, it is
possible for me to partition the data into multiple tables if I wanted
to, which would be nice because I can just truncate them when I don't
need
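The partition-and-truncate idea can be sketched with the inheritance-based partitioning available at the time; all table names here are invented for illustration:

```sql
-- Parent table plus one child per hour, attached via inheritance:
CREATE TABLE events (ts timestamptz NOT NULL, payload text);
CREATE TABLE events_h00 () INHERITS (events);
CREATE TABLE events_h01 () INHERITS (events);

-- Retiring an hour of data is then a cheap TRUNCATE instead of a
-- million-row DELETE followed by a VACUUM:
TRUNCATE events_h00;
```

TRUNCATE simply discards the child table's files, so it avoids both the per-row delete cost and the dead-tuple cleanup that makes mass DELETEs expensive.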
On 9/12/07, Matt Chambers [EMAIL PROTECTED] wrote:
I'm designing a system that will be doing over a million inserts/deletes on
a single table every hour. Rather than using a single table, it is possible
for me to partition the data into multiple tables if I wanted to, which
would be nice
Dan Harris [EMAIL PROTECTED] writes:
My PG server came to a screeching halt yesterday. Looking at top I saw a very
large number of startup waiting tasks. A pg_dump was running and one of my
scripts had issued a CREATE DATABASE command. It looks like the CREATE DATABASE
was exclusive
On Wed, 12 Sep 2007, Scott Marlowe wrote:
I'm getting more and more motivated to rewrite the vacuum docs. I think
a rewrite from the ground up might be best... I keep seeing people
doing vacuum full on this list and I'm thinking it's as much because of
the way the docs represent vacuum full