Carlo Stonebanks wrote:
We urgently need a major performance improvement. We are running
PostgreSQL 8.1.4 on a Windows 2003 x64 Server, on a dual-processor,
dual-core 3.2 GHz Xeon box with 4 GB RAM and a RAID (sorry, I don't
know what type) disc subsystem. Sorry about the long intro, but
You have two seq scans on the albumjoin table. First, you make a simple join:
...and albumjoin.album = public.album.id ...
which generates the first:
- Seq Scan on albumjoin (cost=0.00..88642.18 rows=5107318 width=8)
and then you group values from the same table, counting them with
... (select album from
On Thu, 2006-09-28 at 12:44 -0400, Carlo Stonebanks wrote:
At this early stage in the project, we are initializing our portal's
database with millions of rows of imported data in over 50 different
flattened tables; each table's structure is unique to the data provider.
This requires a
On Thu, 2006-09-28 at 21:41 +0200, Cedric Boudin wrote:
I would like to share with you a side effect that I discovered today on
our postgresql 8.1 server.
We've been running this instance with PITR for two months now without any
problems.
The WALs are copied to a remote machine with the
On Thu, 2006-09-28 at 15:18 -0700, Ben wrote:
distinct public.album.id
group by public.album.id
You can remove the distinct clause for starters...
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
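Simon's point in miniature: when every selected column already appears in the GROUP BY, DISTINCT is redundant and only adds work. A hypothetical sketch (table and column names stand in for the query under discussion):

```sql
-- Redundant: GROUP BY already yields one row per album.id,
-- so the DISTINCT pass adds cost for nothing.
SELECT DISTINCT public.album.id
FROM public.album
GROUP BY public.album.id;

-- Equivalent result, one less processing step:
SELECT public.album.id
FROM public.album
GROUP BY public.album.id;
```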
Hi, Carlo,
Carlo Stonebanks wrote:
We urgently need a major performance improvement.
Did you think about putting the whole data into PostgreSQL using COPY in
a nearly unprocessed manner, indexing it properly, and then using SQL and
stored functions to transform the data inside the database to the
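The load-then-transform approach suggested here might look something like the following sketch (the staging table, file path, functions applied, and target columns are all hypothetical):

```sql
-- 1. Bulk-load the provider's flat file nearly as-is.
CREATE TABLE staging_provider (raw_name text, raw_city text);
COPY staging_provider FROM '/tmp/provider.dat';

-- 2. Index and analyze the staging data.
CREATE INDEX staging_provider_name_idx ON staging_provider (raw_name);
ANALYZE staging_provider;

-- 3. Transform with one set-based statement instead of
--    row-at-a-time logic in the client.
INSERT INTO portal_contacts (name, city)
SELECT trim(raw_name), initcap(raw_city)
FROM staging_provider;
```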
Hi, Carlo,
Carlo Stonebanks wrote:
From what I can see, autovacuum is hitting the db's in question about once
every five minutes. Does this imply an ANALYZE is being done automatically
that would meet the requirements we are talking about here? Is there any
benefit to explicitly
In response to Carlo Stonebanks [EMAIL PROTECTED]:
indexes. I don't know whether autovacuum will also analyze tables
for you automagically, but it would be a good idea to analyze the table
It does.
So, I have checked my log and I see an autovacuum running once every minute
on our
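If there is any doubt whether autovacuum's automatic ANALYZE covers a freshly loaded table, an explicit run is cheap; the 8.1-era autovacuum knobs are also shown for reference (table name and values are illustrative, not recommendations):

```sql
ANALYZE import_table;

-- postgresql.conf (8.1):
-- autovacuum = on
-- autovacuum_naptime = 60              # seconds between checks per database
-- autovacuum_analyze_threshold = 500   # min row changes before auto-ANALYZE
-- autovacuum_analyze_scale_factor = 0.2
```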
Simon Riggs [EMAIL PROTECTED] writes:
We discussed putting PreallocXlogFiles() in bgwriter once before, but I
think last time we discussed that idea it was rejected, IIRC.
We already do that: it's called a checkpoint. If the rate of WAL
generation was more than checkpoint_segments per
On 9/29/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
For reasons I've explained elsewhere, the import process is not well suited
to breaking up the data into smaller segments. However, I'm interested in
what can be indexed. I am used to the idea that indexing only applies to
expressions that
On Fri, 2006-09-29 at 10:29 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
We discussed putting PreallocXlogFiles() in bgwriter once before, but I
think last time we discussed that idea it was rejected, IIRC.
We already do that: it's called a checkpoint.
Yes, but not
There are no join criteria for umdb.node... is that really what you want?
On Thu, Sep 28, 2006 at 03:18:56PM -0700, Ben wrote:
Hey guys, I've got a query that is inherently expensive, because it has to
do some joins against some large tables. But it's currently *very*
expensive (at least for a
Simon Riggs [EMAIL PROTECTED] writes:
PreallocXlogFiles() adds only a *single* xlog file, sometimes.
Hm, you are right. I wonder why it's so unaggressive ... perhaps
because under normal circumstances we soon settle into a steady
state where each checkpoint recycles the right number of files.
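The steady state described above is shaped by the checkpoint settings in postgresql.conf; a sketch of the relevant knobs (values illustrative, not recommendations):

```
# A checkpoint is forced after this many 16 MB WAL segments fill...
checkpoint_segments = 3
# ...or after this much time, whichever comes first.
checkpoint_timeout = 300        # seconds
# Warn in the log when checkpoints arrive closer together than this.
checkpoint_warning = 30         # seconds
```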
[EMAIL PROTECTED] writes:
I need to increase performance on postgres,
Well, for starters, have you ANALYZEd your tables? That EXPLAIN output
looks suspiciously like default estimates. Then post EXPLAIN ANALYZE
(not just EXPLAIN) results for your problem query.
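Tom's checklist in miniature (table and filter are hypothetical):

```sql
-- Refresh planner statistics; suspiciously round estimates
-- such as rows=1000 usually mean ANALYZE has never run.
ANALYZE orders;

-- EXPLAIN ANALYZE executes the query and reports estimated
-- versus actual row counts, which is what the list needs to see.
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;
```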
If I run the query below with
I'm experiencing a problem with our postgres database. Queries that
normally take seconds suddenly start taking hours, if they complete at
all.
This isn't a vacuuming or analyzing problem; I've been on this list long
enough that those were my first suspects, and besides, it doesn't happen
On Fri, Sep 29, 2006 at 03:24:14PM -0400, Brian Hurt wrote:
I'm experiencing a problem with our postgres database. Queries that
normally take seconds suddenly start taking hours, if they complete at
all.
The first thing I'd do is EXPLAIN and EXPLAIN ANALYSE on the queries
in question.
The
Brian Hurt [EMAIL PROTECTED] writes:
I'm experiencing a problem with our postgres database. Queries that
normally take seconds suddenly start taking hours, if they complete at
all.
Are they waiting? Consuming CPU? Consuming I/O? top and vmstat will
help you find out.
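Alongside top and vmstat, the server's own view shows what each backend is doing; note that in 8.1 the stats_command_string setting must be on for current_query to be populated:

```sql
SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
ORDER BY query_start;
```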
Hi,
How do I optimize postgres 8.1?
I have run 'vacuum full analyze'.
I have posted the output of 'explain analyze select ..'.
I have created some indexes.
I am running a mixed-mode server: 4GB RAM, FC5 (64-bit), PostgreSQL 8.1, and
my configs are (are these good numbers?):
---
[EMAIL PROTECTED] writes:
How do I optimize postgres 8.1?
8 msec for a five-way join doesn't sound out of line to me. What were
you expecting?
regards, tom lane
Brian Hurt [EMAIL PROTECTED] wrote:
I'm experiencing a problem with our postgres database. Queries that
normally take seconds suddenly start taking hours, if they complete at
all.
Do you have any long running transactions? I have noticed that with Postgres
8.1.x, a long running transaction
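One way to spot such a transaction from inside an 8.1 server is the pg_stat_activity view; sessions sitting '<IDLE> in transaction' for a long time also prevent VACUUM from reclaiming dead rows:

```sql
-- Oldest offenders first; query_start shows how long the
-- session has been idle inside its open transaction.
SELECT procpid, usename, query_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY query_start;
```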