On Thu, Aug 11, 2005 at 07:13:27PM -0400, Jeffrey Tenny wrote:
The system for testing was 512MB
That's definitely *not* a "large RAM" system. If you're reading a subset
of data that totals 70MB, I'm going to guess that your data set is larger
than, or at least a large fraction of, 512MB.
On Thu, Aug 11, 2005 at 10:18:44AM -0700, Mark Lewis wrote:
Actually, it seems to me that with the addition of the WAL in PostgreSQL
and the subsequent decreased need to fsync the data files themselves
(only during checkpoints?), that the only time a battery-backed write
cache would make a really
Alvaro Herrera wrote:
> On Thu, Aug 11, 2005 at 03:36:31PM -0500, John A Meinel wrote:
>
...
>
> This is a known problem, solved in 8.1. A workaround for previous
> releases is to defer FK checks until commit:
So I don't know exactly what the fix was, but I just tested, and my
problem is indeed
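The deferred-FK workaround mentioned above could look roughly like this (a sketch with hypothetical table and constraint names; the FK must be declared DEFERRABLE for SET CONSTRAINTS to have any effect):

```sql
-- Declare the FK as deferrable (one-time schema change):
ALTER TABLE child
    ADD CONSTRAINT child_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent (id)
    DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS ALL DEFERRED;  -- postpone FK checks until COMMIT
INSERT INTO child (parent_id) VALUES (1);
-- ... more work; the FK check (and its row lock) now happens here:
COMMIT;
```

This only shortens the window during which the referenced parent row is locked; it does not eliminate the lock itself.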
John A Meinel wrote:
> Well, first off, the general recommendation is probably that 7.3 is
> really old, and you should try to upgrade to at least 7.4, though 8.0
> is recommended.
There have been issues with each release that led me to wait.
Even now I'm waiting for some things to settle in the 8
Jeffrey Tenny wrote:
> I have a largely table-append-only application where most transactions
> are read-intensive and many are read-only. The transactions may span
> many tables, and in some cases might need to pull 70 MB of data out of a
> couple of the larger tables.
>
>
> In 7.3, I don't seem
I have a largely table-append-only application where most transactions
are read-intensive and many are read-only. The transactions may span
many tables, and in some cases might need to pull 70 MB of data out of a
couple of the larger tables.
In 7.3, I don't seem to see any file system or oth
Alvaro Herrera wrote:
> On Thu, Aug 11, 2005 at 03:36:31PM -0500, John A Meinel wrote:
>
>>I'm having an odd case where my system is locking such that if I insert
>>into a table during a transaction, if I start a new connection and
>>transaction, it blocks while trying to do a similar insert until
On Thu, Aug 11, 2005 at 03:36:31PM -0500, John A Meinel wrote:
> I'm having an odd case where my system is locking such that if I insert
> into a table during a transaction, if I start a new connection and
> transaction, it blocks while trying to do a similar insert until the
> first transaction is
> Actually, it seems to me that with the addition of the WAL in PostgreSQL
> and the subsequent decreased need to fsync the data files themselves
> (only during checkpoints?), that the only time a battery-backed write
> cache would make a really large performance difference would be on the
> drive(
I'm having an odd case where my system is locking such that if I insert
into a table during a transaction, if I start a new connection and
transaction, it blocks while trying to do a similar insert until the
first transaction is committed or rolled back.
The schema is rather complex (currently 157
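The blocking pattern described above can be reproduced with two sessions along these lines (a sketch; the table names are hypothetical, and the behavior assumed is the pre-8.1 FK row locking discussed elsewhere in the thread):

```sql
-- Session 1:
BEGIN;
INSERT INTO child (parent_id) VALUES (1);  -- FK check locks the parent row
-- (transaction intentionally left open)

-- Session 2:
BEGIN;
INSERT INTO child (parent_id) VALUES (1);  -- blocks here until session 1
                                           -- commits or rolls back
```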
(Musing, trying to think of a general-purpose performance-tuning rule
that applies here):
Actually, it seems to me that with the addition of the WAL in PostgreSQL
and the subsequent decreased need to fsync the data files themselves
(only during checkpoints?), that the only time a battery-backed wr
I think the T-3 RAID at least breaks some of these rules -- I've got two
T-3s, one configured as RAID-10 and the other as RAID5, and they both
seem to perform about the same. I use RAID5 with a hot spare, so it's
using 8 spindles.
I got a lot of performance improvement out of mounting the fs noatim
On Fri, 2005-08-12 at 08:47 +, Steve Poe wrote:
> Paul,
>
> Before I say anything else, one online document which may be of
> assistance to you is:
> http://www.powerpostgresql.com/PerfList/
>
> Some thoughts I have:
>
> 3) Your shared RAM setting seems overkill to me. Part of the challeng
On Thu, Aug 11, 2005 at 11:25:27AM -0400, Alan Stange wrote:
> why mmap and munmap each time? mmap and munmap are fairly expensive
> operations (on some systems), especially on multi cpu machines. munmap
> in particular generally needs to issue cross calls to the other cpus to
> ensure any
Thanks for all the great ideas. I have more options to evaluate now.
-Dan
Hello all,
I just was running strace in the writer process and I noticed this pattern:
select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
getppid() = 4240
time(NULL) = 1123773324
mmap2(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|
On Thu, Aug 11, 2005 at 01:23:21PM +0100, Paul Johnson wrote:
I'm guessing that this is because pg_xlog has gone from a 9 spindle LUN to
a single spindle disk?
In cases such as this, where an external storage array with a hardware
RAID controller is used, the normal advice to separate the data f
Well, I have changed the following settings in postgresql.conf:
shared_buffers = 16384
work_mem = 32768
maintenance_work_mem = 65536
bgwriter_delay = 800
bgwriter_maxpages = 100
wal_buffers = 64
effective_cache_size = 2
The rest of the settings are the default.
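As a quick sanity check on settings like the above, the values the server actually picked up can be inspected from a session (a sketch; the SHOW command and the pg_settings view are standard, but which names you query depends on your configuration):

```sql
SHOW shared_buffers;
SHOW effective_cache_size;

-- Or several at once:
SELECT name, setting
  FROM pg_settings
 WHERE name IN ('shared_buffers', 'work_mem', 'wal_buffers');
```

A misspelled parameter name in postgresql.conf would simply not take effect, so this catches typos such as "efective_cache_size".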
Thanks,
Luis
Richard Huxton wrote:
On Wed, 2005-08-10 at 18:55, Tom Lane wrote:
> Ian Westmacott <[EMAIL PROTECTED]> writes:
> > In a nutshell, I have a LIMIT query where the planner
> > seems to favor a merge join over a nested loop.
>
> The planner is already estimating only one row out of the join, and so
> the LIMIT doesn't aff
Paul,
Before I say anything else, one online document which may be of
assistance to you is:
http://www.powerpostgresql.com/PerfList/
Some thoughts I have:
3) Your shared RAM setting seems overkill to me. Part of the challenge
is you're going from 1000 to 262K with no assessment in between. Eac
Luis Cornide Arce wrote:
Hi everyone,
I have some problems with a quite long query and the plan PostgreSQL is
choosing. The query joins 12 tables, and in the WHERE clause I use an IN
expression with a lot of identifiers (up to 2000). The problem is that
the planner is proposing a seq_scan on tw
Paul Johnson wrote:
Hi all, we're running PG8 on a Sun V250 with 8GB RAM and 2*1.3GHz SPARC
CPUs running Solaris 10. The DB cluster is on an external fibre-attached
Sun T3 array that has 9*36GB drives configured as a single RAID5 LUN.
The system is for the sole use of a couple of data warehouse
On Thu, 11 Aug 2005 at 14:08 +0200, PFC wrote:
>
> > You could lock the count table to prevent the problem
> > where 2 competing transactions do an insert, read the
> > start value and add 1 to it and then write the result
> > - which is n+1 rather than n+2 - so you are off by one.
> > Th
Hi all, we're running PG8 on a Sun V250 with 8GB RAM and 2*1.3GHz SPARC
CPUs running Solaris 10. The DB cluster is on an external fibre-attached
Sun T3 array that has 9*36GB drives configured as a single RAID5 LUN.
The system is for the sole use of a couple of data warehouse developers,
hence we a
You could lock the count table to prevent the problem
where 2 competing transactions do an insert, read the
start value and add 1 to it and then write the result
- which is n+1 rather than n+2 - so you are off by one.
Think of the same when one transaction inserts 100
and the other 120. Then yo
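The table-locking approach described above might be sketched like this (hypothetical one-row counter table named rowcount; the lock serializes competing writers so the read-increment-write cannot interleave):

```sql
BEGIN;
-- Block other writers of the counter until this transaction ends:
LOCK TABLE rowcount IN SHARE ROW EXCLUSIVE MODE;
-- Read, add, and write in a single statement:
UPDATE rowcount SET n = n + 1;
COMMIT;
```

Note the trade-off the poster alludes to: the lock removes the off-by-one race, but every inserting transaction now queues behind it.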
Hi everyone,
I have some problems with a quite long query and the plan PostgreSQL is
choosing. The query joins 12 tables, and in the WHERE clause I use an IN
expression with a lot of identifiers (up to 2000). The problem is that
the planner is proposing a seq_scan on two tables 2M rows each (i
On Thu, 11 Aug 2005 at 20:36 +1000, Gavin Sherry wrote:
> On Thu, 11 Aug 2005, Tino Wildenhain wrote:
>
> > On Thu, 11 Aug 2005 at 00:40 -0400, Mark Cotner wrote:
> > > Here's a trigger I wrote to perform essentially the same purpose. The
> > > nice
> > > thing about this is it
On Thu, 11 Aug 2005, Tino Wildenhain wrote:
> On Thu, 11 Aug 2005 at 00:40 -0400, Mark Cotner wrote:
> > Here's a trigger I wrote to perform essentially the same purpose. The nice
> > thing about this is it keeps the number up to date for you, but you do incur
> > slight overhead.
> ...
On Thu, 11 Aug 2005 at 00:40 -0400, Mark Cotner wrote:
> Here's a trigger I wrote to perform essentially the same purpose. The nice
> thing about this is it keeps the number up to date for you, but you do incur
> slight overhead.
...
>
> CREATE TRIGGER del_rowcount_tr BEFORE DELETE ON te
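A minimal version of such a count-maintaining delete trigger might look like this (a sketch with hypothetical table and function names; the original trigger text is truncated above, so this is a reconstruction of the general technique, not the poster's code):

```sql
-- Decrement a per-table counter whenever a row is deleted.
CREATE OR REPLACE FUNCTION dec_rowcount() RETURNS trigger AS '
BEGIN
    UPDATE rowcount SET n = n - 1 WHERE tablename = TG_RELNAME;
    RETURN OLD;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER del_rowcount_tr BEFORE DELETE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE dec_rowcount();
```

A matching BEFORE INSERT trigger incrementing the counter completes the pair, which is the "keeps the number up to date for you, but you do incur slight overhead" trade-off described above.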
- Original Message -
From: "Qingqing Zhou" <[EMAIL PROTECTED]>
To:
Sent: Thursday, August 11, 2005 2:37 PM
Subject: Re: [PERFORM] it is always delete temp table will slow down the
postmaster?
""Chun Yit(Chronos)"" <[EMAIL PROTECTED]> writes
hi, i got one situation here, i creat