On Thursday, 11 Aug 2005, at 00:40 -0400, Mark Cotner wrote:
Here's a trigger I wrote to serve essentially the same purpose. The nice
thing about this is it keeps the number up to date for you, but you do incur
slight overhead.
...
CREATE TRIGGER del_rowcount_tr BEFORE DELETE ON test
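The truncated trigger above can be fleshed out as a minimal sketch. The count table (`rowcount`), its columns, and the function body are assumptions filled in for illustration, not Mark's original code:

```sql
-- Hypothetical companion function for the del_rowcount_tr trigger:
-- decrement the maintained count whenever a row is deleted from test.
CREATE OR REPLACE FUNCTION dec_rowcount() RETURNS trigger AS $$
BEGIN
    UPDATE rowcount SET total_rows = total_rows - 1
     WHERE table_name = 'test';
    RETURN OLD;  -- returning OLD lets the DELETE proceed
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER del_rowcount_tr BEFORE DELETE ON test
    FOR EACH ROW EXECUTE PROCEDURE dec_rowcount();
```

A matching AFTER/BEFORE INSERT trigger incrementing the same row would keep the count current in both directions, at the cost of one extra UPDATE per modification.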
On Thu, 11 Aug 2005, Tino Wildenhain wrote:
On Thursday, 11 Aug 2005, at 00:40 -0400, Mark Cotner wrote:
...
On Thursday, 11 Aug 2005, at 20:36 +1000, Gavin Sherry wrote:
On Thu, 11 Aug 2005, Tino Wildenhain wrote:
On Thursday, 11 Aug 2005, at 00:40 -0400, Mark Cotner wrote:
...
Hi everyone,
I have some problems with a quite long query and the plan PostgreSQL is
choosing. The query joins 12 tables, and in the WHERE clause I use an IN
expression with a lot of identifiers (up to 2000). The problem is that
the planner is proposing a seq_scan on two tables of 2M rows each
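One common rewrite for this situation is to load the identifiers into a temporary table and join against it, which gives the planner statistics to work with instead of a 2000-item IN list. The table and column names below are illustrative assumptions, not taken from the thread:

```sql
-- Load the ~2000 identifiers into a temp table and join,
-- instead of writing WHERE t.id IN (1, 2, ..., 2000):
CREATE TEMP TABLE wanted_ids (id integer PRIMARY KEY);
INSERT INTO wanted_ids VALUES (1);
INSERT INTO wanted_ids VALUES (2);
-- ... one insert per identifier, or bulk-load with COPY ...
ANALYZE wanted_ids;  -- give the planner row-count statistics

SELECT t.*
  FROM big_table t
  JOIN wanted_ids w ON w.id = t.id;
```

With statistics on the temp table, the planner can choose an index-driven plan on the 2M-row tables rather than a sequential scan.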
You could lock the count table to prevent the problem
where 2 competing transactions do an insert, read the
start value and add 1 to it and then write the result
- which is n+1 rather than n+2 - so you are off by one.
Think of the same when one transaction inserts 100
and the other 120. Then
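The locking approach described above can be sketched as follows; the count table and column names (`rowcount`, `total_rows`) are illustrative assumptions, not taken from the thread:

```sql
BEGIN;
-- Serialize access to the count table so two concurrent transactions
-- cannot both read the same starting value and each write start+1,
-- silently losing one of the two updates.
LOCK TABLE rowcount IN EXCLUSIVE MODE;
UPDATE rowcount SET total_rows = total_rows + 1;
-- ... perform the insert into the counted table here ...
COMMIT;  -- the table lock is released at commit
```

The trade-off is that all writers to the counted table now queue on this lock, so insert throughput drops under concurrency.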
Hi all, we're running PG8 on a Sun V250 with 8GB RAM and 2*1.3GHz SPARC
CPUs running Solaris 10. The DB cluster is on an external fibre-attached
Sun T3 array that has 9*36GB drives configured as a single RAID5 LUN.
The system is for the sole use of a couple of data warehouse developers,
hence we
On Thursday, 11 Aug 2005, at 14:08 +0200, PFC wrote:
...
Paul Johnson wrote:
...
Luis Cornide Arce wrote:
...
Paul,
Before I say anything else, one online document which may be of
assistance to you is:
http://www.powerpostgresql.com/PerfList/
Some thoughts I have:
3) Your shared RAM setting seems like overkill to me. Part of the challenge
is that you're going from 1000 to 262K with no assessment in between.
On Wed, 2005-08-10 at 18:55, Tom Lane wrote:
Ian Westmacott [EMAIL PROTECTED] writes:
In a nutshell, I have a LIMIT query where the planner
seems to favor a merge join over a nested loop.
The planner is already estimating only one row out of the join, and so
the LIMIT doesn't affect its
Well, I have changed the following settings in postgresql.conf:
shared_buffers = 16384
work_mem = 32768
maintenance_work_mem = 65536
bgwriter_delay = 800
bgwriter_maxpages = 100
wal_buffers = 64
effective_cache_size = 2
The rest of the settings are at their defaults.
Thanks,
Luis
Richard Huxton wrote:
On Thu, Aug 11, 2005 at 01:23:21PM +0100, Paul Johnson wrote:
I'm guessing that this is because pg_xlog has gone from a 9 spindle LUN to
a single spindle disk?
In cases such as this, where an external storage array with a hardware
RAID controller is used, the normal advice to separate the data
Hello all,
I just was running strace in the writer process and I noticed this pattern:
select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
getppid() = 4240
time(NULL) = 1123773324
mmap2(NULL, 528384, PROT_READ|PROT_WRITE,
Thanks for all the great ideas. I have more options to evaluate now.
-Dan
---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
On Thu, Aug 11, 2005 at 11:25:27AM -0400, Alan Stange wrote:
why mmap and munmap each time?

mmap and munmap are fairly expensive operations (on some systems),
especially on multi-CPU machines. munmap in particular generally needs
to issue cross calls to the other CPUs to ensure any
On Fri, 2005-08-12 at 08:47 +, Steve Poe wrote:
...
I think the T-3 RAID at least breaks some of these rules -- I've got 2
T-3's, 1 configured as RAID-10 and the other as RAID5, and they both
seem to perform about the same. I use RAID5 with a hot spare, so it's
using 8 spindles.
I got a lot of performance improvement out of mounting the fs
(Musing, trying to think of a general-purpose performance-tuning rule
that applies here):
Actually, it seems to me that with the addition of the WAL in PostgreSQL
and the subsequent decreased need to fsync the data files themselves
(only during checkpoints?), that the only time a battery-backed
I'm having an odd case where my system is locking such that if I insert
into a table during a transaction, if I start a new connection and
transaction, it blocks while trying to do a similar insert until the
first transaction is committed or rolled back.
The schema is rather complex (currently
Actually, it seems to me that with the addition of the WAL in
PostgreSQL
and the subsequent decreased need to fsync the data files themselves
(only during checkpoints?), that the only time a battery-backed write
cache would make a really large performance difference would be on the
drive(s)
On Thu, Aug 11, 2005 at 03:36:31PM -0500, John A Meinel wrote:
...
Alvaro Herrera wrote:
On Thu, Aug 11, 2005 at 03:36:31PM -0500, John A Meinel wrote:
...
I have a largely table-append-only application where most transactions
are read-intensive and many are read-only. The transactions may span
many tables, and in some cases might need to pull 70 MB of data out of a
couple of the larger tables.
In 7.3, I don't seem to see any file system or
Jeffrey Tenny wrote:
...
John A Meinel wrote:
Well, first off, the general recommendation is probably that 7.3 is
really old, and you should try to upgrade to at least 7.4, though 8.0
is recommended.
There have been issues with each release that led me to wait.
Even now I'm waiting for some things to settle in the
Alvaro Herrera wrote:
On Thu, Aug 11, 2005 at 03:36:31PM -0500, John A Meinel wrote:
...
This is a known problem, solved in 8.1. A workaround for previous
releases is to defer FK checks until commit:
So I don't know exactly what the fix was, but I just tested, and my
problem is indeed
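The workaround quoted above (deferring the FK checks until commit) can be sketched as follows; the table and constraint names are illustrative assumptions, not taken from the thread:

```sql
-- The foreign key must be declared DEFERRABLE for this to work.
ALTER TABLE child
    ADD CONSTRAINT child_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent (id)
    DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
-- Postpone all deferrable constraint checks until COMMIT, so the lock
-- the FK check takes on the referenced parent row is only held briefly
-- at commit time instead of for the life of the transaction.
SET CONSTRAINTS ALL DEFERRED;
INSERT INTO child (parent_id) VALUES (1);
COMMIT;
```

This shortens the window during which a concurrent transaction inserting a child of the same parent row would block.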
On Thu, Aug 11, 2005 at 10:18:44AM -0700, Mark Lewis wrote:
...
On Thu, Aug 11, 2005 at 07:13:27PM -0400, Jeffrey Tenny wrote:
The system for testing was 512MB
That's definitely *not* a large-RAM system. If you're reading a subset
of data that totals 70MB, I'm going to guess that your data set is larger
than, or at least a large fraction of, 512MB.