Jim C. Nasby wrote:
On Fri, Jun 09, 2006 at 06:20:21PM -0700, Trent Shipley wrote:
Ideally, the transaction management system would be proportional to the
marginal change in size of the database rather than the gross size of the
database. That is, VACUUM being O(N) should be replaced (or
Tom Lane wrote:
Martijn van Oosterhout kleptog@svana.org writes:
That's why people suggest partitions. Then you only vacuum the
partitions that are new and the old ones never need to be touched...
This will all work a lot better once we track XID wraparound risk on a
per-table rather than
On Fri, Jun 09, 2006 at 06:20:21PM -0700, Trent Shipley wrote:
VACUUM needs to be run for two reasons.
1) To recover the transaction counter.
2) To recover records marked for deletion.
VACUUM needs to be run over the entire database. If the data in the database
is N, then VACUUM is O(N).
Martijn van Oosterhout kleptog@svana.org writes:
That's why people suggest partitions. Then you only vacuum the
partitions that are new and the old ones never need to be touched...
This will all work a lot better once we track XID wraparound risk on a
per-table rather than per-database basis.
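The partitioning argument above can be put in rough numbers. Here is a sketch (all partition sizes hypothetical) of why vacuuming only the active partitions turns O(N) maintenance into work proportional to the changed data:

```python
def vacuum_pages_full(total_pages):
    # A whole-database VACUUM must visit every page: O(N).
    return total_pages

def vacuum_pages_partitioned(partition_pages, active):
    # With time-based partitions, only the partitions still receiving
    # writes need vacuuming; settled historical partitions are skipped.
    return sum(pages for i, pages in enumerate(partition_pages) if i in active)

monthly = [10_000] * 12  # hypothetical: 12 equal monthly partitions
full = vacuum_pages_full(sum(monthly))                 # 120000 pages every run
incremental = vacuum_pages_partitioned(monthly, {11})  # 10000: current month only
```

Per-table XID tracking is what would make skipping the old partitions safe; with only per-database tracking, every table must still be scanned before wraparound.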
On Tuesday 2006-06-06 20:11, Mark Woodward wrote:
Mark Woodward wrote:
OK, here's my problem, I have a nature study where we have about 10 video
cameras taking 15 frames per second.
For each frame we make a few transactions on a PostgreSQL database.
Maybe if you grouped multiple
On Fri, Jun 09, 2006 at 06:20:21PM -0700, Trent Shipley wrote:
Ideally, the transaction management system would be proportional to the
marginal change in size of the database rather than the gross size of the
database. That is, VACUUM being O(N) should be replaced (or there should be
an
On Wed, 2006-06-07 at 17:45, Jim C. Nasby wrote:
Plus, if the only issue here is in fact the long-running transaction for
vacuum, there's other ways to address that which would be a lot less
intrusive than doing something like going to 64-bit XIDs. IIRC, in 8.2
vacuum will
On Wed, Jun 07, 2006 at 07:07:55PM -0400, Mark Woodward wrote:
I guess what I am saying is that PostgreSQL isn't smooth; between
checkpoints and vacuum, it is nearly impossible to make a product that
performs consistently under high load.
Have you tuned the bgwriter and all the vacuum_cost
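For reference, the knobs being alluded to live in postgresql.conf. The names below are from the 8.1/8.2 era, and the values are only illustrative starting points, not recommendations:

```
# postgresql.conf (8.1/8.2-era names; values illustrative only)
bgwriter_delay = 200           # ms between bgwriter rounds
bgwriter_lru_percent = 1.0     # % of recently-used buffers scanned per round
bgwriter_all_percent = 0.333   # % of the whole buffer pool scanned per round
vacuum_cost_delay = 10         # ms vacuum sleeps once cost_limit is reached
vacuum_cost_limit = 200        # accumulated I/O cost that triggers the sleep
```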
Mark Woodward wrote:
On Wed, Jun 07, 2006 at 07:07:55PM -0400, Mark Woodward wrote:
I guess what I am saying is that PostgreSQL isn't smooth; between
checkpoints and vacuum, it is nearly impossible to make a product that
performs consistently under high load.
Have you tuned the bgwriter and all
Joshua D. Drake [EMAIL PROTECTED] writes:
I was recently involved in a project where we had to decrease the
checkpoint_timeout. The problem was that the database was performing
so many transactions that if we waited 5 minutes, a checkpoint would
take entirely too long.
Seems like the
Hannu Krosing [EMAIL PROTECTED] writes:
If the aim is to *only* avoid transaction wraparound, then maybe we
could introduce VACUUM FREEZE ONLY, which never removes any old tuples,
but instead just marks them by setting xmin=xmax for them, in addition
to its freezing of live-and-visible-to-all
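The reason freezing matters at all is the circular 32-bit XID comparison. A sketch of the comparison rule (mirroring, not copied from, PostgreSQL's TransactionIdPrecedes logic):

```python
def xid_precedes(a, b):
    # Circular comparison of 32-bit XIDs: a is "older" than b when the
    # signed 32-bit difference (a - b) is negative, i.e. a is no more
    # than 2**31 transactions behind b.
    diff = (a - b) & 0xFFFFFFFF
    return diff >= 0x80000000  # top bit set => negative as signed int32

# An unfrozen tuple more than 2**31 transactions old stops comparing as
# "in the past" -- that is the wraparound VACUUM / freezing exists to prevent.
old_xid = 100
current = (old_xid + 2**31 + 1) % 2**32
# xid_precedes(old_xid, current) is now False: the old row looks "future"
```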
Tom Lane wrote:
Joshua D. Drake [EMAIL PROTECTED] writes:
I was recently involved in a project where we had to decrease the
checkpoint_timeout. The problem was that the database was performing
so many transactions that if we waited 5 minutes, a checkpoint would
take entirely too long.
On Thu, Jun 08, 2006 at 09:13:33AM -0700, Joshua D. Drake wrote:
Tom Lane wrote:
Joshua D. Drake [EMAIL PROTECTED] writes:
I was recently involved in a project where we had to decrease the
checkpoint_timeout. The problem was that the database was performing
so many transactions that if we
On Thu, 2006-06-08 at 12:09, Tom Lane wrote:
Hannu Krosing [EMAIL PROTECTED] writes:
If the aim is to *only* avoid transaction wraparound, then maybe we
could introduce VACUUM FREEZE ONLY, which never removes any old tuples,
but instead just marks them by setting
Koichi Suzuki wrote:
I once proposed a patch for 64-bit transaction IDs, but this adds
some overhead to each tuple (XMIN and XMAX). Pgbench with 64-bit
transaction IDs pays about a couple of percent in performance.
If a 64-bit transaction ID is a reasonable fix, I've already posted
Koichi Suzuki wrote:
I once proposed a patch for 64-bit transaction IDs, but this adds
some overhead to each tuple (XMIN and XMAX). Pgbench with 64-bit
transaction IDs pays about a couple of percent in performance. If
a 64-bit transaction ID is a reasonable fix, I've already posted
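The per-tuple cost being discussed is easy to estimate. Widening XMIN and XMAX from 4 to 8 bytes each adds 8 bytes to every heap tuple; what that means in relative terms depends on row width (the row sizes below are assumed examples, ignoring alignment padding):

```python
def xid64_space_overhead(tuple_bytes):
    # Going from two 4-byte to two 8-byte transaction IDs costs
    # 8 extra bytes on every tuple, before any alignment padding.
    extra = 2 * (8 - 4)
    return extra / tuple_bytes

narrow = xid64_space_overhead(100)  # 0.08 -> 8% more space on a 100-byte row
wide = xid64_space_overhead(800)    # 0.01 -> 1% on an 800-byte row
```

More space per tuple means fewer tuples per page and more I/O for the same data, which is consistent with the few-percent pgbench hit mentioned above.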
On Wed, Jun 07, 2006 at 01:48:50PM +0200, Zdenek Kotala wrote:
Koichi Suzuki wrote:
I once proposed a patch for 64-bit transaction IDs, but this adds
some overhead to each tuple (XMIN and XMAX). Pgbench with 64-bit
transaction IDs pays about a couple of percent in performance.
On Wed, Jun 07, 2006 at 11:47:45AM -0400, Tom Lane wrote:
Zdenek Kotala [EMAIL PROTECTED] writes:
Koichi Suzuki wrote:
I once proposed a patch for 64-bit transaction IDs, but this adds
some overhead to each tuple (XMIN and XMAX).
Did you check performance on 32-bit or 64-bit systems
On Tue, 2006-06-06 at 13:53, Christopher Browne wrote:
We have triggers that fire if something interesting is found on insert.
We want this thing to run for a long time.
From the numbers, you can see the PostgreSQL database is VERY loaded.
Running VACUUM may not always be
On Thu, Jun 08, 2006 at 01:30:42AM +0300, Hannu Krosing wrote:
On Tue, 2006-06-06 at 13:53, Christopher Browne wrote:
We have triggers that fire if something interesting is found on insert.
We want this thing to run for a long time.
From the numbers, you can see the
On Wed, Jun 07, 2006 at 11:47:45AM -0400, Tom Lane wrote:
Zdenek Kotala [EMAIL PROTECTED] writes:
Koichi Suzuki wrote:
I once proposed a patch for 64-bit transaction IDs, but this adds
some overhead to each tuple (XMIN and XMAX).
Did you check performance on 32-bit or 64-bit systems
On Wed, Jun 07, 2006 at 07:07:55PM -0400, Mark Woodward wrote:
I guess what I am saying is that PostgreSQL isn't smooth; between
checkpoints and vacuum, it is nearly impossible to make a product that
performs consistently under high load.
Have you tuned the bgwriter and all the vacuum_cost
Tom Lane wrote:
Zdenek Kotala [EMAIL PROTECTED] writes:
Koichi Suzuki wrote:
I once proposed a patch for 64-bit transaction IDs, but this adds
some overhead to each tuple (XMIN and XMAX).
Did you check performance on 32-bit or 64-bit systems and 64-bit binary
version of PGSQL? I think
OK, here's my problem, I have a nature study where we have about 10 video
cameras taking 15 frames per second.
For each frame we make a few transactions on a PostgreSQL database.
We want to keep about a year's worth of data at any given time.
We have triggers that fire if something interesting
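Back-of-envelope on the retention requirement above (rows-per-frame is an assumption; the post only says "a few transactions" per frame):

```python
cameras, fps, rows_per_frame = 10, 15, 3  # rows_per_frame assumed
rows_per_second = cameras * fps * rows_per_frame  # 450 rows/s sustained
rows_per_year = rows_per_second * 86_400 * 365    # ~14.2 billion rows retained
# At this scale, any maintenance that is O(total data) rather than
# O(recent data) -- like a full-database VACUUM -- becomes the bottleneck.
```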
On Tue, 6 Jun 2006, Mark Woodward wrote:
OK, here's my problem, I have a nature study where we have about 10 video
cameras taking 15 frames per second.
For each frame we make a few transactions on a PostgreSQL database.
I would suggest doing all the inserts of one frame in one transaction.
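The suggestion above trades commit rate for batch size. A sketch of the arithmetic (camera and fps figures from the post; ops-per-frame is assumed):

```python
import math

def commits_per_second(cameras, fps, ops_per_frame, batch_size):
    # Grouping batch_size operations into one transaction divides the
    # commit (and WAL-flush) rate by the batch size.
    total_ops = cameras * fps * ops_per_frame
    return math.ceil(total_ops / batch_size)

separate = commits_per_second(10, 15, 3, 1)            # 450 commits/s
per_frame = commits_per_second(10, 15, 3, 3)           # 150: one commit per frame
per_camera_second = commits_per_second(10, 15, 3, 45)  # 10: one commit per camera-second
```

Each commit forces a WAL flush, so cutting the commit rate is often the single biggest smoothing lever for an insert-heavy workload like this one.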
On Tue, 2006-06-06 at 08:42, Mark Woodward wrote:
OK, here's my problem, I have a nature study where we have about 10 video
cameras taking 15 frames per second.
For each frame we make a few transactions on a PostgreSQL database.
We want to keep about a year's worth of data at
Mark Woodward wrote:
OK, here's my problem, I have a nature study where we have about 10 video
cameras taking 15 frames per second.
For each frame we make a few transactions on a PostgreSQL database.
Maybe if you grouped multiple operations on bigger transactions, the I/O
savings could be
Clinging to sanity, [EMAIL PROTECTED] (Hannu Krosing) mumbled into her beard:
On Tue, 2006-06-06 at 08:42, Mark Woodward wrote:
OK, here's my problem, I have a nature study where we have about 10 video
cameras taking 15 frames per second.
For each frame we make a few
On Tue, 2006-06-06 at 13:53 -0400, Christopher Browne wrote:
Clinging to sanity, [EMAIL PROTECTED] (Hannu Krosing) mumbled into her beard:
On Tue, 2006-06-06 at 08:42, Mark Woodward wrote:
OK, here's my problem, I have a nature study where we have about 10 video
cameras
Rod Taylor wrote:
With one potential snafu -- it blocks new SELECTs against the parent
table while truncate runs on the child (happens with constraint
exclusion as well).
If your transactions are short, then it won't be an issue. If you have
mixed-length transactions (many short, which the
Mark Woodward wrote:
OK, here's my problem, I have a nature study where we have about 10 video
cameras taking 15 frames per second.
For each frame we make a few transactions on a PostgreSQL database.
Maybe if you grouped multiple operations on bigger transactions, the I/O
savings could be
I once proposed a patch for 64-bit transaction IDs, but this adds
some overhead to each tuple (XMIN and XMAX). Pgbench with 64-bit
transaction IDs pays about a couple of percent in performance. If
a 64-bit transaction ID is a reasonable fix, I've already posted this
patch. Anyone