Josh Berkus wrote:
> Simon,
> One of the things I love about doing informal online user support in the 
> PostgreSQL community, and formal user support for Sun's customers, is the 
> almost-ironclad guarantee that if a user has a corrupt database or data loss, 
> one of three things is true:
> a) they didn't apply some recommended PG update;
> b) they have a bad disk controller or disk config;
> c) they have bad ram.

That is pretty spot on.

> It seriously narrows down the problem space to know that PostgreSQL does 
> *not* 
> allow data loss if it's physically possible to prevent it.

But we do allow it, don't we? Consider fsync = off and full_page_writes = off.
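For reference, both of those are ordinary postgresql.conf settings (a minimal sketch; the comments are mine, not from the thread):

```
# postgresql.conf -- trading durability for speed
fsync = off              # don't force WAL writes to disk; an OS or power
                         # failure can leave the cluster corrupt
full_page_writes = off   # skip full-page images in WAL; torn pages after
                         # a crash may be unrecoverable
```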

> Therefore, if we're going to arm a foot-gun as big as COMMIT NOWAIT for 
> PostgreSQL, I'd like to see the answers to two questions:

I agree with this.

> a) Please give some examples of performance gain on applications using COMMIT 
> NOWAIT.  The performance gain needs to be substantial (like, 50% to 100%) to 
> justify a compromise like this.

Whoa, that seems excessive. There are a couple of things going on here.

1. We have a potential increase in performance for certain workloads.
This is good, but it must be proven. Does that proof need to show 50%,
though? That bar seems arbitrary; let's talk.

2. We have to accept that not everyone wants ironclad data integrity.
We already have many options for trading some of it away, including
PITR and fsync = off.
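For context, the kind of relaxation under discussion can be sketched as a per-transaction setting. (This sketch assumes the synchronous_commit parameter, which is not named in this thread; it is how asynchronous commit eventually surfaced in later PostgreSQL releases.)

```sql
-- Sketch: relax durability for one transaction only
BEGIN;
SET LOCAL synchronous_commit = off;  -- COMMIT returns before the WAL flush
INSERT INTO audit_log (msg)          -- hypothetical low-value table
    VALUES ('low-value event');
COMMIT;  -- a crash right after this may lose the transaction,
         -- but it never corrupts the database
```

The key design point is that, unlike fsync = off, this risks losing only the most recent commits, not the integrity of the cluster as a whole.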

> b) Why this and not global temporary tables or queuing?

/me would love global temp tables.

Many of the PostgreSQL users out there today will happily lose 15
minutes of data if it means their data is served 25% faster.


Joshua D. Drake


      === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997

