> > To get decent I/O you need 1MB fundamental units all the way down the
> > stack.
> It would also be a good idea to have an application that isn't likely
> to change a single bit in a 1MB range and then expect you to record
> that change. That pretty much rules Postgres out of the picture.
We're looking at this pretty much just for data warehousing, where you
constantly have gigabytes of data that don't change from month to month, or
even year to year. I agree that it would *not* be an optimization for OLTP
systems, which is why a build-time option would be fine.
> Ummm... I don't see anything here which will be a win for Postgres. The
> transactional semantics we're interested in are fairly complex:
> 1) Modifications to multiple objects can become visible to the system
> 2) On error, a series of modifications which had been grouped together
> within a transaction can be rolled back
> 3) Using object version information, determine which version of which
> object is visible to a given session
> 4) Using version information and locking, detect and resolve read/write
> and write/write conflicts
I wasn't thinking of database transactions. I was thinking specifically of
using Reiser4 transactions (and other transactional filesystems) to do things
like eliminate the need for full page writes in the WAL. Filesystems are
low-level things which should take care of low-level needs, like making sure
an 8K page gets written to disk in full even in the event of a system failure.
Aglio Database Solutions