On Mon, Jan 3, 2011 at 4:02 AM, Jim Nasby <j...@nasby.net> wrote:
> FWIW, last time I looked at how Oracle handled compression, it would only 
> compress existing data. As soon as you modified a row, it ended up 
> un-compressed, presumably in a different page that was also un-compressed.

IIUC, InnoDB basically compresses a page down as small as it'll go, and
then stores the result in a regular-size block.  That leaves free space at the
end, which can be used to cram additional tuples into the page.
Eventually that free space is exhausted, at which point you try to
recompress the whole page and see if that gives you room to cram in
even more stuff.

I thought that was a pretty clever approach.
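To make that concrete, here's roughly how I picture the bookkeeping.
This is just a toy sketch of my own, not anything taken from InnoDB;
zlib is standing in for whatever compressor you'd actually use, and
the caller is assumed to keep the full uncompressed page around:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define BLOCK_SIZE 8192

typedef struct
{
    unsigned char data[BLOCK_SIZE]; /* compressed image + crammed-in tuples */
    size_t        compressed_len;   /* bytes used by the compressed image */
    size_t        log_len;          /* bytes used by tuples added since then */
} CompressedBlock;

/*
 * Add one tuple.  If the free space between the compressed image and the
 * already-added tuples is gone, recompress the whole logical page (which
 * includes the previously added tuples) and retry once.
 */
static bool
block_add_tuple(CompressedBlock *blk,
                const unsigned char *logical_page, size_t page_len,
                const unsigned char *tuple, size_t tuple_len)
{
    size_t free_space = BLOCK_SIZE - blk->compressed_len - blk->log_len;

    if (tuple_len > free_space)
    {
        unsigned char scratch[BLOCK_SIZE];
        uLongf        newlen = sizeof(scratch);

        if (compress(scratch, &newlen, logical_page, page_len) != Z_OK ||
            newlen + tuple_len > BLOCK_SIZE)
            return false;   /* still no room: page has to be split/moved */

        memcpy(blk->data, scratch, newlen);
        blk->compressed_len = newlen;
        blk->log_len = 0;   /* earlier additions are now in the image */
    }

    /* Cram the tuple into the free space at the end of the block. */
    memcpy(blk->data + BLOCK_SIZE - blk->log_len - tuple_len,
           tuple, tuple_len);
    blk->log_len += tuple_len;
    return true;
}

int
main(void)
{
    static unsigned char page[4 * BLOCK_SIZE];      /* toy "logical" page */
    static unsigned char tuple[100];                /* toy tuple */
    CompressedBlock blk = { {0}, 0, 0 };
    uLongf len = BLOCK_SIZE;

    if (compress(blk.data, &len, page, sizeof(page)) != Z_OK)
        return 1;
    blk.compressed_len = len;

    printf("tuple added: %d\n",
           block_add_tuple(&blk, page, sizeof(page), tuple, sizeof(tuple)));
    return 0;
}

The nice property is that the common path -- adding a tuple -- doesn't
pay for a recompression; you only recompress once the slack runs out.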

> I wonder if it would be feasible to use a fork to store where a compressed 
> page lives inside the heap... if we could do that I don't see any reason why 
> indexes wouldn't work. The changes required to support that might not be too 
> horrific either...

At first blush, that sounds like a recipe for large amounts of
undesirable random I/O.
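Just to spell out where that comes from, here's a completely
hypothetical sketch of the kind of map fork being proposed (nothing
like this exists in PostgreSQL; every name and number in it is
invented):

#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 8192

typedef struct
{
    uint32_t physical_blkno;    /* main-fork block holding the image */
    uint16_t offset;            /* byte offset of the image in that block */
    uint16_t length;            /* compressed length in bytes */
} CompressionMapEntry;

#define ENTRIES_PER_MAP_PAGE (BLOCK_SIZE / sizeof(CompressionMapEntry))

/* Pretend block reads: they just report which blocks would be touched. */
static CompressionMapEntry
read_map_entry(uint32_t logical_blkno)
{
    /* I/O #1: the map fork page covering this logical block */
    printf("  read map fork block %lu\n",
           (unsigned long) (logical_blkno / ENTRIES_PER_MAP_PAGE));

    /* made-up placement, standing in for wherever the image ended up */
    CompressionMapEntry e = { logical_blkno * 37 % 1000, 0, 3000 };
    return e;
}

static void
read_compressed_image(const CompressionMapEntry *e)
{
    /* I/O #2: wherever the compressed image happens to live */
    printf("  read main fork block %u (offset %u, %u bytes)\n",
           e->physical_blkno, e->offset, e->length);
}

int
main(void)
{
    /* A "sequential" scan of logical blocks 0..4 ... */
    for (uint32_t blkno = 0; blkno < 5; blkno++)
    {
        printf("logical block %u:\n", blkno);
        CompressionMapEntry e = read_map_entry(blkno);
        read_compressed_image(&e);
    }
    return 0;
}

Every heap fetch turns into two reads, and consecutive logical blocks
can point at physical blocks scattered all over the main fork, so even
a sequential scan degrades into that pattern.  The map fork itself
would presumably cache well; the scattered main-fork reads are the
part that worries me.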

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
