On Jan 2, 2011, at 5:36 PM, Simon Riggs wrote:
> On Tue, 2010-12-28 at 09:10 -0600, Andy Colson wrote:
> 
>> I know its been discussed before, and one big problem is license and 
>> patent problems.
> 
> Would like to see a design for that. There are a few different ways we
> might want to do it, and I'm interested to see if it's possible to get
> compressed pages to be indexable as well.
> 
> For example, if you compress 2 pages into 8Kb then you do one I/O and
> out pops 2 buffers. That would work nicely with ring buffers.
> 
> Or you might try to have pages > 8Kb in one block, which would mean
> decompressing every time you access the page. That wouldn't be much of a
> problem if we were just seq scanning.
> 
> Or you might want to compress the whole table at once, so it can only be
> read by seq scan. Efficient, but no indexes.
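
To make that first option concrete, here's a rough sketch of what the
pack/unpack path could look like. This is purely illustrative -- nothing like
it exists in the tree, and zlib is just a stand-in for whatever compressor
survives the license/patent discussion:

/*
 * Sketch only: two logical 8Kb pages squeezed into a single 8Kb disk
 * block, so one read() pops out two buffers.  Build with: cc sketch.c -lz
 */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define BLCKSZ 8192

/*
 * Try to pack two pages into one disk block.  Returns the compressed
 * length, or -1 if the pair compresses to more than BLCKSZ and has to
 * be stored one page per block, as today.
 */
static long
pack_page_pair(const unsigned char *page0, const unsigned char *page1,
               unsigned char *block)       /* block is BLCKSZ bytes */
{
    unsigned char src[2 * BLCKSZ];
    unsigned char tmp[2 * BLCKSZ + 1024];  /* >= compressBound(2 * BLCKSZ) */
    uLongf      destlen = sizeof(tmp);

    memcpy(src, page0, BLCKSZ);
    memcpy(src + BLCKSZ, page1, BLCKSZ);

    if (compress(tmp, &destlen, src, 2 * BLCKSZ) != Z_OK || destlen > BLCKSZ)
        return -1;

    memcpy(block, tmp, destlen);
    return (long) destlen;
}

/*
 * The read side: one block in, two buffers out.  Returns 0 on success.
 */
static int
unpack_page_pair(const unsigned char *block, size_t blocklen,
                 unsigned char *page0, unsigned char *page1)
{
    unsigned char dst[2 * BLCKSZ];
    uLongf      destlen = sizeof(dst);

    if (uncompress(dst, &destlen, block, (uLong) blocklen) != Z_OK ||
        destlen != 2 * BLCKSZ)
        return -1;

    memcpy(page0, dst, BLCKSZ);
    memcpy(page1, dst + BLCKSZ, BLCKSZ);
    return 0;
}

int
main(void)
{
    static unsigned char page0[BLCKSZ], page1[BLCKSZ];  /* zeroed pages pack well */
    unsigned char block[BLCKSZ], out0[BLCKSZ], out1[BLCKSZ];
    long        len = pack_page_pair(page0, page1, block);

    if (len < 0 || unpack_page_pair(block, (size_t) len, out0, out1) != 0)
        return 1;
    printf("2 x %d bytes -> %ld bytes on disk\n", BLCKSZ, len);
    return 0;
}

That halves the I/O on a seq scan whenever two pages pair up; the awkward case
is an update that makes a page no longer fit with its partner.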

FWIW, last time I looked at how Oracle handled compression, it would only
compress existing data. As soon as you modified a row, it ended up
uncompressed, presumably moved to a different page that was also uncompressed.

I wonder if it would be feasible to use a fork to store where a compressed
page lives inside the heap... if we could do that, I don't see any reason why
indexes wouldn't work. The changes required to support that might not be too
horrific, either...
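
Purely hypothetical sketch of what I'm picturing -- every name here is
invented for illustration. One fixed-width map entry per logical heap block
would mean index TIDs stay valid: an ItemPointer keeps addressing logical
(block, offset), and the fork tells us where the compressed bytes really live:

#include <stdint.h>

#define BLCKSZ 8192

typedef struct CompressMapEntry
{
    uint32_t    phys_blkno;    /* physical block holding the compressed page */
    uint16_t    offset;        /* byte offset of the page within that block  */
    uint16_t    comp_len;      /* compressed length; 0 = stored uncompressed */
} CompressMapEntry;

/* Entries per 8Kb map page, ignoring the page header for simplicity. */
#define MAP_ENTRIES_PER_PAGE (BLCKSZ / sizeof(CompressMapEntry))

/*
 * Which map page and slot hold the entry for a logical block?  Constant
 * time, the same kind of direct addressing the FSM and visibility map
 * forks already use.
 */
static inline void
map_locate(uint32_t logical_blkno, uint32_t *map_page, uint32_t *map_slot)
{
    *map_page = logical_blkno / MAP_ENTRIES_PER_PAGE;
    *map_slot = logical_blkno % MAP_ENTRIES_PER_PAGE;
}

At 8 bytes per entry one map page covers 1024 heap pages, so the fork would
add about 0.1% overhead. The lookup is the easy part; keeping the map
crash-safe while pages move around during updates is where the pain would be.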
--
Jim C. Nasby, Database Architect                   j...@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net


