"Tom Lane" <[EMAIL PROTECTED]> writes:

> Gregory Stark <[EMAIL PROTECTED]> writes:
>> The scenario I was describing was having, for example, 20 fields each
>> of which are char(100) and store 'x' (which are padded with 99
>> spaces). So the row is 2k but the fields are highly compressible, but
>> shorter than the 256 byte minimum.
>
> To be blunt, the solution to problems like that is sending the DBA to a
> re-education camp.  I don't think we should invest huge amounts of
> effort on something that's trivially fixed by using the correct datatype
> instead of the wrong datatype.

Sorry, there was a bit of a mixup here. The scenario I described above is what
it would take to get Postgres to actually try to compress a small string,
given the way the toaster works.
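
To make that concrete, here is a minimal sketch (table and column names are
made up) using pg_column_size() to look at what gets stored:

    -- The ~2KB row should exceed the toast threshold, so the toaster
    -- looks for attributes to compress -- but each blank-padded field
    -- is only ~100 bytes, under the 256-byte minimum.
    CREATE TABLE wide (
        c1  char(100), c2  char(100), c3  char(100), c4  char(100),
        c5  char(100), c6  char(100), c7  char(100), c8  char(100),
        c9  char(100), c10 char(100), c11 char(100), c12 char(100),
        c13 char(100), c14 char(100), c15 char(100), c16 char(100),
        c17 char(100), c18 char(100), c19 char(100), c20 char(100)
    );
    INSERT INTO wide VALUES ('x','x','x','x','x','x','x','x','x','x',
                             'x','x','x','x','x','x','x','x','x','x');
    -- Expect the full padded length (100 bytes plus header) per field.
    SELECT pg_column_size(c1) AS field_bytes FROM wide;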

In the real world, interesting cases wouldn't be so extreme. A single CHAR(n)
or text field holding some other highly compressible string can easily go
uncompressed today simply because it is under 256 bytes.
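
For instance (hypothetical names, just a sketch), a highly repetitive value
just under the threshold is stored verbatim:

    CREATE TABLE notes (body text);
    -- 198 bytes of text: very compressible, but under the 256-byte
    -- minimum, so no compression is attempted.
    INSERT INTO notes VALUES (repeat('<tag/>', 33));
    -- stored_bytes should be raw_bytes plus the varlena header.
    SELECT octet_length(body)   AS raw_bytes,
           pg_column_size(body) AS stored_bytes
      FROM notes;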

I think the richer target here is some kind of cross-record compression. For
example, XML text columns often contain the same tags over and over again in
successive records, but no single datum is compressible on its own.
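
Something like this (hypothetical schema and data) shows the shape of the
redundancy:

    -- The same tags recur in every row, so the column as a whole is
    -- highly redundant, yet each datum is far too small for the
    -- per-datum compressor to ever touch it.
    CREATE TABLE docs (doc text);
    INSERT INTO docs
    SELECT '<item><name>part-' || i || '</name><qty>1</qty></item>'
      FROM generate_series(1, 10000) AS i;
    -- Compare on-disk size with raw bytes: roughly 1:1, since nothing
    -- per-datum can exploit the cross-record repetition.
    SELECT pg_total_relation_size('docs') AS table_bytes,
           sum(octet_length(doc))         AS raw_bytes
      FROM docs;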

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com

