One of the things I was thinking about was whether we could use up those
cycles more effectively. If we were to include a compression routine
before we calculated the CRC, that would:
- reduce the size of the blocks to be written, hence reduce the size of the xlog
- reduce the cost of the following CRC calculation
I was thinking about using a simple run-length encoding to massively shrink half-empty blocks with lots of zero padding, though we've already got code to LZW-compress the data as well.
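To make the run-length idea concrete, here is a minimal illustrative sketch (not PostgreSQL code; the function names are hypothetical): runs of zero bytes are replaced by a zero marker followed by a run length, so a half-empty block with kilobytes of zero padding collapses to a few dozen bytes.

```python
def rle_zero_encode(data: bytes) -> bytes:
    """Encode runs of zero bytes as (0x00, run_length); other bytes pass through."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == 0:
            run = 1
            # Cap the run at 255 so it fits in one length byte.
            while i + run < len(data) and data[i + run] == 0 and run < 255:
                run += 1
            out += bytes([0, run])
            i += run
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

def rle_zero_decode(enc: bytes) -> bytes:
    """Invert rle_zero_encode: expand each (0x00, run_length) pair."""
    out = bytearray()
    i = 0
    while i < len(enc):
        if enc[i] == 0:
            out += bytes(enc[i + 1])  # bytes(n) yields n zero bytes
            i += 2
        else:
            out.append(enc[i])
            i += 1
    return bytes(out)
```

With this scheme an 8 KB block that is mostly trailing zeros encodes to little more than its non-zero payload, which is the case the CRC would otherwise spend most of its cycles on.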
Best Regards, Simon Riggs
---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Simon,
I think having a compression routine in there could make real sense.
We did some major I/O testing involving compression for a large customer some time ago. We saw that compressing/decompressing on the fly is in MOST cases much faster than uncompressed I/O (try a simple "cat file | ..." vs. "zcat file.gz | ...") - the zcat version will be faster on all platforms we have tried (Linux, AIX, Sun on a SAN system, etc.).
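The underlying reason compressed I/O can win is simply that fewer bytes have to cross the (usually slower) disk path. A small sketch using zlib's fastest level illustrates the trade on a toy 8 KB block; this is an assumption-laden illustration, not the xlog code path itself.

```python
import zlib

# Toy 8 KB "block": some repetitive payload plus zero padding,
# loosely imitating a half-filled data page.
page = (b"tuple data " * 100).ljust(8192, b"\x00")

# Level 1 is the cheapest CPU-wise; the point is bytes saved, not best ratio.
compressed = zlib.compress(page, level=1)

# Writing `compressed` instead of `page` means far fewer bytes hit the disk,
# so on an I/O-bound workload the extra CPU can pay for itself.
ratio = len(compressed) / len(page)
```

The same effect is what makes "zcat file.gz" beat "cat file" when the disk, not the CPU, is the bottleneck.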
Also, when building up a large database within one transaction, the xlog will eat a lot of storage - this can be quite annoying when you have to deal with a lot of data.
Are there any technical reasons which would prevent somebody from implementing compression?
Best regards,
Hans
--
Cybertec Geschwinde u Schoenig
Schoengrabern 134, A-2020 Hollabrunn, Austria
Tel: +43/660/816 40 77
www.cybertec.at, www.postgresql.at