On Tue, May 11, 2010 at 7:13 AM, Adam GROSZER <agros...@gmail.com> wrote:
> Hello Jim,
> Tuesday, May 11, 2010, 12:33:04 PM, you wrote:
> JF> On Tue, May 11, 2010 at 3:16 AM, Adam GROSZER <agros...@gmail.com> wrote:
>>> Hello Jim,
>>> Monday, May 10, 2010, 1:27:00 PM, you wrote:
>>> JF> On Sun, May 9, 2010 at 4:59 PM, Roel Bruggink <r...@fourdigits.nl> wrote:
>>>>> That's really interesting! Did you notice any issues performance-wise, or
>>>>> didn't you check that yet?
>>> JF> I didn't check performance. I just iterated over a file storage file,
>>> JF> checking compressed and uncompressed pickle sizes.
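
(FWIW, the scan was essentially the following -- a minimal sketch,
assuming ZODB's FileIterator API; 'Data.fs' stands in for the real
storage path:

    import zlib
    from ZODB.FileStorage import FileIterator

    raw = compressed = 0
    for txn in FileIterator('Data.fs'):   # one TransactionRecord per txn
        for record in txn:                # DataRecords within the txn
            if record.data:               # skip records with no pickle
                raw += len(record.data)
                compressed += len(zlib.compress(record.data))
    print('raw: %d bytes, compressed: %d bytes' % (raw, compressed))
)
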
>>> I'd say some checksum is then also needed to detect bit failures that
>>> mess up the compressed data.
> JF> Why?
> I think the gzip algo compresses to a bit-stream, where if even one
> bit has an error, the rest of the uncompressed data might be a total
> mess. If that one bit is relatively early in the stream, it's fatal.
> Salvaging the data is not a joy either.
> I know that at this level we should expect the OS and any underlying
> infrastructure to deliver error-free data or fail outright.
> Though I've seen some magic situations where a file copied over the
> network without error, yet a CRC check on it failed at the end :-O
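
Fair enough on the failure mode -- a single flipped bit in a zlib
stream typically makes the rest undecompressable, which is easy to
demonstrate (a minimal sketch; the payload is a made-up pickle):

    import pickle, zlib

    data = zlib.compress(pickle.dumps({'answer': 42}))
    corrupt = bytearray(data)
    corrupt[4] ^= 0x01           # flip one bit early in the deflate data
    try:
        zlib.decompress(bytes(corrupt))
    except zlib.error as e:
        # zlib streams end with an Adler-32, so even quietly garbled
        # output is normally caught at decompression time
        print('decompression failed: %s' % e)
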
How would a checksum help? All it would do is tell you you're hosed.
It wouldn't make you any less hosed.
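
Concretely, the most a checksum-on-write scheme can do is refuse to
hand back a garbled record -- a minimal sketch (the pack/unpack names
and the error handling here are made up):

    import zlib

    def pack(pickle_bytes):
        # store a CRC-32 alongside the compressed pickle
        payload = zlib.compress(pickle_bytes)
        return zlib.crc32(payload) & 0xffffffff, payload

    def unpack(crc, payload):
        if zlib.crc32(payload) & 0xffffffff != crc:
            # detection, not repair: the record is still lost
            raise ValueError('record corrupted; restore from backup')
        return zlib.decompress(payload)

What the caller does with that error is the real question, and without
a backup or a replica the answer is "nothing".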