You can test in a test environment, sure. But I think that doesn't qualify
as "in practice", unless you leave your rogue code in.
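To make the quoted suggestion concrete, here is a minimal sketch (my own illustration, not code from this thread) of swapping in a rogue hash that deliberately collides for chosen inputs, so the collision-handling path can be exercised deterministically in a test. The function name `rogue_sha256` and the `b"COLLIDE"` trigger pattern are made up for the example.

```python
import hashlib

def rogue_sha256(data: bytes) -> bytes:
    # Rogue stand-in for SHA-256: any input starting with b"COLLIDE"
    # is forced to the same fixed digest, guaranteeing collisions
    # between distinct inputs. All other inputs hash normally.
    if data.startswith(b"COLLIDE"):
        return b"\x00" * 32
    return hashlib.sha256(data).digest()

# Two distinct inputs now share a digest, so code that is supposed
# to handle collisions can be driven through that path on demand.
a = rogue_sha256(b"COLLIDE-1")
b = rogue_sha256(b"COLLIDE-2")
assert a == b and b"COLLIDE-1" != b"COLLIDE-2"
```

The point of the deterministic trigger is that the test either passes or fails repeatably, rather than waiting for an astronomically unlikely accidental collision.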

I agree that a 256-bit hash is unlikely to produce accidental collisions
compared to other causes of error (see my first message on this subject). I
don't really trust that it offers a full 256 bits of security against
intentional collisions; almost every 'secure' hash has eventually been at
least partially compromised.
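If one does not fully trust the hash, the mitigation mentioned further down in the thread (ZFS's verify option) is to compare the actual bytes whenever digests match, rather than trusting the digest alone. A minimal sketch of that idea follows; the class name `VerifyingStore` and its interface are hypothetical, invented for illustration.

```python
import hashlib

class VerifyingStore:
    """Content store that deduplicates only after a byte-for-byte check,
    so even a hash collision cannot silently merge distinct data."""

    def __init__(self):
        # digest -> list of distinct blocks that happen to share it
        self.blocks: dict[bytes, list[bytes]] = {}

    def put(self, data: bytes) -> tuple[bytes, int]:
        digest = hashlib.sha256(data).digest()
        bucket = self.blocks.setdefault(digest, [])
        for i, existing in enumerate(bucket):
            if existing == data:       # verify the bytes, not just the hash
                return digest, i       # true duplicate: deduplicate
        bucket.append(data)            # new block (or a genuine collision)
        return digest, len(bucket) - 1
```

The verification pass costs an extra read and compare on every digest match, which is why, as noted below, enabling it slows things down.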



On Fri, Sep 27, 2013 at 11:33 AM, Robbert van Dalen <
[email protected]> wrote:

> I believe such code can be tested in practice with great confidence.
> If I were to test this kind of code, I would replace the SHA256 code with
> a rogue version that emits equal hashes for certain bit patterns.
>
> As a side note, I also don't trust my MacBook's 16 GB of internal
> memory - there is a (rather large) chance that bit errors will
> silently corrupt state.
> Even ECC memory suffers that fate (but of course with a much lower chance).
>
> It is impossible to build a system that achieves 100% data correctness:
> SHA256 will do fine for now.
>
> On Sep 27, 2013, at 4:37 PM, David Barbour <[email protected]> wrote:
>
> > The usual problem with this sort of "handle this super-rare event when
> it happens" code is that it is poorly tested in practice. Should you trust
> it?
> >
> >
> > On Thu, Sep 26, 2013 at 11:46 PM, Robbert van Dalen <
> [email protected]> wrote:
> > Hi,
> >
> > ZFS has de-duplication built on top of SHA256 hashes.
> > If the verify option is also enabled, it is possible for ZFS to detect
> and work around hash collisions (although this option slows things down
> further).
> >
> > But ZFS can be considered a kind of 'central authority', so its
> de-duplication scheme may not carry over to a distributed setting.
> >
> > Regards,
> > Robbert.
> >
> > On Sep 27, 2013, at 2:50 AM, Wolfgang Eder <[email protected]> wrote:
> >
> > > hi,
> > > in recent discussions on this list, the idea of using hashes to
> > > identify or even name things is often mentioned.
> > > in this context, hashes are treated as being unique;
> > >
> > > albeit unlikely, it *is* possible that hashes are equal
> > > for two distinct things. are there ideas about
> > > how to handle such a situation?
> > >
> > > thanks and kind regards
> > > wolfgang
>
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
