On Tue, 2017-10-24 at 06:04 +0200, Michał Górny
wrote:
> On Mon, 2017-10-23 at 21:00 +0000, Robin H. Johnson
> wrote:
> > On Mon, Oct 23, 2017 at 01:33:15PM +0200, Michał Górny wrote:
> > > On 23 October 2017 at 10:16:38 CEST, "Robin H. Johnson" 
> > > <robb...@gentoo.org> wrote:
> > > > On Fri, Oct 20, 2017 at 05:21:47PM -0500, R0b0t1 wrote:
> > > > > In general I do not mind updating the algorithms used, but I do feel
> > > > > it is important to keep at least three present. Without at least
> > > > > three (or a larger odd number) it is not possible to break a tie.
> > > > > 
> > > > > That may ultimately be beside the point, as any invalid hashes should
> > > > > result in the user contacting the developers or doing something else,
> > > > > but it is hard to know.
> > > > 
> > > > I'm dropping the rest of your email about exactly which hashes
> > > > we're bike-shedding, to focus on the number of hashes.
> > > > 
> > > > I agree with your opinion to have three hashes present, and I'll give
> > > > a solid rationale with historical references.
> > > > 
> > > > The major reason to have 3 hashes is as a tie-breaker: to detect
> > > > whether there was a bug in the hash somehow (implementation,
> > > > compiler/assembler, interpreter) and not in the distfile. This also
> > > > strongly suggests that the 3 hashes should have different
> > > > constructions.
> > > 
> > > 1. How are you planning to distinguish a successful attack against two 
> > > hashes from a bug in one of them?
> > > 
> > > 2. Even if you do, what's the value of knowing that?
> > 
> > [BOBO06] is relevant research here, I cited it in the work that went into
> > GLEP59, the last time we updated the hashes. The less-technical explanation 
> > of it is:
> > "If you can express the output of H1(x)H2(x) in LESS bits than the combined
> > output size of H1,H2, then the attacks get a little bit easier"
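(The combiner being discussed, H1(x)||H2(x), is easy to sketch. A minimal illustration in Python, using SHA-512 and BLAKE2b from the standard library purely as stand-ins for two hashes of different construction, not as a claim about what Gentoo actually uses:)

```python
import hashlib

def concat_digest(data: bytes) -> str:
    """Concatenated combiner H1(x) || H2(x), with two different
    constructions: SHA-512 (Merkle-Damgard) and BLAKE2b (HAIFA-derived)."""
    h1 = hashlib.sha512(data).hexdigest()   # 512 bits -> 128 hex chars
    h2 = hashlib.blake2b(data).hexdigest()  # 512 bits -> 128 hex chars
    return h1 + h2

# [BOBO06]'s observation, roughly: if this combined 1024-bit output could
# be expressed in fewer bits, attacks against the combiner get easier.
```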
> > 
> > Some important pieces from it:
> > [J04] "showed that the concatenation of two Merkle-Damgard functions is not
> > much more secure than the individual functions.", but this holds ONLY if
> > the hash functions chosen are of the same construction (MD).
> > Choosing hashes with different constructions (Merkle-Damgard, HAIFA,
> > Sponge) is important, and given a black box environment, 
> > 
> > The original mail reached approximately the same conclusion, namely to
> > look for diverse hashes, but decided that 2 were enough.
> > 
> > Q: What are the odds of a simultaneous successful attack against two 
> > hashes? 
> > A: IDK, but if the hash functions are truly independent, it must be provably
> >    lower than the odds of an attack against a single hash.
> 
> We're talking about really huge (→∞) numbers here. It's not a 'random'
> attack against one hash. It's an attack that allows sneaking malicious
> code in with an unchanged file size (since we store that too) and with
> no apparent side effects (what's the point of spending that many
> resources if the user is going to notice?).
> 
> > Q: What's the big difference between a bug and a successful attack in a 
> > hash?
> > A: Bugs are more likely initially, and attacks come later.
> 
> Sounds like an entirely abstract point in time ;-).
> 
> > All of that said, is there really a significant long-term gain in
> > multiple hashes? (setting aside the short-term advantage in a transition
> > period for changing hashes)
> 
> No, and that's my point. One hash is perfectly fine.
> 
> Two hashes are useful for transition purposes. If we take two fast
> hashes (e.g. the proposed SHA512 + BLAKE2B, which have similar speed),
> we can use 2 threads to prevent the speed loss (except on old single-
> core machines).
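(A minimal sketch of that two-threaded scheme, in Python with hashlib and threading purely for illustration; this is not Portage's actual code. Note that CPython's hashlib releases the GIL when hashing buffers larger than 2047 bytes, so for large distfiles the two threads genuinely run in parallel:)

```python
import hashlib
import threading

def hash_parallel(data: bytes) -> dict:
    """Compute SHA-512 and BLAKE2b in two threads, so the wall-clock cost
    approaches that of the slower hash rather than the sum of both."""
    results = {}

    def run(name: str) -> None:
        results[name] = hashlib.new(name, data).hexdigest()

    threads = [threading.Thread(target=run, args=(name,))
               for name in ("sha512", "blake2b")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```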
> 
> Three hashes don't give any noticeable advantage. If we want a diverse
> construction, we take SHA3. SHA3 is slower than SHA2 + BLAKE2 combined,
> so even with 3-threaded computation it's going to be slower.

Oh, and most notably, the speed loss will be mostly visible to users.
An attacker would have to compute the additional hashes only
if the fastest hash already matched, i.e. rarely. Users will have to
compute them all the time.
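That asymmetry is easy to see in a sketch of a verifier that checks hashes in speed order and bails out on the first mismatch (a hypothetical helper with an assumed speed ordering, not the actual Manifest-checking code):

```python
import hashlib

def verify(data: bytes, expected: dict) -> bool:
    """Check hashes fastest-first, stopping at the first mismatch.
    A user with a correct distfile pays for every hash; an attacker's
    candidate forgery is almost always rejected by the first (cheapest)
    hash, so the attacker rarely pays for the remaining ones."""
    for algo in ("blake2b", "sha512", "sha3_512"):  # assumed fastest-first
        if hashlib.new(algo, data).hexdigest() != expected[algo]:
            return False  # mismatch: later hashes are never computed
    return True
```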

-- 
Best regards,
Michał Górny

