- Forwarded message from Sašo Kiselkov skiselkov...@gmail.com -
From: Sašo Kiselkov skiselkov...@gmail.com
Date: Wed, 03 Oct 2012 15:39:39 +0200
To: z...@lists.illumos.org
CC: Eugen Leitl eu...@leitl.org
Subject: Re: [cryptography] [zfs] SHA-3 winner announced
Well, it's somewhat difficult to respond to cross-posted e-mails, but
here goes:
On 10/03/2012 03:15 PM, Eugen Leitl wrote:
- Forwarded message from Adam Back a...@cypherspace.org -
From: Adam Back a...@cypherspace.org
Date: Wed, 3 Oct 2012 13:25:06 +0100
To: Eugen Leitl eu...@leitl.org
Cc: cryptography@randombit.net, Adam Back a...@cypherspace.org
Subject: Re: [cryptography] [zfs] SHA-3 winner announced
(comment to Saso's email forwarded by Eugen):
> Well, I think it would be fairer to say SHA-3 was initiated more in the
> direction of improving on the state of the art in the security of hash
> algorithms
> [snip]
> In that you see the selection of Keccak, focusing more on its high security
> margin, and new defenses against existing known types of attacks.
At no point did I claim that the NIST people chose badly. I always said
that NIST's requirements need not align perfectly with ZFS' requirements.
> If the price of that is slower, so be it - while fast primitives are very
> useful, having things like the full break of MD5 and the significant
> weakening of SHA-1 take the security protocols industry by surprise is also
> highly undesirable and expensive to fix. To some extent it is, for the
> short/mid term, almost unfixable, given the realities of software and
> firmware updates.
Except in ZFS, where changing the hash is a simple zfs set command.
Remember, Illumos' ZFS doesn't use the hash as a security feature at all -
that property is not the prime focus.
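For illustration, here is roughly what that looks like (the pool/dataset name is hypothetical; consult zfs(1M) on illumos for the checksum values your platform actually supports):

```shell
# Hypothetical dataset name "tank/data"; the set of valid checksum values
# varies by platform and ZFS version - see zfs(1M) before relying on one.
# Switch the checksum algorithm used for newly written blocks:
zfs set checksum=sha256 tank/data

# Confirm the property took effect:
zfs get checksum tank/data
```

Note that this only affects blocks written after the change; existing blocks keep the checksums they were written with until they are rewritten.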
> So while I am someone who pays attention to protocol, algorithm and
> implementation efficiency, I am happy with Keccak.
ZFS is not a security protocol, therefore the security margin of the
hash is next to irrelevant. That is not to say it's entirely
pointless - it's good to have some security there, just for the added
peace of mind - but it's crazy to make it the primary concern.
> And CPUs are getting faster all the time; the Q3 2013 Ivy Bridge (22nm)
> Intel i7 next year is going to be available in 12-core (24 hyperthreads)
> with 30MB cache. Just chuck another core at it if you have problems. ARMs
> are also coming out with more cores.
Aaah, the good old "but CPUs are getting faster every day!" argument. So
should people hold off for a few years before purchasing new equipment
for problems they have now? And if these new super-duper CPUs are so
much higher-performing, why not use a more efficient algo and push even
higher numbers with them? If I could halve my costs simply by switching
to a faster algorithm, I'd do it in a heartbeat!
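The throughput gap being argued about here is easy to measure on any given machine; OpenSSL ships a benchmark subcommand that reports hash throughput across several block sizes (the relative numbers between algorithms, not the absolute figures, are what matter to the dedup argument):

```shell
# "openssl speed" is a standard OpenSSL subcommand; it prints bytes/second
# processed by each digest at several input block sizes. Results depend
# entirely on the local CPU.
openssl speed sha256 sha512
```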
> And the AMD 7970 GPU has 2048 cores.
Are you suggesting we run ZFS kernel code on GPUs? How about driver
issues? Or simultaneous use by graphical apps/games? Who's going to
implement and maintain this? It's easy to propose theoretical models,
but unless you plan to invest the energy in this, it'll most likely
remain purely theoretical.
> For embedded and portable use, a reasonably fast and energy-frugal 32-bit
> or 64-bit processor is really cheap these days. OK, I know there's always
> a market for 8-bit processors on the extreme cheap/low-power end, but the
> validity of the cost complaint is diminishing even there.
My point in recommending a higher-speed hash is not for low-power
systems - these wouldn't even come near dedup for the foreseeable future.
Cheers,
--
Saso
- End forwarded message -
--
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography