Re: Fast MAC algorithms?

2009-07-23 Thread Joseph Ashwood

--
From: Nicolas Williams nicolas.willi...@sun.com
Sent: Tuesday, July 21, 2009 10:43 PM
Subject: Re: Fast MAC algorithms?


> But that's not what I'm looking for here.  I'm looking for the fastest
> MACs, with extreme security considerations (e.g., warning, warning!
> must rekey every 10 minutes)


There's a reason everyone is ignoring that requirement: rekeying in any 
modern system is more or less trivial. Take AES as an example: rekeying 
every 10 minutes leaves throughput at 99.999% of the original; you'll see 
bigger differences depending on whether or not you move the mouse.
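
To put a number on that, a back-of-envelope sketch in Python (the cycle 
counts are illustrative assumptions, not benchmarks):

# Back-of-envelope sketch with assumed, illustrative numbers (not
# benchmarks): one AES key schedule versus ten minutes of bulk
# encryption on a single saturated core.
CYCLES_PER_BYTE = 12          # assumed software AES-128 throughput
KEY_SCHEDULE_CYCLES = 1000    # assumed (generous) AES-128 key expansion
CPU_HZ = 2e9                  # assumed 2 GHz core
REKEY_SECONDS = 600           # rekey every 10 minutes

bulk_cycles = CPU_HZ * REKEY_SECONDS            # cycles spent encrypting
data_gb = bulk_cycles / CYCLES_PER_BYTE / 1e9   # ~100 GB between rekeys
overhead = KEY_SCHEDULE_CYCLES / bulk_cycles    # fraction lost to rekeying
print(f"{data_gb:.0f} GB between rekeys; rekey overhead {overhead:.1e}")

That prints an overhead around 8e-10, i.e. far below anything you could 
ever measure.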



> being possibly OK, depending on just how
> extreme -- the sort of algorithm that one would not make REQUIRED to
> implement, but which nonetheless one might use in some environments
> simply because it's fast.


I would NEVER recommend it, let me repeat that: I would NEVER recommend it, 
but Panama is a higher-performing design, IIRC about 8x the speed of the 
good recommendations. But DON'T USE PANAMA. You wanted a bad recommendation; 
Panama is a bad recommendation.


If you want a good recommendation that is faster, use Poly1305-AES. You'll 
get some extra speed without compromising security.
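
For a sense of why it's fast: Poly1305-AES does one 130-bit 
multiply-and-reduce per 16-byte block plus a single AES call per message. 
A minimal Python sketch of Bernstein's construction, for illustration only 
(it assumes a recent version of the "cryptography" package for the AES 
call; use a vetted implementation in practice, not this):

# A minimal sketch of Bernstein's Poly1305-AES: tag = Poly1305_r(m) + AES_k(n).
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

P = (1 << 130) - 5  # the prime 2^130 - 5

def poly1305_aes(r: bytes, k: bytes, nonce: bytes, msg: bytes) -> bytes:
    # Clamp r per the spec (clears 22 bits so multi-word carries stay cheap).
    r = bytearray(r)
    for i in (3, 7, 11, 15):
        r[i] &= 15
    for i in (4, 8, 12):
        r[i] &= 252
    rnum = int.from_bytes(bytes(r), "little")

    # Horner evaluation of the message polynomial mod 2^130 - 5:
    # one multiply-and-reduce per 16-byte block.
    h = 0
    for i in range(0, len(msg), 16):
        block = msg[i:i + 16] + b"\x01"  # append the pad byte
        h = (h + int.from_bytes(block, "little")) * rnum % P

    # One AES call per message: mask the sum with AES_k(nonce).
    enc = Cipher(algorithms.AES(k), modes.ECB()).encryptor()
    s = int.from_bytes(enc.update(nonce) + enc.finalize(), "little")
    return ((h + s) % (1 << 128)).to_bytes(16, "little")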



> For example, many people use arcfour in SSHv2 over AES because arcfour
> is faster than AES.


I would argue that they use it because they are stupid. ARCFOUR should have 
been retired well over a decade ago: it is weak, it meets no reasonable 
security requirements, and in most situations it is not actually faster, 
because of the cache thrashing it frequently induces through its large key 
expansion.



> In the crypto world one never designs weak-but-fast algorithms on
> purpose, only strong-and-preferably-fast ones.  And when an algorithm is
> successfully attacked it's usually deprecated,


The general preference is to permanently retire them. The better algorithms 
are generally at least as fast; that's part of the problem you seem to be 
having: you're not understanding that secure is not the same word as slow. 
In fact, everyone has worked very hard at making the secure options at 
least as fast as the insecure ones.



> new
> ones tend to be slower because resistance against new attacks tends to
> require more computation.


New ones tend to be faster than the old:
New ones are designed with more recent CPUs in mind.
New ones are designed with the best available knowledge of how to build security.
New ones are simpler by design.
New ones make use of everything that has been learned.


> I realized this would make my question seem a
> bit pointless, but hoped I might get a surprising answer :(


I think the answer surprised you more than you expected. You had hoped for 
some long-forgotten, extremely fast algorithm; what you've instead learned 
is that the long-forgotten algorithms were forgotten not only because of 
their security, but because they were eclipsed on speed as well.


I've moved this to the end to finish on the point:

> The SSHv2 AES-based ciphers ought to be RTI and
> default choice, IMO, but that doesn't mean arcfour should not be
> available.


I very strongly disagree. One of the fundamental assumptions of creating 
secure protocols is that sooner or later someone will bet their life on 
your work. This isn't idle overstatement; it is an observation.
How many people bet their lives and lost because Twitter couldn't protect 
their information in Iran?

How many people bet their life savings on SSL/TLS?
How many people trusted various options with their complete medical history?
How many people bet their life or freedom on the ability of PGP to protect 
them?


People bet their lives on security all the time; it is part of the job to 
make sure that bet is safe.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: New Technology to Make Digital Data Disappear, on Purpose

2009-07-23 Thread Jerry Leichter

On Jul 21, 2009, at 10:48 PM, Perry E. Metzger wrote:


> d...@geer.org writes:
>> The pieces of the key, small numbers, tend to "erode" over time as
>> they gradually fall out of use. To make keys erode, or timeout, Vanish
>> takes advantage of the structure of a peer-to-peer file system. Such
>> networks are based on millions of personal computers whose Internet
>> addresses change as they come and go from the network.
>>
>> One would imagine that as IPv6 rolls out, the need
>> for DHCP goes to zero excepting for mobile devices
>> attaching to public (not carrier) nets.  Yes?
>
> Off topic, but actually DHCP is still needed. A machine needs to
> configure a lot more than just its address and router in common cases
> (it wants things like DNS servers, NTP servers, etc.) and in large
> deployments, it is often far easier to let machines autoconfigure these
> things during boot using DHCP even on comparatively hard-wired
> networks.

And with that, let's return to crypto...
The proposal makes use of an incidental property of existing DHT 
implementations: because many nodes run on machines with dynamic IP 
addresses, the table keeps changing as it is rehashed, and this leads to 
the loss of bits. It's not actually clear from the paper how much of the 
bit loss is due to IP address changes and how much to other phenomena. In 
any case, if this idea catches on and there isn't enough noise in the 
network naturally to give an adequate bit-drop rate, it would be reasonable 
to add an explicit bit-dropping mechanism to some new release. You'd need 
one to add IPv6 support anyway!
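
For intuition about how that bit loss maps onto key death: Vanish splits 
the data key into Shamir shares and needs a threshold of them to 
reconstruct it, so recoverability falls off a cliff as shares drop out. A 
small Python sketch (my own model, assuming independent share survival 
with probability p, which the paper does not claim):

# Back-of-envelope model, not from the paper: n Shamir shares, any k of
# which reconstruct the key. If each share independently survives churn
# with probability p, the key stays recoverable with probability
# sum over i = k..n of C(n,i) * p^i * (1-p)^(n-i).
from math import comb

def p_recoverable(n, k, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(p_recoverable(10, 7, 0.9))  # ~0.99: little churn, key still alive
print(p_recoverable(10, 7, 0.5))  # ~0.17: heavy churn, key effectively gone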

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Fast MAC algorithms?

2009-07-23 Thread Peter Gutmann
mhey...@gmail.com mhey...@gmail.com writes:

> 2) If you throw TCP processing in there, unless you are consistently going to
> have packets on the order of at least 1000 bytes, your crypto algorithm is
> almost _irrelevant_.
> [...]
> for a Linux 2.2.14 kernel, remember, this was 10 years ago.

Could the lack of support for TCP offload in Linux have skewed these figures
somewhat?  It could be that the caveat for the results isn't so much "this was
done ten years ago" as "this was done with a TCP stack that ignores the
hardware's advanced capabilities."

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Fast MAC algorithms?

2009-07-23 Thread mhey...@gmail.com
On Thu, Jul 23, 2009 at 1:34 AM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:
> mhey...@gmail.com mhey...@gmail.com writes:
>> 2) If you throw TCP processing in there, unless you are consistently going to
>> have packets on the order of at least 1000 bytes, your crypto algorithm is
>> almost _irrelevant_.
>> [...]
>> for a Linux 2.2.14 kernel, remember, this was 10 years ago.
>
> Could the lack of support for TCP offload in Linux have skewed these figures
> somewhat?  It could be that the caveat for the results isn't so much "this was
> done ten years ago" as "this was done with a TCP stack that ignores the
> hardware's advanced capabilities."

TCP offload would, of course, help reduce CPU load and make the crypto
algorithm choice have more of an effect. With our tests, however, to
actually show an effect we had to use large packet sizes, which reduced
the impact of TCP - I know we were using 64K packets for some tests.
Boosting the packet size also improved cycles-per-byte for NMAC-style
algorithms, because the outer function gets run less often for a given
amount of data (IPsec processing occurs outbound, prior to fragmentation).
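
To make that amortization concrete, a rough model (my own back-of-envelope, 
not our old measurements) of SHA-1 compression-function calls in HMAC as 
packet size grows:

# Rough model, not measured data: SHA-1 compression calls for HMAC-SHA1
# as a function of message size. The inner hash processes the 64-byte
# ipad block plus the padded message; the outer hash always costs two
# more calls - fixed overhead that small packets amortize badly.
def hmac_sha1_compressions(msg_len):
    inner = 1 + (msg_len + 9 + 63) // 64   # ipad block + padded message blocks
    outer = 2                              # opad block + 20-byte inner digest
    return inner + outer

for size in (64, 256, 1500, 65536):
    calls = hmac_sha1_compressions(size)
    print(f"{size:6d} B: {calls:5d} calls, {size / calls:7.1f} B/call")

A 64-byte packet gets under 13 bytes per compression call; a 64K packet 
gets nearly the full 64 bytes per call.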

We needed to reduce the impact of TCP because it remained true that, when
you actually do something with the data, the cycles-per-byte of that
processing largely determines how much slowdown your MAC algorithm choice
will cause.

To throw another monkey wrench into the works: obviously, you may think,
But what if I have a low-power application, trying to be green, you know,
so I want to use less processor-intensive cryptography to save energy?
Well, I sat in the middle of a group of people doing work for another
DARPA project (SensIT) shortly after the ACSA project. The SensIT project
was for low-energy wireless sensors, and in it we experimented with
different key exchange/agreement techniques in an attempt to economize on
energy. As a throw-in result, the SensIT people found that it takes three
orders of magnitude more energy to transmit or receive data on a per-bit
basis than it does to do AES+HMAC-SHA1 (it came as a surprise to me back
then that reception and transmission take similar amounts of energy).
Moral: don't scrimp on crypto to save energy - at least for wireless; I
don't know what it costs to send a bit down a twisted pair or fiber.
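
To see what three orders of magnitude means for the trade-off (the 
magnitudes below are assumed for illustration, not SensIT's measured 
figures):

# Illustrative arithmetic only; the absolute values are my assumptions.
# The point is the ratio: shaving crypto cost saves almost nothing when
# the radio dominates the energy budget by ~1000x.
RADIO_J_PER_BIT = 1e-6    # assumed radio transmit/receive cost per bit
CRYPTO_J_PER_BIT = 1e-9   # assumed AES+HMAC-SHA1 cost per bit, same node
print(f"radio/crypto energy ratio: {RADIO_J_PER_BIT / CRYPTO_J_PER_BIT:.0f}x")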

The SensIT final report is available here:
http://www.cs.umbc.edu/courses/graduate/CMSC691A/Spring04/papers/nailabs_report_00-010_final.pdf.

-Michael Heyman

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Fast MAC algorithms?

2009-07-23 Thread Nicolas Williams
On Thu, Jul 23, 2009 at 05:34:13PM +1200, Peter Gutmann wrote:
> mhey...@gmail.com mhey...@gmail.com writes:
>> 2) If you throw TCP processing in there, unless you are consistently going to
>> have packets on the order of at least 1000 bytes, your crypto algorithm is
>> almost _irrelevant_.
>> [...]
>> for a Linux 2.2.14 kernel, remember, this was 10 years ago.
>
> Could the lack of support for TCP offload in Linux have skewed these figures
> somewhat?  It could be that the caveat for the results isn't so much "this was
> done ten years ago" as "this was done with a TCP stack that ignores the
> hardware's advanced capabilities."

How much NIC hardware does both ESP/AH and TCP offload?  My guess: not
much.  A shame, that.

Once you've gotten a packet off the NIC to do ESP/AH processing, you've
lost the opportunity to use TOE.

Nico
-- 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Fast MAC algorithms?

2009-07-23 Thread james hughes
Note for the moderator: this is not crypto, but the idea that TOE is the
solution to networking performance problems is a perception that is
dangerous to leave standing in the crypto community.


On Jul 23, 2009, at 11:45 PM, Nicolas Williams wrote:


> On Thu, Jul 23, 2009 at 05:34:13PM +1200, Peter Gutmann wrote:
>> mhey...@gmail.com mhey...@gmail.com writes:
>>> 2) If you throw TCP processing in there, unless you are consistently going to
>>> have packets on the order of at least 1000 bytes, your crypto algorithm is
>>> almost _irrelevant_.
>>> [...]
>>> for a Linux 2.2.14 kernel, remember, this was 10 years ago.
>>
>> Could the lack of support for TCP offload in Linux have skewed these figures
>> somewhat?  It could be that the caveat for the results isn't so much "this was
>> done ten years ago" as "this was done with a TCP stack that ignores the
>> hardware's advanced capabilities."
>
> How much NIC hardware does both ESP/AH and TCP offload?  My guess: not
> much.  A shame, that.
>
> Once you've gotten a packet off the NIC to do ESP/AH processing, you've
> lost the opportunity to use TOE.


IPsec offload can have value. TOEs are far more controversial.

TOEs implemented in a slow processor on a NIC card have been shown many
times to be ineffective compared to keeping TCP in the fastest CPU (where
it is now). For vendors that can't optimize their TCP implementation
(because it is just too complicated for them?), TOE is a siren call that
distracts them from their real problem. Look at Van Jacobson's post of
May 2000 entitled "TCP in 30 instructions":

http://www.pdl.cmu.edu/mailinglists/ips/mail/msg00133.html

There was a paper about this, but I am at a loss to find it. One can go
even farther back to "An Analysis of TCP Processing Overhead" by Clark,
Jacobson, Romkey, and Salwen (1989), which states "The protocol itself is
a small fraction of the problem":

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.75.5741

Back to crypto please.




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com