Re: Fast MAC algorithms?

2009-07-24 Thread John Gilmore
 2) If you throw TCP processing in there, unless you are consistently going to
 have packets on the order of at least 1000 bytes, your crypto algorithm is
 almost _irrelevant_.

This is my experience, too.  And I would add "and lots of packets".
The only crypto overhead that really mattered in a real application
was the number of round-trip times it took to negotiate protocols and
keys.  Crypto's CPU time is very very seldom the limiting factor in
real end-user application performance.
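As a rough illustration of the point (every figure below is an assumption chosen for illustration, not a measurement), a quick sketch comparing the cost of handshake round trips against bulk symmetric crypto:

```python
# Back-of-envelope comparison: a key-negotiation handshake costs round
# trips; bulk symmetric crypto costs CPU time per byte.  All constants
# are assumed figures for illustration only.

RTT_S = 0.05              # assumed WAN round-trip time: 50 ms
HANDSHAKE_RTTS = 2        # assumed: 2 round trips to negotiate protocol and keys
AES_BYTES_PER_S = 100e6   # assumed software cipher throughput: 100 MB/s

def handshake_time():
    # Fixed latency cost, paid regardless of how much data follows.
    return HANDSHAKE_RTTS * RTT_S

def bulk_crypto_time(nbytes):
    # CPU cost of actually encrypting the payload.
    return nbytes / AES_BYTES_PER_S

# For a 1 MB transfer, the handshake's round trips dwarf the cipher time.
setup = handshake_time()                # 0.1 s
cipher = bulk_crypto_time(1_000_000)    # 0.01 s
print(f"handshake {setup*1000:.0f} ms vs cipher {cipher*1000:.0f} ms")
```

Under these (generous to the cipher's disadvantage) assumptions, the negotiation latency is still an order of magnitude larger than the crypto itself.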

 Could the lack of support for TCP offload in Linux have skewed these figures
 somewhat?  It could be that the caveat for the results isn't so much "this was
 done ten years ago" as "this was done with a TCP stack that ignores the
 hardware's advanced capabilities".

I have never seen a network card or chip whose advanced capabilities
included the ability to speed up TCP.  Most such advanced designs
actually ran slower than merely doing TCP in the Linux kernel using an
uncomplicated chip.  I saw a Patent Office procurement of Suns in the
'80s that demanded these slow TCP offload boards (I had to write the
bootstrap code for the project) even though the motherboard came with
an Ethernet chip and software stack that could run TCP *at wire speed*
all day and night -- for free.  The super whizzo board couldn't even
send back-to-back packets, as I recall.  Some government contractor
had added the TCP offload requirement, presumably to inflate the
price that they were adding a percentage markup to.

As a crypto-relevant aside, last year I looked at using the crypto
offload engine in the AMD Geode cpu chip to speed up Linux crypto
operations in the OLPC.  There was even a nice driver for it.
Summary: useless.  It had been designed by somebody who had no idea of
the architecture of modern software.  The crypto engine used DMA for
speed, used physical rather than virtual addresses, and stored the
keys internally in its registers -- so it couldn't work with virtual
memory, and couldn't conveniently be shared between two different
processes.  It was SO much faster to do your crypto by hand in a
shared library in a user process, than to cross into the kernel, copy
the data to be in contiguous memory locations (or manually translate
the addresses and lock down those pages into physical memory), copy
the keys and IVs into the accelerator, do the crypto, copy the results
back into virtual memory, and reschedule the user process.  In typical
applications (which don't always use the same key) you'd need to do
this dance once for every block encrypted, or perhaps if you were
lucky, for every packet.  Even kernel crypto wasn't worth doing
through the thing.  And the software libraries were not only faster,
they were also portable, running on anything, not just one obsolete
chip.
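The economics of that per-operation dance can be sketched with a toy cost model.  Every constant below is an assumption picked for illustration, not a Geode measurement; the shape of the result, not the exact numbers, is the point:

```python
# Toy cost model: an accelerator that is much faster per byte still
# loses when every operation pays a fixed overhead for the kernel
# crossing, the copies, and the key/IV reload.  All constants assumed.

SW_NS_PER_BYTE = 20        # assumed: software AES in a user-space library
HW_NS_PER_BYTE = 2         # assumed: the engine itself is 10x faster per byte
HW_FIXED_NS = 50_000       # assumed: syscall + copies + key/IV load + reschedule

def sw_cost(nbytes):
    # Pure per-byte cost: no kernel crossing needed.
    return SW_NS_PER_BYTE * nbytes

def hw_cost(nbytes):
    # Fixed setup cost plus the engine's per-byte cost.
    return HW_FIXED_NS + HW_NS_PER_BYTE * nbytes

# Crossover point: under these assumptions the engine only pays off
# above ~2.8 KB per operation, while per-block or per-packet use
# stays well below that.
crossover = HW_FIXED_NS / (SW_NS_PER_BYTE - HW_NS_PER_BYTE)
print(f"accelerator pays off above ~{crossover:.0f} bytes per operation")
for n in (64, 1500, 64_000):
    winner = "software" if sw_cost(n) < hw_cost(n) else "hardware"
    print(f"{n:6d} bytes: {winner} wins")
```

Even a 1500-byte packet loses to the shared library in this model; only large batched operations amortize the fixed overhead, and typical applications never get to batch.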

Hardware guys are just jerking off unless they spend a lot of time
with software guys AT THE DESIGN STAGE before they lay out a single
gate.  One stupid design decision can take away all the potential gain.
Every TCP offloader I've seen has had at least one.

John

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Fast MAC algorithms?

2009-07-24 Thread Peter Gutmann
[I realise this isn't crypto, but it's arguably security-relevant and arguably
 interesting :-)].

James Hughes hugh...@mac.com writes:

TOEs that are implemented in a slow processor in a NIC card have been shown
many times to be ineffective compared to keeping TCP in the fastest CPU
(where it is now).

The problem with statements like this is that they smack of the Linux
religious zealotry against TCP offload support in the kernel: "TOEs are bad
because we say they are, and we'll keep asserting this until you go away".  A
decade ago, during the Win2K development, Microsoft were measuring a 1/3
reduction in CPU usage just from TCP checksum offload.  Given the time frame
this was probably on 300MHz PIIs, but then again it'd be with late-90s
vintage NICs.  On the other hand I've seen even more impressive figures with
their more recent TCP chimney offload (which just moves more of the NDIS stack
onto the NIC; I think it came out around Server 2003).
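For concreteness, the per-byte work that checksum offload moves onto the NIC is the 16-bit ones'-complement sum of RFC 1071.  A minimal sketch (the real TCP checksum also covers a pseudo-header, omitted here):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum over 16-bit words (RFC 1071 style).

    This touch-every-byte loop is exactly the work checksum offload
    removes from the host CPU.
    """
    if len(data) % 2:
        data += b"\x00"          # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF
```

Run against the worked example in RFC 1071 (words 0001, f203, f4f5, f6f7), this yields the expected checksum 0x220d.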

Does this mean that MS have figured out (a decade or so ago) how to make TOE
work while the OSS community has been too occupied telling everyone it doesn't
work to do anything about it?  There must be some reason for the difference between
the two camps.

Peter.



Re: Fast MAC algorithms?

2009-07-24 Thread james hughes


On Jul 24, 2009, at 1:30 PM, Peter Gutmann wrote:

[I realise this isn't crypto, but it's arguably security-relevant and arguably
interesting :-)].

As long as we think this is interesting (although I respectfully disagree
that there are any inherent security problems with TOE.  Maybe there are
insecure implementations...).



James Hughes hugh...@mac.com writes:

TOEs that are implemented in a slow processor in a NIC card have been shown
many times to be ineffective compared to keeping TCP in the fastest CPU
(where it is now).

The problem with statements like this is that they smack of the Linux
religious zealotry against TCP offload support in the kernel: "TOEs are bad
because we say they are, and we'll keep asserting this until you go away".


There were a dozen or so protocol offload research projects that the US
government funded in the 90s.  All failed.  Are the people who say "TOEs
are bad" doing so because of zealotry, or because they are standing on the
shoulders of the people that ran those projects?  At Network Systems, we
partnered with HT Kung, of CMU at the time, to move TCP out of a really slow
DECstation.  Result?  An accelerator that cost as much as the workstation,
and that was faster only until the next processor version was available.
Yes, we could have reduced it to a chip, but it wasn't.  The take-away was
that improving the software is the gift that keeps on giving.  Moore's law
means you get a faster TCP every time the clock ticks.

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.1138

BTW, I am not a Linux bigot, just someone that got caught up in this issue
more than a decade ago.  I do not agree with your assertion, or with the
Wikipedia page, that this is Linux bigotry.  I find that page horribly
inaccurate and self-serving to the TOE manufacturing community.


What I learned from participating in a project that spent $5M of taxpayer
money was that the protocol itself is a small fraction of the problem.



A decade ago, during the Win2K development, Microsoft were measuring a 1/3
reduction in CPU usage just from TCP checksum offload.  Given the time frame
this was probably on 300MHz PIIs, but then again it'd be with late-90s
vintage NICs.  On the other hand I've seen even more impressive figures with
their more recent TCP chimney offload (which just moves more of the NDIS
stack onto the NIC; I think it came out around Server 2003).

Does this mean that MS have figured out (a decade or so ago) how to make TOE
work while the OSS community has been too occupied telling everyone it
doesn't work to do anything about it?  There must be some reason for the
difference between the two camps.


Offloading features like checksumming, fragmentation/reassembly (aka Large
Segment Offload), packet categorization, splitting flows to different
threads, etc. is not TOE.


TOE is offloading of the TCP stack.  The thin line that is crossed is where
the TCP state is kept.  If the state is kept in the card, then the protocol
to get the data reliably to the application has more corner cases (hence
complexity), since the IP layer can be lossy and the socket layer can not.
In all the research, this has always been the case.


If there is something Windows has not learned, it could be that processing
TCP should be simple and quick.  Since the source code is not available, I
don't know whether their software falls into the too-complicated camp or
not...  In the case of Chimney partial stack offload, the state is in both
places.  Sounds simple and straightforward, right?


The case of iSCSI, where a complete protocol conversion is done (the card
looks like a SCSI card, but the data goes out over TCP/IP), is a different
story (one which is also arguably still about solving the OS vendor's lack
of software agility with hardware), but that is not the intent of this
discussion.


I fully agree that offloading features that make the TCP processing easier
is a good thing.


Back to crypto?




Jim



Hacker Says iPhone 3GS Encryption Is ‘Useless’ for Businesses

2009-07-24 Thread mhey...@gmail.com
From http://www.wired.com/gadgetlab/2009/07/iphone-encryption/:

   the supposedly enterprise-friendly encryption included with the iPhone 3GS
   is so weak it can be cracked in two minutes with a few pieces of readily
   available freeware...“I don’t think any of us [developers] have ever seen
   encryption implemented so poorly before, which is why it’s hard to describe
   why it’s such a big threat to security.”...Wondering where the encryption
   comes into play?  It doesn’t.  Strangely, once one begins extracting data
   from an iPhone 3GS, the iPhone begins to decrypt the data on its own

-Michael



cleversafe says: 3 Reasons Why Encryption is Overrated

2009-07-24 Thread Zooko Wilcox-O'Hearn

[cross-posted to tahoe-...@allmydata.org and cryptogra...@metzdowd.com]

Disclosure:  Cleversafe is to some degree a competitor of my Tahoe-LAFS
project.  On the other hand, I tend to feel positive towards them because
they open-source much of their work.  Our Related Projects page has included
a link to Cleversafe for years now, I briefly collaborated with some of them
on a paper about erasure coding last year, and I even spoke briefly with
them about the idea of becoming an employee of their company this year.  I
am tempted to ignore this idea that they are pushing about encryption being
overrated, because they are wrong and it is embarrassing.  But I've decided
not to ignore it, because people who publicly spread this kind of
misinformation need to be publicly contradicted, lest they confuse others.


Cleversafe has posted a series of blog entries entitled "3 Reasons Why
Encryption is Overrated".


http://dev.cleversafe.org/weblog/?p=63  # 3 Reasons Why Encryption is Overrated
http://dev.cleversafe.org/weblog/?p=95  # Response Part 1: Future Processing Power
http://dev.cleversafe.org/weblog/?p=111 # Response Part 2: Complexities of Key Management
http://dev.cleversafe.org/weblog/?p=178 # Response Part 3: Disclosure Laws


It begins like this:


When it comes to storage and security, discussions traditionally center on
encryption.  The reason encryption – or the use of a complex algorithm to
encode information – is accepted as a best practice rests on the premise
that while it’s possible to crack encrypted information, most malicious
hackers don’t have access to the amount of computer processing power they
would need to decrypt information.

But not so fast.  Let’s take a look at three reasons why encryption is
overrated.



Ugh.

The first claim -- that today's encryption is vulnerable to tomorrow's
processing power -- is a common goof, which is easy to make by conflating
historical failures of cryptosystems due to having too small a crypto value
with failures due to weak algorithms.  Examples of the former are DES, which
failed because its 56-bit key was small enough to fall to brute force, and
the bizarre 40-bit security policies of the U.S. Federal Government in the
'90s.  An example of the latter is SHA1, whose hash output size is *not*
small enough to brute-force, but which is insecure because, as it turns out,
the SHA1 algorithm allows the generation of colliding inputs much more
quickly than a brute-force search would.
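The arithmetic behind the key-size half of this distinction is easy to sketch; the search rate below is an assumed figure, not a claim about any real attacker:

```python
# Worked numbers behind the key-size point.  Deep Crack (1998) searched
# DES's 2^56 keyspace in days; no plausible rate makes 2^128 searchable.

KEYS_PER_SECOND = 1e12          # assumed: a trillion keys/s, a generous rate
SECONDS_PER_YEAR = 31_557_600   # Julian year

def years_to_search(bits, rate=KEYS_PER_SECOND):
    # Expected work is half the keyspace on average.
    return (2 ** bits / 2) / rate / SECONDS_PER_YEAR

print(f"56-bit:  {years_to_search(56):.2e} years")   # hours of work
print(f"128-bit: {years_to_search(128):.2e} years")  # billions of ages of the universe
print(f"256-bit: {years_to_search(256):.2e} years")
```

The gap between 56 and 128 bits is a factor of 2^72, which is why "tomorrow's processing power" arguments that were valid against DES say nothing about modern key sizes; SHA1's trouble, by contrast, is algorithmic, not a matter of scale.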


Oh boy, I see that in the discussion following the article "Future
Processing Power", the author writes:



I don’t think symmetric ciphers such as AES-256 are under any threat of
being at risk to brute force attacks any time this century.



What?  Then why is he spreading this Fear, Uncertainty, and Doubt?  Oh, and
then it gets *really* interesting: it turns out that Cleversafe uses AES-256
in an All-or-Nothing Transform as part of their Information Dispersal
algorithm.  Okay, I would like to understand better the cryptographic
effects of that (and in particular, whether this means that the Cleversafe
architecture is just as susceptible to AES-256 failing as an encryption
scheme such as is used in the Tahoe-LAFS architecture).
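For readers unfamiliar with the construction, here is a toy sketch in the spirit of Rivest's package transform, the classic all-or-nothing transform, using SHA-256 as a stand-in for the block cipher.  This is purely illustrative and presumably differs from what Cleversafe actually ships:

```python
import hashlib
from secrets import token_bytes

B = 32  # block size, matching the SHA-256-based stand-in PRF below

def prf(key: bytes, i: int) -> bytes:
    # Stand-in for E(key, i); the real transform uses a block cipher.
    return hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def aont_package(blocks):
    """Toy all-or-nothing packaging: no single output block reveals
    anything, because the random masking key k0 can be recovered only
    from *all* of the output blocks together."""
    k0 = token_bytes(B)
    out = [bytes(a ^ b for a, b in zip(m, prf(k0, i)))
           for i, m in enumerate(blocks)]
    mask = k0
    for i, c in enumerate(out):
        mask = bytes(a ^ b for a, b in zip(mask, prf(c, i)))
    return out + [mask]   # final block is k0 XORed with hashes of all c_i

def aont_unpackage(packed):
    *cs, mask = packed
    k0 = mask
    for i, c in enumerate(cs):          # XOR is self-inverse: recover k0
        k0 = bytes(a ^ b for a, b in zip(k0, prf(c, i)))
    return [bytes(a ^ b for a, b in zip(c, prf(k0, i)))
            for i, c in enumerate(cs)]
```

The security-relevant property for the dispersal question above: if the underlying cipher/PRF fails, the all-or-nothing guarantee fails with it, which is why the transform does not obviously insulate an architecture from an AES break.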


But, it is time for me to stop reading about cryptography and get ready to
go to work.  :-)


Regards

Zooko
---
Tahoe, the Least-Authority Filesystem -- http://allmydata.org
store your data: $10/month -- http://allmydata.com/?tracking=zsig
I am available for work -- http://zooko.com/résumé.html