Re: [Cryptography] Sha3

2013-10-05 Thread james hughes

On Oct 3, 2013, at 9:27 PM, David Johnston d...@deadhat.com wrote:

 On 10/1/2013 2:34 AM, Ray Dillinger wrote:
 What I don't understand here is why the process of selecting a standard 
 algorithm for cryptographic primitives is so highly focused on speed. ~
 
 What makes you think Keccak is faster than the alternatives that were not 
 selected? My implementations suggest otherwise.
 I thought the main motivation for selecting Keccak was Sponge good.

I agree: Sponge Good, Merkle–Damgård Bad. Simple enough. 

I believe this thread is not about the choice of Keccak for SHA-3; it is about 
NIST's changes to Keccak for SHA-3. 

[Instead of pontificating at length based on conjecture and conspiracy theories, 
and smearing reputations based on nothing but hot air:] Someone on this 
list must know the authors of Keccak. Why not ask them? They are the ones who 
know the most about the algorithm, why the parameters are what they are, and 
what the changes mean for their vision. 

Here is my question for them: In light of the current situation, what is your 
opinion of the changes between Keccak as you specified it and SHA-3 as NIST 
standardized it? 

If the Keccak authors are OK with the changes, who are we to argue about these 
changes? 

If the Keccak authors don't like the changes then, given the situation NIST is 
in, I bet NIST will have no recourse but to re-open the SHA-3 discussion.

Jim

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Sha3

2013-10-05 Thread james hughes
On Oct 5, 2013, at 12:00 PM, John Kelsey crypto@gmail.com wrote:

 http://keccak.noekeon.org/yes_this_is_keccak.html

From the authors: "NIST's current proposal for SHA-3 is a subset of the Keccak 
family; one can generate the test vectors for that proposal using the Keccak 
reference code," and this shows that [SHA-3] "cannot contain internal changes 
to the algorithm."

The process of setting the parameters is an important step in standardization. 
NIST has done this and the authors state that this has not crippled the 
algorithm. 

I bet this revelation does not make it to Slashdot… 

Can we put this to bed now? 


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-05 Thread james hughes

On Oct 2, 2013, at 7:46 AM, John Kelsey crypto@gmail.com wrote:

 Has anyone tried to systematically look at what has led to previous crypto 
 failures?

In the case we are in now, I don't think these are actually crypto failures 
(RSA is still secure, but 1024-bit keys are not; 2048-bit DHE is still secure, 
but no one uses it; AES is secure, but not with an insecure key exchange) but 
standards failures. These protocol and/or implementation failures happen either 
because the standards committee said to the cryptographers "prove it" (the case 
of WEP), because even when an algorithm is dead they refuse to deprecate it 
(the MD5 certificate mess), or because of bad RNGs (too many examples to cite). 

The antibodies in the standards committees need to read this and think about it 
really hard. 

 (1)  Overdesign against cryptanalysis (have lots of rounds)
 (2)  Overdesign in security parameters (support only high security levels, 
 use bigger than required RSA keys, etc.) 
 (3)  Don't accept anything without a proof reducing the security of the whole 
 thing down to something overdesigned in the sense of (1) or (2).

and (4) Assume algorithms fall faster than Moore's law and, in the standard, 
provide a sunset date.

I completely agree. 


&lt;rhetoric&gt;
The insane thing is that it is NOT the cryppies that are complaining about 
moving to RSA 2048 and 2048 bit DHE, it is the standards wonks that complain 
that a 3ms key exchange is excessive. 

Who is the CSO of the Internet? We have Vint Cerf, Bob Kahn, and Sir Tim, but 
what about security? Who is responsible for the security of eCommerce? Who will 
VISA turn to? It was NIST (effectively). Thank you, NSA: because of you, NIST 
has now lost most of its credibility. (Secrets are necessary, but many come to 
light over time. Was the probability of throwing NIST under the bus 
[http://en.wikipedia.org/wiki/Throw_under_the_bus] part of the calculation? Did 
the NSA consider backing down when the Shumow and Ferguson presentation 
(which Schneier blogged about) came to light in 2007?)  

We have a mess. Who is going to lead? Can the current IETF Security Area step 
into the void? They have cryptographers on the Directorate list, but history 
has shown that they are not incredibly effective at implementing a 
cryptographic vision. One can easily argue that vision is rarely provided by an 
oversight committee. 
&lt;/rhetoric&gt;


John: Thank you. These are absolutely the right criteria. 

Now what? 

Jim



Re: [Cryptography] Gilmore response to NSA mathematician's make rules for NSA appeal

2013-09-28 Thread james hughes
http://www.nytimes.com/2013/09/27/opinion/have-a-nice-day-nsa.html

On Sep 25, 2013, at 3:14 PM, John Kelsey crypto@gmail.com wrote:

 Right now, there is a lot of interest in finding ways to avoid NSA 
 surveillance.  In particular, Germans and Brazilians and Koreans would 
 presumably rather not have their data made freely available to the US 
 government under what appear to be no restrictions at all.  If US companies 
 would like to keep the business of Germans and Brazilians and Koreans, they 
 probably need to work out a way to convincingly show that they will safeguard 
 that data even from the US government. 

I think we are in agreement, but I am focused on what this list -can- do and 
-can-not- do.

All the large banks have huge systems and processes that protect the privacy of 
their customers. It works most of the time, but no large bank can say they will 
never have an employee go bad. 

My point is that this thread was moving toward the statement that citizens of 
country X should use service providers that eliminate the need for trust. 
Because of subpoenas and collaboration, that need cannot be eliminated, no 
matter what country the service provider is in or who the third parties are. In 
essence, this is a tautology that has nothing to do with cryptography. Even if 
a service provider could convince you that they _can't_ betray you, it would be 
either naiveté or simply marketing. 

The only real way to eliminate the need for trust in any service provider 
of any kind, in any country (yours or some other country), is to not use them. 

The one problem that this list (cryptography@metzdowd.com) -can- focus on is 
that the bar has been set so low that governments are able to break a few keys 
and gain access to a lot of information. This is the violation of trust in the 
internet that, in part, has been enabled by weak cryptographic standards (short 
keys, non-ephemeral keys, subverted algorithms, etc.). I am not certain that 
Google could have done anything differently. Stated differently, Google (and 
all the world's internet service providers) are collateral damage.

The thing that this list can affect is the creation of standards with a 
healthy respect for Moore's law and advances in mathematical understanding. 
Stated differently, "just enough security" is the problem. That past attitude 
did not respect the very probable future that became a reality. 

Are we going to continue this behavior? IMHO, based on what I have been seeing 
on the TLS list, probably. 

Jim


Re: [Cryptography] Gilmore response to NSA mathematician's make rules for NSA appeal

2013-09-25 Thread james hughes
I have made this longer only because I have not had the leisure to make it 
shorter.

On Sep 23, 2013, at 12:45 PM, John Kelsey crypto@gmail.com wrote:
 On Sep 18, 2013, at 3:27 PM, Kent Borg kentb...@borg.org wrote:
 
 You foreigners actually have a really big vote here.  
 
 It needs to be in their business interest to convince you that they *can't* 
 betray you in most ways.  


Many, if not all, service providers can give the government valuable 
information about their customers. This is not limited to internet service 
providers; it includes banks, health care providers, insurance companies, 
airlines, hotels, local coffee shops, booksellers, etc.: anywhere providing a 
service results in personal information being exchanged. The US has no corner 
on the ability to get information from almost any type of service provider. 
This is the system the entire world uses, and it should not be our focus.

This conversation should be about the ability of honest companies to 
communicate securely with their customers. Stated differently, it is valuable 
that these service providers know what information they have given to the 
government. Google is taking steps to be transparent. What Google cannot say is 
anything about the traffic that was possibly decrypted without Google's 
knowledge.

Many years ago (1995?), I personally went to a Swiss bank very well known for 
its high levels of security and its requirement that -all- data leaving its 
datacenter, in any form (including storage), must be encrypted. I asked the 
chief information security officer of the bank if he would consider using 
Clipper-enabled devices -if- the keys were escrowed by the Swiss government. 
His answer was both unexpected and still echoes with me today. He said, "We 
have auditors crawling all over the place. All the government has to do is 
[legally] ask and they will be given what they ask for. There is absolutely no 
reason for the government to access our network traffic without our knowledge." 
We ultimately declined to implement Clipper.

Service providers are, and will always be, required to respond to legal 
warrants. A company complying with a warrant knows what they provided. They can 
fight the warrants, they can lobby their government, they can participate in 
the discussion (even if that participation takes place behind closed doors). 

The real challenge facing us at the moment is to restore confidence in the 
ability of customers to privately communicate with their service providers and 
for service providers to know the full extent of the information they are 
providing governments. 




Re: [Cryptography] What TLS ciphersuites are still OK?

2013-09-10 Thread james hughes


On Sep 9, 2013, at 9:10 PM, Tony Arcieri basc...@gmail.com wrote:

 On Mon, Sep 9, 2013 at 9:29 AM, Ben Laurie b...@links.org wrote:
 And the brief summary is: there's only one ciphersuite left that's good, and 
 unfortunately its only available in TLS 1.2:
 
 TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
 
 A lot of people don't like GCM either ;) 

Yes, GCM does have implementation sensitivities, particularly around IV 
generation. That being said, the algorithm is better than most, and the 
implementation sensitivity is obvious (don't ever reuse an IV).
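To make the IV-reuse sensitivity concrete, here is a toy illustration (Python, 
standard library only; the keystream is a SHA-256 stand-in for GCM's AES-CTR 
core, NOT real AES-GCM): reusing a nonce under the same key makes the two 
keystreams cancel, handing an eavesdropper the XOR of the plaintexts.

```python
import hashlib

def toy_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy CTR-style keystream: SHA-256(key || nonce || counter) blocks.
    A stand-in for GCM's AES-CTR core; NOT real AES-GCM."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def toy_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = toy_keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"0123456789abcdef"
nonce = b"fixed-nonce!"            # the mistake: the same nonce used twice
p1 = b"attack at dawn!!"
p2 = b"retreat at dusk!"
c1 = toy_encrypt(key, nonce, p1)
c2 = toy_encrypt(key, nonce, p2)

# With a reused nonce the identical keystreams cancel:
# C1 xor C2 == P1 xor P2, so an eavesdropper learns the XOR of the
# two plaintexts without ever touching the key.
xor_c = bytes(a ^ b for a, b in zip(c1, c2))
xor_p = bytes(a ^ b for a, b in zip(p1, p2))
assert xor_c == xor_p
```

With a fresh nonce per message the cancellation disappears, which is exactly 
why GCM requires a unique IV per key.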

Re: [Cryptography] [TLS] New Version Notification for draft-sheffer-tls-bcp-00.txt

2013-09-10 Thread james hughes
On Sep 9, 2013, at 7:30 PM, Michael Ströder michael at stroeder.com wrote:
 
  Peter Gutmann wrote:
 
  Do you have numbers about the relative and absolute performance impact?
  Personally I don't see performance problems but I can't prove my position 
  with
  numbers.
 
 MBA-2:tmp synp$ openssl speed dsa1024 dsa2048
[…]
                    sign     verify    sign/s  verify/s
 dsa 1024 bits  0.000445s  0.000515s   2247.6    1941.8
 dsa 2048 bits  0.001416s  0.001733s    706.4     577.2

We are arguing about a key exchange that goes from ~1ms to ~3ms (while the 
answer to "can it be cracked?" goes from yes to no). Yes, this is more, but it 
is absolutely not a problem for PCs, or even for phones or tablets, especially 
in light of session keep-alive and other techniques that allow a key exchange 
to last a while. 

Is the complaint that the server load is too high? 
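As a back-of-the-envelope sketch using the numbers quoted above (the figures 
are DSA; I am assuming RSA/DHE private-key operations at the same sizes are in 
the same general ballpark on similar hardware), even at 2048 bits a single core 
still handles hundreds of handshake signatures per second:

```python
# Per-core handshake-signature capacity from the quoted openssl numbers.
sign_time_1024 = 0.000445   # seconds per 1024-bit sign, from the quote
sign_time_2048 = 0.001416   # seconds per 2048-bit sign, from the quote

per_core_1024 = 1.0 / sign_time_1024   # signatures (handshakes) per second
per_core_2048 = 1.0 / sign_time_2048
slowdown = sign_time_2048 / sign_time_1024

print(f"1024-bit: {per_core_1024:.0f}/s per core")                    # ~2247/s
print(f"2048-bit: {per_core_2048:.0f}/s per core ({slowdown:.1f}x)")  # ~706/s, ~3.2x
```

A ~3x slowdown that still leaves ~700 handshakes per second per core is hard to 
call a server-load crisis.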

Lastly, going only a partial step seems strange. Why do we want to put 
ourselves through this again so soon? The French government suggests 2048 bits 
now (for both RSA and DHE), and that will only last until 2020. From 
http://www.ssi.gouv.fr/IMG/pdf/RGS_B_1.pdf

 La taille minimale du module est de 2048 bits, pour une utilisation ne devant 
 pas dépasser l'année 2020.
The minimum size of the modulus is 2048 bits, for use not extending beyond the 
year 2020.

 Pour une utilisation au-delà de 2020, la taille minimale du module est de 
 4096 bits.
For use beyond 2020, the minimum modulus size is 4096 bits.


Pardon the bad cut/paste and Google Translate, but I believe you get the point. 


Re: [Cryptography] What TLS ciphersuites are still OK?

2013-09-09 Thread james hughes

On Sep 9, 2013, at 9:29 AM, Ben Laurie b...@links.org wrote:

 Perry asked me to summarise the status of TLS a while back ... luckily I 
 don't have to because someone else has:
 
 http://tools.ietf.org/html/draft-sheffer-tls-bcp-00
 
 In short, I agree with that draft. And the brief summary is: there's only one 
 ciphersuite left that's good, and unfortunately its only available in TLS 1.2:
 
 TLS_DHE_RSA_WITH_AES_128_GCM_SHA256

+1 

I have read the document and it does not mention key lengths. I would suggest 
that 2048 bits is large enough for the next ~5? years or so: 2048 bits for both 
D-H and RSA. How are the key lengths specified? 



Re: [Cryptography] What TLS ciphersuites are still OK?

2013-09-09 Thread james hughes

On Sep 9, 2013, at 2:49 PM, Stephen Farrell stephen.farr...@cs.tcd.ie wrote:

 On 09/09/2013 05:29 PM, Ben Laurie wrote:
 Perry asked me to summarise the status of TLS a while back ... luckily I
 don't have to because someone else has:
 
 http://tools.ietf.org/html/draft-sheffer-tls-bcp-00
 
 In short, I agree with that draft. And the brief summary is: there's only
 one ciphersuite left that's good, and unfortunately its only available in
 TLS 1.2:
 
 TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
 
 I don't agree the draft says that at all. It recommends using
 the above ciphersuite. (Which seems like a good recommendation
 to me.) It does not say anything much, good or bad, about any
 other ciphersuite.
 
 Claiming that all the rest are no good also seems overblown, if
 that's what you meant.


I retract my previous +1 for this ciphersuite. This is hard-coded 1024-bit DHE 
and 1024-bit RSA. 

From 
http://en.wikipedia.org/wiki/Key_size
 As of 2003 RSA Security claims that 1024-bit RSA keys are equivalent in 
 strength to 80-bit symmetric keys

80-bit strength. Hard-coded key sizes. Nice. 

AES-128 with a key exchange at 80-bit strength. What's a factor of 2^48 among 
friends… 

Additionally, as predicted in 2003… 
 1024-bit keys are likely to become crackable some time between 2006 and 2010 
 and that
 2048-bit keys are sufficient until 2030.
 3072 bits should be used if security is required beyond 2030

They were off by 3 years.
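For anyone who wants to reproduce the equivalence, here is a rough sketch (my 
own approximation based on the general number field sieve running-time 
estimate, not an official NIST table); it lands a few bits above the 
conventional 80/112/128 figures but shows the same trend:

```python
import math

def gnfs_equiv_bits(modulus_bits: int) -> float:
    """Rough symmetric-equivalent strength of an RSA/DH modulus, from the
    GNFS running-time estimate
        L = exp(c * (ln N)^(1/3) * (ln ln N)^(2/3)),  c = (64/9)^(1/3).
    An approximation for illustration, not an official NIST table."""
    ln_n = modulus_bits * math.log(2)
    c = (64.0 / 9.0) ** (1.0 / 3.0)
    return c * ln_n ** (1.0 / 3.0) * math.log(ln_n) ** (2.0 / 3.0) / math.log(2)

for bits in (1024, 2048, 3072):
    # prints roughly 87, 117, and 139 bits respectively
    print(f"{bits}-bit modulus ~ {gnfs_equiv_bits(bits):.0f}-bit symmetric")
```

The gap between consecutive rows is ~20-plus bits of symmetric strength, which 
is why the jump from 1024 to 2048 matters so much.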

What now? 

Re: [Cryptography] Bruce Schneier has gotten seriously spooked

2013-09-08 Thread james hughes


On Sep 7, 2013, at 6:30 PM, James A. Donald jam...@echeque.com wrote:

 On 2013-09-08 4:36 AM, Ray Dillinger wrote:
 
 But are the standard ECC curves really secure? Schneier sounds like he's got
 some innovative math in his next paper if he thinks he can show that they
 aren't.
 
 Schneier cannot show that they are trapdoored, because he does not know where 
 the magic numbers come from.
 
 To know if trapdoored, have to know where those magic numbers come from.

That will not work.

When the community questioned the source of the DES S-boxes, Don Coppersmith 
and Walt Tuchman, of IBM at the time, openly discussed how they were generated, 
and it still did not quell the suspicion. I bet there are many who still 
believe DES has a yet-to-be-determined backdoor. 

There is no way to prove the absence of a back door; one can only prove or 
argue that a backdoor exists given (at least) a demonstration or evidence that 
one is being used. Was there any hint in the purloined material to this point? 
There seems to be the opposite. TLS using ECC is not common on the Internet 
(see "Ron was wrong, Whit is right"). If there is a vulnerability in ECC, it is 
not the source of today's consternation. (ECC is common in SSH; see "Mining 
Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices".)

I will be looking forward to Bruce's next paper.


Re: [Cryptography] In the face of cooperative end-points, PFS doesn't help

2013-09-08 Thread james hughes


On Sep 7, 2013, at 8:16 PM, Marcus D. Leech mle...@ripnet.com wrote:

 But it's not entirely clear to me that it will help enough in the scenarios 
 under discussion.  If we assume that mostly what NSA are doing is acquiring a 
 site RSA key (either through donation on the part of the site, or through 
 factoring or other means), then yes, absolutely, PFS will be a significant 
 roadblock.  If, however, they're getting session-key material (perhaps through 
 back-doored software, rather than explicit cooperation by the target website), 
 then PFS does nothing to help us.  And indeed, that same class of compromised 
 site could just as well be leaking plaintext.  Although leaking session keys 
 is lower-profile.

I think we are growing closer to agreement: PFS does help; the question is how 
much in the face of cooperation. 

Let me suggest the following. 

With RSA, a single quiet donation of the key by the site and it is done. The 
situation becomes totally passive, and there is no possibility of knowing what 
has been read. The system administrator could even do this without the 
executives knowing. 

With PFS there is a significantly higher-profile interaction with the site: 
either the session keys need to be transmitted in bulk, or the RNG must be 
cribbed. Both of these have a higher profile, a higher possibility of 
detection, and increased difficulty to execute properly. Certainly a more risky 
thing for a cooperating site to do. 

PFS does improve the situation even if cooperation is suspected. IMHO it is 
just better cryptography. Why not? 

It's better. It's already in the suites. All we have to do is use it... 

I am honestly curious: what is the motivation not to choose more secure modes 
that are already in the suites?





Re: [Cryptography] In the face of cooperative end-points, PFS doesn't help

2013-09-08 Thread james hughes


On Sep 8, 2013, at 1:47 PM, Jerry Leichter leich...@lrw.com wrote:

 On Sep 8, 2013, at 3:51 PM, Perry E. Metzger wrote:
 
 In summary, it would appear that the most viable solution is to make
 the end-to-end encryption endpoint a piece of hardware the user owns
 (say the oft mentioned $50 Raspberry Pi class machine on their home
 net) and let the user interact with it over an encrypted connection
 (say running a normal protocol like Jabber client to server
 protocol over TLS, or IMAP over TLS, or https: and a web client.)
 
 It is a compromise, but one that fits with the usage pattern almost
 everyone has gotten used to. It cannot be done with the existing
 cloud model, though -- the user needs to own the box or we can't
 simultaneously maintain current protocols (and thus current clients)
 and current usage patterns.

 I don't see how it's possible to make any real progress within the existing 
 cloud model, so I'm with you 100% here.  (I've said the same earlier.)

Could cloud computing be a red herring? Banks and phone companies all give up 
personal information to governments (Verizon?), and they were doing so long 
before cloud computing was a fad and will keep doing so long after. Transport 
encryption (regardless of its security) is no solution either. 

The fact is, to do business, education, health care, you need to share 
sensitive information. There is no technical solution to this problem. Shared 
data is shared data. This is arguably the same as the analogue gap between 
content protected media and your eyes and ears. Encryption is not a solution 
when the data needs to be shared with the other party in the clear. 

I knew a guy once who quipped, "link encryptors are iron pipes rats run 
through." 

If compromised endpoints are your threat model, cloud computing is not your 
problem. 

The only solution is the Ted Kaczynski technology-rejection principle (as long 
as you also kill your brother).




Re: [Cryptography] In the face of cooperative end-points, PFS doesn't help

2013-09-07 Thread james hughes

On Sep 7, 2013, at 1:50 PM, Peter Fairbrother zenadsl6...@zen.co.uk wrote:

 On 07/09/13 02:49, Marcus D. Leech wrote:
 It seems to me that while PFS is an excellent back-stop against NSA
 having/deriving a website RSA key, it does *nothing* to prevent the kind of
   cooperative endpoint scenario that I've seen discussed in other
 forums, prompted by the latest revelations about what NSA has been up to.
 
 True.
 
 But does it matter much? A cooperative endpoint can give plaintext no matter 
 what encryption is used, not just session keys.

+1. 

Cooperative endpoints defeat any cryptography because they have all the 
plaintext. One can argue that subpoenas are just as effective as cooperative 
endpoints; the reductio ad absurdum is that PFS is not good enough in the face 
of subpoenas. I don't think cooperative endpoints are a relevant point. 

Passive monitoring and accumulation of cyphertext is a good SIGINT strategy. 
Read about the VENONA project. 
http://en.wikipedia.org/wiki/Venona_project
 Most decipherable messages were transmitted and intercepted between 1942 and 
 1945. […] These messages were slowly and gradually decrypted beginning in 
 1946 and continuing […] through 1980,

Clearly, the traffic was accumulated during a period in which there was no 
known attack.

While OTP reuse is not the fault here, PFS makes recovering information through 
future key recovery harder, since a single key being recovered, by whatever 
means, does not make old traffic more vulnerable. 

This is not a new idea; the separation of key exchange from authentication 
allows it. A router I did the cryptography for (first produced by Network 
Systems Corporation in 1994) was very careful not to allow any old (i.e., 
recorded) traffic to become vulnerable even if one or both endpoints were 
stolen and all the key material extracted. The router used DH (both sides 
ephemeral) for the key exchange and RSA for authentication and integrity. This 
work actually predates IPsec and is still being used.

http://www.blueridge.com/index.php/products/borderguard/borderguard-overview
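The "both sides ephemeral" exchange is simple enough to sketch. The parameters 
below (p = 2**127 - 1, a Mersenne prime, and g = 3) are toy values of my own 
choosing, far too small for real use; real deployments use 2048-bit+ MODP 
groups or elliptic curves, but the algebra is identical:

```python
import secrets

# Toy ephemeral Diffie-Hellman.  Illustration-only parameters:
P = 2**127 - 1
G = 3

def dh_session() -> int:
    """One forward-secret session: fresh exponents, agree, then discard."""
    a = secrets.randbelow(P - 2) + 1      # Alice's ephemeral secret
    b = secrets.randbelow(P - 2) + 1      # Bob's ephemeral secret
    A = pow(G, a, P)                      # public value, sent over the wire
    B = pow(G, b, P)                      # public value, sent over the wire
    shared_alice = pow(B, a, P)           # (g^b)^a mod p
    shared_bob = pow(A, b, P)             # (g^a)^b mod p
    assert shared_alice == shared_bob     # both sides agree on the secret
    return shared_alice                   # a and b are discarded here

# Every session derives an independent secret; stealing a long-term
# authentication key later reveals nothing about past sessions.
s1, s2 = dh_session(), dh_session()
assert s1 != s2
```

Because the exponents never leave the session, recorded ciphertext stays safe 
even if the long-term (RSA authentication) keys are later extracted.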

I gather from the list that there have been, or are, arguments that doing two 
public-key operations is too much. Is it really? 

PFS may not be a panacea but does help.



[Cryptography] Fwd: NYTimes.com: N.S.A. Foils Much Internet Encryption

2013-09-05 Thread james hughes
The following is from a similar list in Europe. I think this echoes much of 
what has been said on this list but has an interesting twist about PFS cipher 
suites.

Begin forwarded message:
 
 From: Paterson, Kenny [kenny.pater...@rhul.ac.uk]
 Sent: Friday, September 06, 2013 12:03 AM
 To: Christof Paar; ecrypt2-...@esat.kuleuven.be
 Subject: Re: NYTimes.com: N.S.A. Foils Much Internet Encryption
 
 Christof,
 
 Thanks for sharing this link.
 
 What seems likely, reading between the lines of this article, is that
 NSA/GCHQ have access, by a variety of means, to RSA private keys for
 popular websites, enabling them to (at will) recover SSL/TLS session keys.
 This can be done offline for stored traffic or online as packets pass by
 on the network. I stress that the article does not say this directly.
 
 One solution, preventing passive attacks, is for major browsers and
 websites to switch to using PFS ciphersuites (i.e. those based on
 ephemeral Diffie-Hellmann key exchange). For statistics on current
 adoption of such ciphersuites, see:
 
 http://news.netcraft.com/archives/2013/06/25/ssl-intercepted-today-decrypted-tomorrow.html
 
 
 Regards
 
 Kenny


Re: [Cryptography] FIPS, NIST and ITAR questions

2013-09-03 Thread james hughes
"Hashes aren't ITAR covered" is a fact…  from "Revised U.S. Encryption Export 
Control Regulations", January 2000, at
http://epic.org/crypto/export_controls/regs_1_00.html

 3. It was not the intent of the new Wassenaar language for ECCN 5A002 to be 
 more restrictive concerning Message Authentication Codes (MAC). Data 
 authentication equipment that calculates a Message Authentication Code (MAC) 
 or similar result to ensure no alteration of text has taken place, or to 
 authenticate users, but does not allow for encryption of data, text or other 
 media other than that needed for the authentication continues to be excluded 
 from control under 5A002. These commodities are controlled under ECCN 5A992.


Further, ECCN 5A992 is separated from high-functioning encryption as 
follows. From 

http://www.governmentcontractslawblog.com/2008/11/articles/export-controls/encryption-export-restrictions-loosened-under-new-rules-that-reduce-prereview-and-reporting-requirements/

 Under the EAR, encryption items, which includes software, technology, and 
 hardware incorporating encryption technology, generally fall into two 
 categories:
 
 Ø  Export Commodity Classification Number (ECCN) 5A002/5D002, for 
 certain enumerated, high-functioning encryption products and software; and
 
 Ø  ECCN 5A992/5D992, for all other encryption items. 
 
 Generally speaking, 5A992/5D992 products can be shipped without delay 
 anywhere in the world (except for Cuba, Iran, North Korea, Sudan, and Syria) 
 as No License Required (NLR). 


Clear (as mud)?




On Sep 3, 2013, at 12:21 PM, radi...@gmail.com wrote:

 Ok, I dug around my email archives to see what the heck to google, and 
 answered my own question regarding ITAR and NIST defined Suite B implementing 
 software. 
 
 Here it goes
 From http://www.nsa.gov/ia/programs/suiteb_cryptography/
 ...Says, effectively, that products that 'are configure to USE Suite B or 
 technical documentation concerning the configuration of such products' are 
 not subject to ITAR. The bis.doc.gov site listing requirements under ITAR for 
 US Persons is, inconveniently, down for maintenance.
 
 However, digging around in my document backup archives (insomnia provided the 
 time for it...hours) and email unearthed the notification addresses required 
 for ALL US based open-source Suite B implementations.
 Yes, this is silly. No, they don't NORMALLY go after anyone for breaking the 
 law for a NIST defined hash/digest/crypto algorithm.
 
 But if the USG decides they don't like you (political views, activism, etc), 
 that silly regulation can cost you years in prison. The legal term of art is 
 'selective prosecution'.
 
 The relevant email addresses are:
 cr...@nsa.gov e...@nsa.gov and web_s...@bis.doc.gov
 
 Required format and fields are:
 Subject: TSU NOTIFICATION - Encryption
 Message body:
 SUBMISSION TYPE: TSU
 SUBMITTED BY: author or corporate contacts full legal name
 SUBMITTED FOR: full legal names of all authors and corporate name if 
 applicable
 POINT OF CONTACT: full legal name of POC for compliance purposes
 PHONE and/or FAX: 10 digit number for either
 PRODUCT NAME/MODEL #: product/program name and model/version
 ECCN: 5D002 for FIPS-180 hash functions, google cache for others, BIS site 
 currently down, lovely
 blank line
 NOTIFICATION: download URL(s) for source file(s)
 
 There ya go. Hashes aren't ITAR covered is unfortunately 'Net Mythology. 
 Silly as hell I admit. If the above helps any other US Persons put a fig leaf 
 on themselves, that'd be great.
 
 Cheers,
 
 David Mercer
 
 David Mercer
 Portland, OR



Re: Encryption and authentication modes

2010-07-14 Thread james hughes
On Jul 14, 2010, at 1:52 AM, Florian Weimer wrote:

 What's the current state of affairs regarding combined encryption and
 authentication modes?
 
 I've implemented draft-mcgrew-aead-aes-cbc-hmac-sha1-01 (I think, I
 couldn't find test vectors), but I later came across CCM and EAX.  CCM
 has the advantage of being NIST-reviewed.  EAX can do streaming (but
 that's less useful when doing authentication).  Neither seems to be
 widely implemented.  But both offer a considerable reduction in
 per-message overhead when compared to the HMAC-SHA1/AES combination.
 
 Are there any other alternatives to consider?  

If there is no room for an integrity field, you can look at XTS-AES.
http://csrc.nist.gov/publications/nistpubs/800-38E/nist-sp-800-38E.pdf

 Are there any traps I should be aware of when implementing CCM?

CCM is a counter-mode cipher, so don't reuse the counter (with any 
non-negligible probability).

Jim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: [Not] Against Rekeying

2010-03-25 Thread james hughes
On Tue, Mar 23, 2010 at 11:21:01AM -0400, Perry E. Metzger wrote:
 Ekr has an interesting blog post up on the question of whether protocol
 support for periodic rekeying is a good or a bad thing:
 
 http://www.educatedguesswork.org/2010/03/against_rekeying.html

On Mar 23, 2010, at 4:23 PM, Adam Back wrote:

 In anon-ip (a zero-knowledge systems internal project) and cebolla [1]
 we provided forward-secrecy (aka backward security) using symmetric
 re-keying (key replaced by hash of previous key).  (Backward and
 forward security as defined by Ross Anderson in [2]).

The paper on Cebolla [4] states that "Trust in symmetric keys diminishes the 
longer they are used in the wild. Key rotation, or re-keying, must be done at 
regular intervals to lessen the success attackers can have at cryptanalyzing 
the keys." This is exactly the kind of justification that the Ekr post and most 
of the comments agree is flawed.

It goes on to state what was said about new keys being derived from old keys.

[4] http://www.cypherspace.org/cebolla/cebolla.pdf


Hmm. Interesting. Learn one key, have them all for the future. Wow. Yes, that 
is Ross' definition of backward security, and it clearly does not meet Ross' 
definition of forward security. In reading the paper, it seems this system is: 
crack one key, you're in forever. A government's dream for an anonymity 
service. Ross' definitions of backward and forward security make sense from a 
terminology point of view, but IMHO without both, a system is not secure.

Sure one can talk about attack scenarios, and that just proves the tautology 
that we don't know what we don't know (or don't know what has not been invented 
yet). There is no excuse for bad crypto hygiene. I don't know why someone would
build a system with K_i+1 = h(K_i) when there are so many good algorithms out
there.
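A minimal sketch of that scheme (my own toy, not Cebolla's code) shows the
asymmetry: an attacker who learns any one key can recompute every later key,
so there is no forward security in Ross Anderson's sense, while going
backward requires inverting the hash.

```python
# K_{i+1} = h(K_i): backward security only. Stealing one key yields
# every subsequent key in the chain.
import hashlib

def next_key(k):
    return hashlib.sha256(k).digest()

chain = [b"\x01" * 32]                  # K_0
for _ in range(4):
    chain.append(next_key(chain[-1]))   # K_1 .. K_4

stolen = chain[2]                       # attacker learns K_2 somehow...
derived = [stolen]
for _ in range(2):
    derived.append(next_key(derived[-1]))
assert derived == chain[2:]             # ...and recomputes K_3 and K_4
```

Recovering K_1 from K_2 would require a preimage of SHA-256, which is why the
construction gives backward but not forward security.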


 But we did not try to do forward security in the sense of trying to
 recover security in the event someone temporarily gained keys.  If
 someone has compromised your system badly enough that they can read
 keys, they can install a backdoor.

I agree with the Ekr posting, but not the characterization above. The Ekr
posting says [rekey for] "damage limitation ... if a key is disclosed" and
that "this isn't totally crazy." The statement of fact is that if a key is
compromised, a rekey limits the scope of the compromise.

The Ekr posting said nothing about how the key was disclosed. Yes, if you have 
root on the machine and have mounted an active attack, all bets are off, but 
there are other ways for key disclosure to happen (as was discussed in the Ekr 
posting).

For example a cold boot attack [3] can be used to recover a communications 
session key (instead of a disk key). If that key has been used for a 
particularly long time, and if one assumed that the attacker had the 
opportunity to  record all the ciphertext, then one must expect that all of 
that information can now be read.

[3] http://en.wikipedia.org/wiki/Cold_boot_attack

[The comments about breaking keys is deleted. I agree with the original posting 
and everyone else that changing a key OF A MODERN CIPHER to eliminate algorithm 
weaknesses is not a valuable reason to rekey.]

On Mar 23, 2010, at 2:42 PM, Jon Callas wrote:

 If you need to rekey, tear down the SSL connection and make a new one. There 
 should be a higher level construct in the application that abstracts the two 
 connections into one session.

I agree (but as a nit, we can reverse the order. Create a completely new 
session and just move the traffic to the new connection.)

Limiting the scope of a key compromise is the only justification I can see for 
rekey. That said, limiting the scope of the information available because of a 
key compromise is still a very important consideration. 

Jim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: hedging our bets -- in case SHA-256 turns out to be insecure

2009-11-16 Thread james hughes

On Nov 11, 2009, at 10:03 AM, Sandy Harris wrote:

 On 11/8/09, Zooko Wilcox-O'Hearn zo...@zooko.com wrote:
 
 Therefore I've been thinking about how to make Tahoe-LAFS robust against
 the possibility that SHA-256 will turn out to be insecure.
 
 NIST are dealing with that via the AHS process. Shouldn't you just use
 their results?
 
 We could use a different hash function ...
 There are fourteen candidates left in the SHA-3
 contest at the moment.  Several of them have conservative designs and good
 performance, but there is always the risk that they will be found to have
 catastrophic design flaws or that a great advance in hash function
 cryptanalysis will suddenly show how to crack them.
 
 Yes, but there's also a risk that whatever you come up with will turn
 out to be flawed.

I agree.

The logic of an unknown flaw being fixed flies in the face of prudent
cryptanalysis. If you don't know the flaw, how do you know you can fix it, or
that you have fixed it?

What if there is an unknown flaw in the fix? Wrap that again? Turtles all the 
way down. 

Putting multiple insecure algorithms together does not guarantee a secure one.
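For concreteness, here is the kind of "wrap" combiner being debated (a
hypothetical sketch, not any particular proposal): concatenate two hashes.
A collision must collide both functions, but Joux's multicollision result
shows that concatenating two iterated hashes gives far less security than
the combined output length suggests.

```python
# Naive concatenation combiner: SHA-256(x) || MD5(x). The 48-byte output
# does NOT deliver 48 bytes' worth of collision resistance (Joux 2004).
import hashlib

def combined_hash(data):
    return hashlib.sha256(data).digest() + hashlib.md5(data).digest()

d = combined_hash(b"hello world")
assert len(d) == 32 + 16        # 48 bytes of output
```

This is the sense in which a bandaid construction is marketing rather than
cryptanalysis: it looks stronger than the sum of its parts, but provably is not.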

The only solution that works is a new hash algorithm that is secure against
this (and every other) vulnerability. It may include SHA-256 as a primitive,
but a true fix is fundamentally a new hash algorithm.

This process is being worked on by a large number of smart people. I can 
guarantee you that this kind of construction has been looked at. 

It is my opinion that putting a bandaid around SHA-256 "just in case" is not
cryptanalysis, it's marketing.

Jim

P.S. Once SHA-3 comes out, your bandaid will look silly.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: FileVault on other than home directories on MacOS?

2009-09-28 Thread james hughes


On Sep 22, 2009, at 5:57 AM, Darren J Moffat wrote:


Ivan Krstić wrote:
TrueCrypt is a fine solution and indeed very helpful if you need  
cross-platform encrypted volumes; it lets you trivially make an  
encrypted USB key you can use on Linux, Windows and OS X. If you're  
*just* talking about OS X, I don't believe TrueCrypt offers any  
advantages over encrypted disk images unless you're big on  
conspiracy theories.


Note my information may be out of date.  I believe that MacOS native  
encrypted disk images (and thus FileVault) uses AES in CBC mode  
without any integrity protection, the Wikipedia article seems to  
confirm that is  (or at least was) the case http://en.wikipedia.org/wiki/FileVault


Unauthenticated CBC is indeed a problem
http://tinyurl.com/ycoaruo


There is also a sleep mode issue identified by the NSA:
http://crypto.nsa.org/vilefault/23C3-VileFault.pdf


I don't think that Jacob Appelbaum or Ralf-Philipp Weinmann work for  
the NSA (but having crypto.nsa.org is cool :-)


TrueCrypt on the other hand uses AES in XTS mode so you get  
confidentiality and integrity.


Technically, you do not get integrity. With XTS (P1619, narrow-block  
tweaked cipher) you are not notified of data integrity failures, but  
those failures are much less useful to an attacker than with CBC.  
With XTS:


1) You can return 16 byte chunks to previous values (ciphertext  
replay) as long as it is to the same place (offset) as it was before.


2) If you change a bit, you will randomize a 16 byte chunk of  
information.
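Both properties can be seen in a toy narrow-block tweaked mode (a 4-round
Feistel PRP built from MD5, wrapped in XEX-style tweak whitening; this is
NOT AES-XTS, only the same structure): flipping one ciphertext bit scrambles
exactly the 16-byte block it lands in, and every other block decrypts intact.

```python
# Toy XEX/XTS-style narrow-block tweaked cipher. The "block cipher" is a
# 4-round Feistel network keyed via MD5 -- NOT AES, NOT real XTS; it only
# demonstrates the mode's error-propagation behaviour.
import hashlib

def _f(key, rnd, half):                      # Feistel round function
    return hashlib.md5(key + bytes([rnd]) + half).digest()[:8]

def prp_enc(key, blk):
    L, R = blk[:8], blk[8:]
    for r in range(4):
        L, R = R, bytes(a ^ b for a, b in zip(L, _f(key, r, R)))
    return L + R

def prp_dec(key, blk):
    L, R = blk[:8], blk[8:]
    for r in reversed(range(4)):
        L, R = bytes(a ^ b for a, b in zip(R, _f(key, r, L))), L
    return L + R

def xex(key, sector, data, encrypt):         # per-block tweak whitening
    out = b""
    for i in range(0, len(data), 16):
        t = hashlib.md5(b"tweak" + bytes([sector, i])).digest()
        b16 = bytes(a ^ b for a, b in zip(data[i:i + 16], t))
        b16 = (prp_enc if encrypt else prp_dec)(key, b16)
        out += bytes(a ^ b for a, b in zip(b16, t))
    return out

key = b"k" * 16
plain = bytes(range(64))                     # one 64-byte "sector"
ct = bytearray(xex(key, 0, plain, True))
ct[20] ^= 0x01                               # flip one bit in block 1
pt = xex(key, 0, bytes(ct), False)
assert pt[:16] == plain[:16]                 # block 0 decrypts intact
assert pt[16:32] != plain[16:32]             # block 1 is randomized
assert pt[32:] == plain[32:]                 # blocks 2-3 decrypt intact
```

The ciphertext-replay property follows from the same structure: put back an
old 16-byte ciphertext block at the same offset and it decrypts cleanly to
the old plaintext.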


With the P1619.2 mode, which I believe is called TET (IEEE 1619.2, wide- 
block tweaked cipher), there are different characteristics. Usually the  
wide block is a sector, so it can be 512 bytes or some other value. In this  
case, you do not get complete integrity either. In this case:


1) You can return a sector to a previous value (sector replay) as long  
as it is to the same place (offset) as it was before.


2) If you change a bit, you will randomize a complete sector of  
information.


If you change this to ZFS Crypto
http://opensolaris.org/os/project/zfs-crypto/
You get complete integrity detection with the only remaining  
vulnerability that


1) you can return the entire disk to a previous state.

While I may have put you all asleep, the basic premise holds... XTS is  
better than unauthenticated CBC.

http://www.cpni.gov.uk/docs/re-20050509-00385.pdf
http://jvn.jp/niscc/NISCC-004033/index.html
http://www.kb.cert.org/vuls/id/302220




--
Darren J Moffat

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com




Re: Certainty

2009-08-21 Thread james hughes

Caution, the following contains a rant.

On Aug 19, 2009, at 3:28 PM, Paul Hoffman wrote:
I understand that "creaking" is not a technical cryptography term,  
but "certainly" is. When do we become certain that devastating  
attacks on one feature of hash functions (collision resistance) have  
any effect at all on even weak attacks on a different feature  
(either first or second preimages)?


This is a serious question. Has anyone seen any research that took  
some of the excellent research on collision resistance and used it  
directly for preimage attacks, even with greatly reduced rounds?


This is being done. What Perry said.

The longer that MD5 goes without any hint of preimage attacks, the  
less certain I am that collision attacks are even related to  
preimage attacks.


There was an invited talk at Crypto, "Alice and Bob Go To  
Washington: A Cryptographic Theory of Politics and Policy". This was  
interesting in that it explained that facts are not what politicians  
want

http://www.iacr.org/conferences/crypto2009/acceptedpapers.html#crypto06
and that politicians form blocs to create shared power.

It seems that your comment about certainty is not a technical one,  
but a political one. The bloc of people that have implemented MD-5  
believe that this algorithm is good enough, and that the facts that the  
hash function contains no science of how it works, cannot be proven  
to be resistant to pre-image attacks, nor even reduced to any known hard  
problem, are not "certain". Maybe this particular bloc just wants it  
to be secure? If MD-5 is secure against pre-image attacks, the  
cryptographic community does not know why. It seems that the only  
proof that can be accepted as certainty is an existence proof that  
the bad deed _has_ been done.


Maybe this is not really an MD-5 bloc, but an HMAC implementers'  
bloc. This bloc does have some results to hang their hats on. The  
paper "New Proofs for NMAC and HMAC: Security without  
Collision-Resistance" was published in 2006

http://eprint.iacr.org/2006/043.pdf
that states that as long as the compression function is a PRF, HMAC  
is secure. This is mostly because the algorithm is keyed. This places  
HMAC into the class of keyed PRFs (alongside ciphers) and out of the  
class of unkeyed hash functions.
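The keyed, two-pass structure in question, H((K xor opad) || H((K xor ipad)
|| m)), can be written out in a few lines and checked against the standard
library; the point of the Bellare proof is that this keying of the
compression function is what lets HMAC-MD5 stand apart from bare MD5.

```python
# HMAC-MD5 written out from the definition, checked against the stdlib.
import hashlib
import hmac

def hmac_md5(key, msg):
    if len(key) > 64:                   # MD5 block size is 64 bytes
        key = hashlib.md5(key).digest()
    key = key.ljust(64, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.md5(ipad + msg).digest()
    return hashlib.md5(opad + inner).digest()

assert hmac_md5(b"key", b"msg") == hmac.new(b"key", b"msg", hashlib.md5).digest()
```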


I find this interesting. Cryptographers knew in 2004 that the wheels  
just came off MD-5, and its future was going to be grim. The common  
sense was that a collision by itself was not relevant. Then there was  
the “Colliding X.509 Certificates”

http://eprint.iacr.org/2005/067
and still the common sense was that it could still be used. So then  
there was Chosen-Prefix Collisions for MD5 and Colliding X.509  
Certificates for Different Identities

http://www.win.tue.nl/hashclash/EC07v2.0.pdf
but that was still not enough. This Crypto, the paper Short Chosen- 
Prefix Collisions for MD5 and the Creation of a Rogue CA Certificate

http://www.iacr.org/conferences/crypto2009/acceptedpapers.html#crypto04

https://documents.epfl.ch/users/l/le/lenstra/public/papers/Crypto09nonanom.pdf
seems to have put a nail in this issue, but not the issue of the  
certainty of pre-image attacks.


Some believe that the Best Paper award was given for the persistence  
that the authors showed in continuing to spend time and effort on what  
the cryptographic community knows is a cart with no wheels on it, to  
counter the common-sense implementing bloc that does not believe it  
until they see it.


Effort placed on replacing MD-5 is more important now than taunting  
the cryptographers to prove that MD-5 pre-images are feasible when  
there is literally nothing proving that pre-images of MD-5 are  
difficult. (Again, this is for bare MD-5, not HMAC.)


Of course, I still believe in hash algorithm agility: regardless of  
how preimage attacks will be found, we need to be able to deal with  
them immediately.


I am curious if you mean immediately as in now, or immediately when a  
pre-image attack is found?



--Paul Hoffman, Director
--VPN Consortium


Jim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Crypto'09 Rump session to be webcast

2009-08-18 Thread james hughes

This year's rump session will include
A Live Trojan Message for MD5
A New Security Analysis of AES-128
In how many ways can you break Rijndael?
Alice and Bob Go to Heaven

The full agenda has been posted at
http://rump2009.cr.yp.to/


On Aug 14, 2009, at 11:56 AM, james hughes wrote:

The first Crypto rump session took place in 1981 and was immediately  
heralded as the most important meeting in cryptography. Each  
subsequent Crypto rump session has reached a new level of historical  
significance, outstripped only by the Crypto rump sessions that  
followed it.


The Crypto2009[1] Rump Session[2] will be broadcast live[3] starting  
at Tuesday, August 18th at 7:30pm to 11pm (PST/UTC-7)[4]. Download  
calendar event (ics)[5]


[1] http://www.iacr.org/conferences/crypto2009/
[2] http://rump2009.cr.yp.to/
[3] http://qtss.id.ucsb.edu/crypto2009/rump.sdp
[4] 
http://timeanddate.com/worldclock/fixedtime.html?month=8day=18year=2009hour=19min=30sec=0p1=137
[5] http://www.iacr.org/conferences/crypto2009/Crypto2009RumpSessionWe.ics

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com




Crypto'09 Rump session to be webcast

2009-08-14 Thread james hughes
The first Crypto rump session took place in 1981 and was immediately  
heralded as the most important meeting in cryptography. Each  
subsequent Crypto rump session has reached a new level of historical  
significance, outstripped only by the Crypto rump sessions that  
followed it.


The Crypto2009[1] Rump Session[2] will be broadcast live[3] starting  
at Tuesday, August 18th at 7:30pm to 11pm (PST/UTC-7)[4]. Download  
calendar event (ics)[5]


[1] http://www.iacr.org/conferences/crypto2009/
[2] http://rump2009.cr.yp.to/
[3] http://qtss.id.ucsb.edu/crypto2009/rump.sdp
[4] 
http://timeanddate.com/worldclock/fixedtime.html?month=8day=18year=2009hour=19min=30sec=0p1=137
[5] http://www.iacr.org/conferences/crypto2009/Crypto2009RumpSessionWe.ics

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: cleversafe says: 3 Reasons Why Encryption is Overrated

2009-08-09 Thread james hughes


On Aug 6, 2009, at 1:52 AM, Ben Laurie wrote:


Zooko Wilcox-O'Hearn wrote:

I don't think there is any basis to the claims that Cleversafe makes
that their erasure-coding (Information Dispersal)-based system is
fundamentally safer, e.g. these claims from [3]: "a malicious party
cannot recreate data from a slice, or two, or three, no matter what the
advances in processing power. ... Maybe encryption alone is 'good
enough' in some cases now - but Dispersal is 'good always' and
represents the future."


Surely this is fundamental to threshold secret sharing - until you reach
the threshold, you have not reduced the cost of an attack?


Until you reach the threshold, you do not have the information to  
attack. It is information-theoretically secure.
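A minimal Shamir threshold sketch over a prime field (an illustrative toy,
not Cleversafe's dispersal code) makes both halves of the claim concrete:
fewer than k shares reveal nothing information-theoretically, and any k
shares reconstruct the secret exactly.

```python
# Shamir k-of-n secret sharing: random degree-(k-1) polynomial with the
# secret as the constant term; reconstruction is Lagrange interpolation.
import random

P = 2**127 - 1                              # a Mersenne prime field

def split(secret, n, k, rng):
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):                    # interpolate at x = 0
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = split(123456789, n=5, k=3, rng=random.Random(1))
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 suffice
assert reconstruct(shares[2:]) == 123456789
```

The operational point in the paragraphs below is exactly that once an
attacker holds any k shares, this reconstruction is trivial.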


They are correct, if you lose a slice, or two, or three that's fine,  
but once you have the threshold number, then you have it all. This  
means that you must still defend the site from attackers, protect your  
media from loss, ensure your admins are trusted. As such, you have  
accomplished nothing to make the management of the data easier.


Assume your threshold is 5. You lost 5 disks... Whose information was  
lost? Anyone? Do you know? What if the 5 drives were lost over 5  
years, what then? CleverSafe cannot provide any security guarantees  
unless these questions can be answered. Without answers, CleverSafe is  
neither Clever nor Safe.


Jim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Unattended reboots

2009-08-03 Thread james hughes


On Aug 2, 2009, at 4:00 PM, Arshad Noor wrote:


Jerry Leichter wrote:
 How
does a server, built on stock technology, keep secrets that it can  
use to authenticate with other servers after an unattended reboot?   
Without tamper-resistant hardware that controls access to keys,  
anything the software can get at at boot, an attacker who steals a  
copy of a backup, say - can also get at.


Almost every e-commerce site (that needs to be PCI-DSS compliant) I've
worked with in the last few years, insists on having unattended  
reboots.


I penned a recent blog about this fact at

http://www.cryptoclarity.com/CryptoClarityLLC/Welcome/Entries/2009/7/23_Encrypted_Storage_and_Key_Management_for_the_cloud.html
or
http://tinyurl.com/klkrvu

It discusses this fact and how it can be mitigated. Specifically, how  
wrapped keys can be escrowed, and used to boot a machine in, what I  
consider, a significantly more secure manner. Given that you can never  
guarantee a cloud provider cannot tamper with your machine while it is  
running, this post describes the problem, a set of goals and one  
possible solution.


Encrypted kernels are a requirement. Geoff Arnold
http://speakingofclouds.com/
suggested that an AMI that can boot an encrypted AMI may solve the  
issue. A harder, but possible, solution would be to change the AMI's  
GRUB loader without changing AWS's infrastructure. Anyone interested  
in working on a prototype :-)


Jim



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: cleversafe says: 3 Reasons Why Encryption is Overrated

2009-07-26 Thread james hughes


On Jul 24, 2009, at 9:33 PM, Zooko Wilcox-O'Hearn wrote:

[cross-posted to tahoe-...@allmydata.org and  
cryptogra...@metzdowd.com]


Disclosure:  Cleversafe is to some degree a competitor of my Tahoe- 
LAFS project.

...
I am tempted to ignore this idea that they are pushing about  
encryption being overrated, because they are wrong and it is  
embarassing.


and probably patent pending regardless of there being significant  
amounts of prior art for their work. One reference is “POTSHARDS:  
Secure Long-Term Storage Without Encryption” by Storer, Greenan, and  
Miller at http://www.ssrc.ucsc.edu/Papers/storer-usenix07.pdf. Maybe  
they did include this in their application. I certainly do not know.  
They seem to have one patent

http://tinyurl.com/njq8yo
and 7 pending.
http://tinyurl.com/ntpsj9

...
But I've decided not to ignore it, because people who publicly  
spread this kind of misinformation need to be publicly contradicted,  
lest they confuse others.

...

The trick is cute, but I argue largely irrelevant. What follows is a  
response to this web page, which can probably be broadened to a  
criticism of any system that claims security and also claims that key  
management of some sort is not a necessary evil.


http://dev.cleversafe.org/weblog/?p=111 # Response Part 2:  
Complexities of Key Management


I agree with many of your points. I would like to make a few of my own.
1) If you are already paying the large penalty to Reed-Solomon encode  
the encrypted data, the cost of your secret sharing scheme is a small  
additional cost to bear, agreed. Using the hash to "prove" you have  
all the pieces is cute and does turn Reed-Solomon into an AONT. I will  
argue that if you were to do a Blakley key split of a random key, and  
append each portion to each portion of the encrypted file, you would  
get similar performance results. I will give you that your scheme is  
simpler to describe.
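The appended-key idea in its simplest form (an n-of-n XOR split of a random
key, a toy stand-in for a proper Blakley or Shamir split, and my own sketch
rather than anything from the post under discussion):

```python
# n-of-n XOR key split: each erasure-coded piece carries one share;
# all n shares XOR back to the key, any n-1 shares reveal nothing.
import os

def xor_split(key, n):
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

key = os.urandom(16)
shares = xor_split(key, 5)
joined = bytes(len(key))
for s in shares:
    joined = bytes(a ^ b for a, b in zip(joined, s))
assert joined == key            # all n shares together recover the key
```

A threshold scheme would replace XOR with polynomial or hyperplane shares,
but the cost relative to the erasure coding is similarly small.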


2) In my opinion, key management is more about process than  
cryptography. The whole premise of Shamir and Blakley is that each  
share is independently managed. In your case, they are not. All of the  
pieces are managed by the same people, possibly in the same data  
center, etc. Because of this, some could argue that the encryption has  
little value, not because it is bad crypto, but because the shares are  
not independently controlled. I agree that if someone steals one  
piece, they have nothing. They will argue, that if someone can steal  
one piece, it is feasible to steal the rest.


3) Unless broken drives are degaussed, if they are discarded, they can  
be considered lost. Because of this, there will be drive loss all the  
time (3% per year according to several papers). As long as all N  
pieces are not on the same media, you can actually lose the media, no  
problem. This can be expanded to a loss of a server, raid controllers,  
NAS box, etc. without problem as long as there is only N-1 pieces, no  
problem. What if you lose N chunks (drives, systems, etc.) over time,  
are you sure you have not lost control of someone’s data? Have you  
tracked what was on each and every lost drive? What is your process  
when you do a technology refresh and retire a complete configuration?  
If media destruction is still necessary, will resulting operational  
process really any easier or safer than if the data were just split?


4) What do you do if you believe your system has been compromised by a  
hacker? Could they have read N pieces? Could they have erased the logs?


5) I also suggest that there is other prior art out there for this  
kind of storage system. I suggest the paper “POTSHARDS: Secure Long- 
Term Storage Without Encryption” by Storer, Greenan, and Miller at http://www.ssrc.ucsc.edu/Papers/storer-usenix07.pdf 
 because it covers the same space, and has a good set of references  
to other systems.


My final comment is that you raised the bar, yes. I will argue that  
you did not make the case that key management is not needed. Secrets  
are still needed to get past the residual problems described in these  
comments. Keys are small secrets that can be secured at lower cost  
that securing the entire bulk of the data. Your system requires the  
bulk of the data to to be protected, and thus in the long run, does  
not offer operational efficiency that simple bulk encryption with a  
traditional key management provides.


Jim


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Fast MAC algorithms?

2009-07-26 Thread james hughes


On Jul 27, 2009, at 4:50 AM, James A. Donald wrote:


From: Nicolas Williams nicolas.willi...@sun.com
For example, many people use arcfour in SSHv2 over AES because  
arcfour

is faster than AES.


Joseph Ashwood wrote:
I would argue that they use it because they are stupid. ARCFOUR  
should have been retired well over a decade ago, it is weak, it  
meets no reasonable security requirements,


No one can break arcfour used correctly - unfortunately, it is  
tricky to use it correctly.


RC-4 is broken when used as intended. The output has a statistical  
bias and can be distinguished.

http://www.wisdom.weizmann.ac.il/~itsik/RC4/Papers/FluhrerMcgrew.pdf
and there is exceptional bias in the second byte
http://www.wisdom.weizmann.ac.il/~itsik/RC4/Papers/bc_rc4.ps
The latter is the basis for breaking WEP
http://www.wisdom.weizmann.ac.il/~itsik/RC4/Papers/wep_attack.ps
These are not attacks on a reduced algorithm, it is on the full  
algorithm.


If you take these into consideration, can it be used correctly? I  
guess tossing the first few words gets rid of the exceptional bias,  
and maybe changing the key often gets rid of the statistical bias? Is  
this what you mean by "used correctly"?
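For reference, here is bare RC4 (KSA + PRGA) with an optional drop of the
first keystream bytes, i.e. the usual "RC4-drop[n]" mitigation for the
early-byte biases; note the long-term Fluhrer-McGrew distinguisher remains
either way.

```python
# Bare RC4 with an optional keystream drop. Checked against the
# well-known "Key"/"Plaintext" test vector.
def rc4(key, n, drop=0):
    S = list(range(256))
    j = 0
    for i in range(256):                    # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(drop + n):               # keystream generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out[drop:])

ks = rc4(b"Key", 9)
ct = bytes(a ^ b for a, b in zip(b"Plaintext", ks))
assert ct.hex() == "bbf316e8d940af0ad3"     # classic RC4 test vector
```

WEP failed in part because it keyed RC4 with IV||key and dropped nothing, so
the key-dependent early bytes leaked.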


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Fast MAC algorithms?

2009-07-24 Thread james hughes


On Jul 24, 2009, at 1:30 PM, Peter Gutmann wrote:

[I realise this isn't crypto, but it's arguably security-relevant  
and arguably

interesting :-)].


As long as we think this is interesting, (although I respectfully  
disagree that there are any inherent security problems with TOE. Maybe  
there are insecure implementation...).



James Hughes hugh...@mac.com writes:

TOEs that are implemented in a slow processor in a NIC card have  
been shown
many times to be ineffective compared to keeping TCP in the fastest  
CPU

(where it is now).


The problem with statements like this is that they smack of the Linux
religious zealotry against TCP offload support in the kernel: "TOE's are bad
because we say they are, and we'll keep asserting this until you go away."
because we say they are, and we'll keep asserting this until you go  
away.


There were a dozen or so protocol offload research projects that the  
US government funded in the 90s. All failed. Are the people who say  
TOEs are bad speaking out of zealotry, or standing on the shoulders of  
the people that ran those projects? At Network Systems, we partnered  
with HT Kung of CMU at the time to move TCP out of a really slow  
DECstation. Result? An accelerator that cost as much as the workstation  
and was faster only until the next processor version was available. Yes,  
we could have reduced it to a chip, but it wasn't. The take-away was  
that improving the software is the gift that keeps on giving. Moore's  
law means you get a faster TCP every time the clock ticks.

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.1138

BTW, I am not a Linux bigot, just someone that got caught up in this  
issue more than a decade ago. I do not agree with your assertion or  
the Wikipedia page that this is linux bigotry. I find that page  
horribly inaccurate and self serving to the TOE manufacturing community.


What I learned from participating in a project that spent $5M of tax  
payer money was that The protocol itself is a small fraction of the  
problem.



A
decade ago, during the Win2K development, Microsoft were measuring a  
1/3
reduction in CPU usage just from TCP checksum offload.  Given the  
time frame
this was probably on 300MHz PII's, but then again it'd be with  
late-90s
vintage NICs.  On the other hand I've seen even more impressive  
figures with
their more recent TCP chimney offload (which just moves more of the  
NDIS stack

onto the NIC, I think it came out around Server 2003).

Does this mean that MS have figured out (a decade or so ago) how to  
make TOE
work while the OSS community has been too occupied telling everyone  
it doesn't
to do anything about it?  There must be some reason for the  
difference between

the two camps.


Offloading features like checksumming, fragmentation/reassembly (aka  
Large Segment Offload), packet categorization, splitting flows to  
different threads, etc. is not TOE.


TOE is offloading of the TCP stack. The thin line that is crossed is  
where the TCP state is kept. If the state is kept in the card, then  
the protocol to get the data reliably to the application has more  
corner cases (hence complexity), since the IP layer can be lossy and  
the socket layer cannot. In all the research, this has always been  
the case.


If there is something Windows has not learned, it could be that  
processing TCP should be simple and quick. Since the source code is not  
available, I don't know if their software falls into the "too  
complicated" camp or not... In the case of Chimney partial-stack  
offload, the state is in both places. Sounds simple and  
straightforward, right?


The case of iSCSI where a complete protocol conversion is done (the  
card looks like a SCSI card, but the data goes out over TCP/IP) it is  
a different story (which is also arguably still about solving the OS  
vendor's lack of software agility with hardware), but that is not the  
intent of this discussion.


I fully agree that offloading features that makes the TCP processing  
easier is a good thing.


Back to crypto?


Peter.


Jim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Fast MAC algorithms?

2009-07-23 Thread james hughes
Note for Moderator. This is not crypto but TOE being the solution to  
networking performance problems is a perception that is dangerous to  
leave in the crypto community.


On Jul 23, 2009, at 11:45 PM, Nicolas Williams wrote:


On Thu, Jul 23, 2009 at 05:34:13PM +1200, Peter Gutmann wrote:

mhey...@gmail.com mhey...@gmail.com writes:
2) If you throw TCP processing in there, unless you are  
consistantly going to
have packets on the order of at least 1000 bytes, your crypto  
algorithm is

almost _irrelevant_.
[...]
for a Linux 2.2.14 kernel, remember, this was 10 years ago.


Could the lack of support for TCP offload in Linux have skewed  
these figures
somewhat?  It could be that the caveat for the results isn't so  
much this was
done ten years ago as this was done with a TCP stack that ignores  
the

hardware's advanced capabilities.


How much NIC hardware does both, ESP/AH and TCP offload?  My guess:  
not

much.  A shame, that.

Once you've gotten a packet off the NIC to do ESP/AH processing,  
you've

lost the opportunity to use TOE.


IPSEC offload can have value. TOE are far more controversial.

TOEs that are implemented in a slow processor in a NIC card have been  
shown many times to be ineffective compared to keeping TCP in the  
fastest CPU (where it is now). For vendors that can't optimize their  
TCP implementation (because it is just too complicated for then?) TOE  
is a siren call that detracts them from their real problem. Look at  
Van Jacobson post of May 2000 entitled TCP in 30 instructions.

http://www.pdl.cmu.edu/mailinglists/ips/mail/msg00133.html
There was a paper about this, but I am at a loss to find it. One can  
go even farther back to An Analysis of TCP Processing Overhead,   
Clark, Jacobson, Romkey and Salwen in 1989 which states The protocol  
itself is a small fraction of the problem.

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.75.5741

Back to crypto please.


Nico
--

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com




Re: 112-bit prime ECDLP solved

2009-07-17 Thread james hughes


On Jul 14, 2009, at 12:43 PM, James A. Donald wrote:


2033130

Subsequent expansions in computing power will involve breaking up  
Jupiter to build really big computers, and so forth, which will slow  
things down a bit.


So 144 bit EC keys should be good all the way to the singularity and  
a fair way past it.


Prediction is very difficult, especially about the future.

I have researched the possibility of 50 or 100 year key sizes. All we  
have to do is look back 50 years to the (unbreakable) Enigma, and 30  
years to the famous Sci.Am article by Rivest that said it would take  
40 quadrillion years to break the challenge, which actually took 25;  
or, more recently, FEAL, or RC-4 (WEP), or MD-5, or SHA-1. Need I say  
more?


If we assume that all knowledge to be discovered has been discovered,  
and all mathematical insight humanity is capable of has been achieved,  
you are correct that 144 bit EC keys are good all the way to the  
singularity (which actually depends on the Hubble constant, but I  
digress) and that everything that could be invented has been invented.


I believe it is folly to suggest that 144 bit keys will never be  
broken. Frankly, I hope to see the day.


Jim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Seagate announces hardware FDE for laptop and desktop machines

2009-06-14 Thread james hughes


On Jun 10, 2009, at 4:19 PM, travis+ml-cryptogra...@subspacefield.org  
wrote:



Reading really old email, but have new information to add.

On Wed, Oct 03, 2007 at 02:15:38PM +1000, Daniel Carosone wrote:
Speculation: the drive always encrypts the platters with a (fixed)  
AES

key, obviating the need to track which sectors are encrypted or
not. Setting the drive password simply changes the key-handling.

Implication: fixed keys may be known and data recoverable from  
factory

records, e.g. for law enforcement, even if this is not provided as an
end-user service.


There was an interesting article in 2600 recently about ATA drive
security.

It's in Volume 26, Number 1 (Spring 2009).  Sorry that I don't have an
electronic copy.

The relevant bit of it is that there are two keys.  One key is for the
user, and one (IIRC, it is called a master key) is set by the factory.

IIRC, there was a court case recently where law enforcement was able
to read the contents of a locked disk, contrary to the vendor's claims
that nobody, even them, would be able to do so.


All of these statements may be true. The standardized command set for  
encrypting disk drives does have a "set master key"  
command. If this command exists, and if the user had software that  
resets this master password, then the backdoor would have been closed.  
(I know, there are a lot of "if"s in that sentence.)

http://www.dtc.umn.edu/disc/resources/RiedelISW5r.pdf
http://www.usenix.org/events/lsf07/tech/riedel.pdf
http://www.t10.org/ftp/t10/document.04/04-004r2.pdf
and from universities you can access
http://ieeexplore.ieee.org/iel5/10842/34160/01628480.pdf
https://www.research.ibm.com/journal/rd/524/nagle.html

Jim



Re: Warning! New cryptographic modes!

2009-05-22 Thread james hughes
I believe that mode has been renamed EME2 because people were having a  
fit over the *.


On May 14, 2009, at 12:37 AM, Jon Callas wrote:
I'd use a tweakable mode like EME-star (also EME*) that is designed  
for something like this. It would also work with 512-byte blocks.



Re: Destroying confidential information from database

2009-04-30 Thread james hughes


On Mar 9, 2009, at 10:32 PM, Mads wrote:



I know of procedures and programs to erase files securely from  
disks, Guttman did a paper on that


What I don't know is how to securely erase information from a  
database.


If the material is that sensitive, and you only want to selectively  
delete the information, the only way is to:


1) Delete the information from the database using the commercial means.
2) Export the database.
3) Inspect the exported data to ensure all the sensitive information  
is deleted.

4) Import the database to another storage system.
5) Destroy (degauss, wipe) the original storage system.
6) The truly paranoid would also destroy the RAID controllers (since  
they contain NVRAM).
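Step 3 above is the one most easily automated. A minimal sketch, assuming the export is plain text; the terms and sample dump here are hypothetical placeholders, not from the original procedure:

```python
# Hedged sketch of step 3: scan an exported database dump for sensitive
# terms before re-importing it. Terms and dump contents are made up.
def find_leaks(dump_text, sensitive_terms):
    """Return line numbers of lines still containing a sensitive term."""
    hits = []
    for lineno, line in enumerate(dump_text.splitlines(), start=1):
        if any(term.lower() in line.lower() for term in sensitive_terms):
            hits.append(lineno)
    return hits

dump = "id,name\n1,alice\n2,REDACTED\n"
print(find_leaks(dump, ["ssn", "alice"]))  # -> [2]
```

In practice the scan would need to account for binary columns and encodings, which is part of why the procedure is not trivial.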


Not trivial...

Jim



Fwd: SMS 4 algorithm implemented as a spreadsheet.

2009-02-25 Thread james hughes


Building a reference implementation of a cipher can be an invaluable  
aid to writing code. Building a cipher in a spreadsheet, while some  
may find it strange, is a valid way to describe a cipher in a visual  
sense. This has been done before with The Illustrated DES Spreadsheet,  
and it has now been done again. With the help of a Chinese document  
and an English translation by Whitfield Diffie and George Ledin, I was  
able to create a spreadsheet that demonstrates the SMS4 algorithm.


This algorithm is rather infamous for having been proposed as a WiFi  
standard. It has raised its head again, implemented inside a  
derivative of the TCG-standard TPM module for China under the name  
TCM module.


The SMS4 spreadsheet is available in Mac OS X numbers format and .xls  
format at

http://tinyurl.com/cfp6w2

Enjoy.

Jim





Re: SHA-3 Round 1: Buffer Overflows

2009-02-24 Thread james hughes


On Feb 24, 2009, at 6:22 AM, Joachim Strömbergson wrote:


Aloha!

Ian G wrote:

However I think it is not really efficient at this stage to insist on
secure programming for submission implementations.  For the simple
reason that there are 42 submissions, and 41 of those will be thrown
away, more or less.  There isn't much point in making the 41 secure;
better off to save the energy until the one is found.  Then
concentrate the energy, no?


I would like to humbly disagree. In the case of MD6 the fix meant that a
buffer had to be doubled in size (according to the Fortify blog). This
means that the memory footprint, and thus its applicability for
embedded platforms, was (somewhat) affected.

That is, secure implementations might have different requirements than
what might have been stated, and we want to select an algorithm based
on the requirements for a secure implementation, right?


Two aspects of this conversation.

1) This algorithm is designed to be parallelized. This is a  
significant feat. C is a language where parallelization is possible,  
but fraught with peril. We have to look past the buffer overflow to  
the motivation of the complexity.


2) This algorithm -can- be implemented with a small footprint -if-  
parallelization is not intended. If this algorithm could not be  
minimized then this would be a significant issue, but this is not the  
case.
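The parallel structure is worth sketching. MD6 hashes the leaves of a tree independently and combines the results upward; the following is a hedged illustration of that shape only, substituting SHA-256 for MD6's compression function (this is not MD6 itself). The sequential loop also shows why a small-footprint serial implementation remains possible.

```python
import hashlib

def tree_hash(data, leaf_size=64):
    # Hash each leaf independently -- in MD6 these leaf compressions are
    # exactly what can run in parallel; here they run in a simple loop.
    level = [hashlib.sha256(data[i:i + leaf_size]).digest()
             for i in range(0, len(data), leaf_size)]
    if not level:  # empty input: hash the empty leaf
        level = [hashlib.sha256(b"").digest()]
    # Combine pairs of child digests until a single root remains.
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + 2])).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

print(tree_hash(b"x" * 256).hex()[:16])
```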


I would love this algorithm to be implemented in an implicitly  
parallel language like Fortress.

http://projectfortress.sun.com/Projects/Community

Jim



Re: Crypto Craft Knowledge

2009-02-20 Thread James Hughes


On Feb 14, 2009, at 12:54 PM, David Molnar wrote:


Ben Laurie wrote:

[snip discussion of bad crypto implementation practices]

Because he is steeped in the craft
knowledge around crypto. But most developers aren't. Most developers
don't even have the right mindset for secure coding, let alone  
correct
cryptographic coding. So, why on Earth do we expect them to follow  
our

unwritten rules, many of which are far from obvious even if you
understand the crypto?


Yes, there's a need for a crypto practices FAQ to which one can  
refer.

In addition to individual education, it'd be helpful to have something
when pointing out common mistakes.

[snip specific discussion]

I find this conversation off the point. Consider another trade, like  
woodworking. No single FAQ could cover building a picture frame, a  
dining room table, and a covered bridge. A FAQ for building a picture  
frame alone would be possible, but that is not the FAQ being discussed.


Crypto protocol failures are not trivial. The recent CBC attack on SSH  
shows that this is the case.

http://secunia.com/Advisories/32760/
What FAQ would prescribe how not to make this mistake?

There are PhD programs related to this subject. I would argue  
(dovetailing with another thread) that any such FAQ should point to  
well-vetted implementations of information-delivery protocols like SSL  
and SSH, and that its advice on using crypto libraries directly should  
be that this is dangerous and should only be attempted with proper  
oversight and/or training.


Crypto protocols are not trivial, and suggesting that a FAQ could take  
the uninitiated all the way to secure coding does more harm than good.





Crypto2008 rump session to be webcast

2008-08-16 Thread james hughes

All:

The Crypto2008 rump session will be webcast live starting at 19:30 on  
August 19th. The details will be posted on the iacr website at

http://www.iacr.org

Local times are
http://tinyurl.com/6xrln9

Enjoy.

Jim

PS. Please feel free to forward this as widely as possible. 




Call for papers for the Security in Storage Workshop 2008, due May 30th

2008-05-22 Thread james hughes


The 5th international Security in Storage Workshop (SISW)
http://ieeeia.org/sisw/2008/
will be held on Sept 25th, 2008 in conjunction with MSST 2008
http://storageconference.org/2008/
 and the Key Management Summit 2008.
http://www.keymanagementsummit.com/2008/

Prospective participants should submit either a full paper (not to  
exceed 12 single-spaced pages) for presentation and publication in the  
proceedings, or a short abstract suggesting alternative presentation  
forms, discussion items, or panel topics.  
Please submit papers to James Hughes ([EMAIL PROTECTED]) by May 30th.


Thanks

Jim Hughes



Call for presentations: Cryptographic e-voting systems for the IACR

2008-05-22 Thread james hughes


The International Association for Cryptologic Research (http://www.iacr.org/ 
) is seeking presentations and demos of e-voting systems. For its next  
meeting in August-17, 2008 (in Santa-Barbara, CA, USA), the IACR board  
would like to invite presentations and demos of cryptographic e-voting  
systems that are open source and freely available for all.


For more information see http://www.iacr.org/elections/cfp.html



Re: Snake oil crypto of the day: BabelSecure Samurai

2008-04-22 Thread james hughes

The company and all its assets are for sale. Starting price $20M.
http://babelsecure.com/property.aspx


On Apr 16, 2008, at 8:49 PM, Ali, Saqib wrote:


See:
http://babelsecure.com/challenge.aspx

Snake-oil sales pitch:
The creators of BabelSecure are so confident in the ability and
security of Samurai, they have created the Turing Challenge. The first
individual or team to break the following code will earn $5000

saqib
http://doctrina.wordpress.com/





Re: Password hashing

2007-10-12 Thread james hughes

I forgot to add the links...
http://people.redhat.com/drepper/sha-crypt.html
http://people.redhat.com/drepper/SHA-crypt.txt

On Oct 11, 2007, at 10:19 PM, james hughes wrote:

A proposal for a new password hashing based on SHA-256 or SHA-512  
has been proposed by RedHat but to my knowledge has not had any  
rigorous analysis. The motivation for this is to replace MD-5 based  
password hashing at banks where MD-5 is on the list of do not use  
algorithms. I would prefer not to have the discussion MD-5 is good  
enough for this algorithm since it is not an argument that the  
customers requesting these changes are going to accept.


Jim





Password hashing

2007-10-12 Thread james hughes
A proposal for a new password hashing based on SHA-256 or SHA-512 has  
been proposed by RedHat but to my knowledge has not had any rigorous  
analysis. The motivation for this is to replace MD-5 based password  
hashing at banks where MD-5 is on the list of do not use algorithms.  
I would prefer not to have the discussion MD-5 is good enough for  
this algorithm since it is not an argument that the customers  
requesting these changes are going to accept.
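The SHA-crypt proposal specifies its own salt-and-iterate loop; as a hedged illustration of the same general pattern only (per-user random salt plus many SHA-512 iterations), here is PBKDF2-HMAC-SHA-512 from Python's standard library. This is not RedHat's actual construction, and the salt length and iteration count are arbitrary choices for the example:

```python
import hashlib, os

# Hedged sketch: the same general shape as SHA-crypt (salted, iterated
# SHA-512 password hashing), but using PBKDF2 rather than the proposed
# algorithm itself. Salt length and round count are illustrative.
def hash_password(password, salt=None, rounds=100_000):
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, rounds)
    return salt, digest

salt, d1 = hash_password("correct horse")
_, d2 = hash_password("correct horse", salt)  # same salt -> same digest
print(d1 == d2)  # -> True
```

The design point is the same one SHA-crypt makes against plain MD5 crypt: a per-user salt defeats precomputed tables, and the iteration count slows brute force.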


Jim



Re: Full Disk Encryption solutions selected for US Government use

2007-10-10 Thread james hughes


On Oct 8, 2007, at 4:27 AM, Steven M. Bellovin wrote:


On Mon, 18 Jun 2007 22:57:36 -0700
Ali, Saqib [EMAIL PROTECTED] wrote:


US Government has select 9 security vendors that will product drive
and file level encryption software.

See:
http://security-basics.blogspot.com/2007/06/fde-fde-solutions-selected-for-us.html
OR
http://tinyurl.com/2xffax



Out of curiosity, are any open source FDE products being evaluated?

--Steve Bellovin, http://www.cs.columbia.edu/~smb


Out of curiosity, why was Vista (BitLocker) not mentioned?



Re: debunking snake oil

2007-09-03 Thread james hughes
I am all for humor... Can you give us a hand with how to find this  
patent?


On Sep 2, 2007, at 2:27 PM, Axel Horns wrote:


On Fri, August 31, 2007 18:54, Stephan Neuhaus wrote:


Fun,


See German patent document DE10027974A1 (application was refused  
in

2006).

Axel H. Horns





Security In Storage Workshop- Extended Deadline

2007-06-09 Thread James Hughes

Call for papers, submission deadline now June 15th.

The 4th International Security In Storage Workshop will be held  
September 27, 2007 (Thursday) at Paradise Point Resort and Spa in San  
Diego, California, USA. The workshop is co-located with the 24th IEEE  
Conference on Mass Storage Systems and Technologies (MSST2007).


Papers related to securing storage and circumventing secure storage  
are welcome. This includes databases, file systems, removable media,  
embedded systems, cell phones, etc.


For more information, and submissions, see
http://ieeeia.org/sisw/2007/

and the call for papers is at
http://ieeeia.org/sisw/2007/SISW07CFP.pdf

Please feel free to forward this email to other lists.

Jim Hughes
Program Chair
SISW 2007



Re: It's a Presidential Mandate, Feds use it. How come you are not using FDE?

2007-01-22 Thread james hughes


On Jan 19, 2007, at 4:06 AM, Bill Stewart wrote:


[...] if you're trying to protect against KGB-skilled attacks [...]



On the other hand, if you're trying to protect against
lower-skilled attackers, [...]



I always find these arguments particularly frustrating.

By slowly raising the bar for the lower-skilled criminals, you get the  
effect on Steven's firewall book cover (I forget the edition) where  
you must be a certain height to attack the castle.


For me, the bottom line is that if you protect against the former,  
then you get the latter, and it is only a small matter of time before  
the lower-skilled people get a script to do the higher-quality  
attacks. Remember WEP?


I really have to question continuing a snail's-pace information- 
protection arms race when we have all the tools we need to properly  
defend ourselves.




Re: It's a Presidential Mandate, Feds use it. How come you are not using FDE?

2007-01-22 Thread james hughes


On Jan 18, 2007, at 6:57 PM, Saqib Ali wrote:



When is the last time you checked the code for the open source app
that you use, to make sure that it is written properly?


30 seconds ago.

What mode is it using? How much information is encrypted under a  
single key? Was the implementation FIPS certified? And the list goes on.


These are important issues.






Re: news story - Jailed ID thieves thwart cops with crypto

2006-12-21 Thread james hughes


On Dec 20, 2006, at 8:44 AM, [EMAIL PROTECTED] wrote:



http://news.com.com/Jailed+ID+thieves+thwart+cops+with+crypto/ 
2100-7348_3-6144521.html


[...]


  According to the Crown Prosecution Service (CPS), which confirmed
  that Kostap had activated the encryption after being arrested,
  it would take 400 computers 12 years to crack the code.


[...]

What algorithm was that? Seems like a really short time, especially  
if you have a 4000 or larger CPU cluster...






Crypto rump session to be webcast.

2006-08-21 Thread james hughes

All:

The Rump session of this year's Crypto conference will be webcast  
Aug. 22 (tomorrow) starting at 19:30 pacific. Other timezones here:

http://tinyurl.com/otxxu
and the webcast will be broadcast in Quicktime and will be available  
here:

rtsp://qtss.id.ucsb.edu/crypto.sdp

For more information (including the agenda to be posted) will be  
available here:

http://www.iacr.org/conferences/crypto2006/rump.html

Please feel free to forward this message to other lists.

Jim





Re: compressing randomly-generated numbers

2006-08-11 Thread james hughes


On Aug 9, 2006, at 8:44 PM, Travis H. wrote:


Hey,

I was mulling over some old emails about randomly-generated numbers
and realized that if I had an imperfectly random source (something
less than 100% unpredictable), that compressing the output would
compress it to the point where it was nearly so.  Would there be any
reason to choose one algorithm over another for this application?


This is neither correct nor a good idea.

Taking almost random information and compressing it will lead to less  
random results.


Specifically, I will give the general LZW case and then go to the BZ2  
case.


1) For LZW (even ignoring the magic numbers), if a byte does not  
match anything in the dictionary (it starts with a dictionary of all  
0s, so the probability of a match on the first byte is low) then that  
byte will get a header of a 0 bit. That byte now becomes 9 bits. The  
next byte will have a dictionary of the previous byte and all zeros.  
The chance of a match will still be small and putting that into the  
dictionary will be a 9 bit field with 0s. So in the first 2 bytes, I  
can almost guarantee I know where 2 zero bits are.


2) BZ2 transforms the data and then uses LZW. See 1)

The correct way to improve almost-random data is to process it with  
a hash-like compression function.
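The contrast can be demonstrated directly. A rough sketch, with zlib standing in for an LZW-style coder; the point, not the exact byte counts, is what matters:

```python
import hashlib, os, zlib

raw = os.urandom(4096)             # stand-in for an imperfect source
compressed = zlib.compress(raw)    # dictionary coder, as discussed above

# Random input is incompressible: the output is no smaller, and the
# container format contributes known, predictable bits (the structure
# described in points 1 and 2 above).
print(len(compressed) >= len(raw))  # -> True

# The suggested conditioner: fold the source through a hash-like
# compression function instead.
conditioned = hashlib.sha256(raw).digest()
print(len(conditioned))             # -> 32
```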


Jim




Looking for fast KASUMI implementation

2005-12-15 Thread james hughes

Hello list:

I have a research project that is looking for a fast -software-  
implementation of the KASUMI block cipher. I have found many papers  
on doing this in hardware, but nothing in software. While free is  
better (as is beer), I will consider a purchase.


FYI, KASUMI is the cryptographic engine of the 3GPP.
http://en.wikipedia.org/wiki/3gpp

Thanks.
jim




Re: Another entry in the internet security hall of shame....

2005-08-29 Thread james hughes
Listening to this thread and hearing all the hyperbole on both sides,  
I would suggest we may need to add more fuel to the fire.


There was a rump presentation at the recent Crypto on the use of  
Ceremonies (which, pardon my misstatement in advance, is claimed to  
be computer protocols with the humans included). The presentation  
states, Design a great protocol, prove it secure; add a user, it’s  
insecure. This specifically discusses SSL.


The entire rump session is at
   http://www.iacr.org/conferences/crypto2005/rumpSchedule.html

scroll down to
   Ceremonies by Carl Ellison

The presentation and video
   http://www.iacr.org/conferences/crypto2005/r/48.ppt
   http://www.iacr.org/conferences/crypto2005/r/48.mov

The video is about 50MB.

Thanks

jim

On Aug 28, 2005, at 10:32 PM, James A. Donald wrote:


--
From:   Dave Howe [EMAIL PROTECTED]


2) Google got into the CA business; namely, all
GoogleMail owners suddenly found they could send and
receive S/Mime messages from their googlemail
accounts, using a certificate that just appeared and
was signed by the GoogleMail master cert. Given the
GoogleMail user base, this could make GoogleMail a
defacto CA in days.

3) This certificate was downloaded to your GoogleTalk
client on login, and NEVER cached locally

Ok, from a Security Professional's POV this would be a
horror - certificates all generated by the CA (with no
guarantees they aren't available to third parties) but
it *would* bootstrap X509 into common usage,



That horse is dead.  It is not going into common usage.

SSL works in practice, X509 with CA certs does not work
in practice.  People have been bullied into using it by
their browsers, but it does not give the protection
intended, because people do what is necessary to avoid
being nagged by browsers, not what is necessary to be
secure.

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 mQ0rM7wYdVTuoeMRUcrpDc1V9pUqhEgUmJMtyCZZ
 469u1yKDDCKWaUWwU/LYyE/7CVNRZV7OjXCs+Kyyc










Re: webcast of crypto rumpsession this year?

2005-08-12 Thread James Hughes
At this time I believe the answer is no. I set it up last year and  
have not this year. I take it that there is interest?


I will send an email to the group if this changes.

Thanks

jim

On Aug 12, 2005, at 9:07 AM, Mads Rasmussen wrote:



Anyone knows whether there will be webcasts from this years Crypto  
conference?


--
Mads Rasmussen
Security Consultant
Open Communications Security
+55 11 3345 2525










SISW05, the 3rd International IEEE Security in Storage Workshop

2005-07-09 Thread james hughes


3rd International IEEE Security in Storage Workshop
December 13, 2005
Golden Gate Holiday Inn, San Francisco, California USA

Sponsored by the IEEE Computer Society
 Task Force on Information Assurance (TFIA)
 Part of the IEEE Information Assurance Activities (IEEEIA)

Held In Cooperation and Co-Located With the
4th USENIX Conference on File and Storage Technologies (FAST05)
 December 14-16, 2005, San Francisco, CA, USA

In Cooperation with the
 IEEE Mass Storage Systems Technical Committee (MSSTC)

Description

Meeting the challenge to protect stored information critical to  
individuals, corporations, and governments is made more difficult by  
the continually changing uses of storage and the exposure of storage  
media to adverse conditions.


Example uses include employment of large shared storage systems for  
cost reduction and, for convenience, wide use of transiently-connected  
storage devices offering significant capacities and manifested in many  
forms, often embedded in mobile devices.


Protecting intellectual property, privacy, health records, and  
military secrets when media or devices are lost, stolen, or captured  
is critical to information owners.


A comprehensive, systems approach to storage security is required for  
the activities that rely on storage technology to remain or become  
viable.


This workshop serves as an open forum to discuss storage threats,  
technologies, methodologies and deployment.


The workshop seeks submissions from academia and industry presenting  
novel research on all theoretical and practical aspects of designing,  
building and managing secure storage systems; possible topics  
include, but are not limited to the following:

- Cryptographic Algorithms for Storage
- Cryptanalysis of Systems and Protocols
- Key Management for Sector and File based Storage Systems
- Balancing Usability, Performance and Security concerns
- Unintended Data Recovery
- Attacks on Storage Area Networks and Storage
- Insider Attack Countermeasures
- Security for Mobile Storage
- Defining and Defending Trust Boundaries in Storage
- Relating Storage Security to Network Security
- Database Encryption
- Search on Encrypted Information

The goal of the workshop is to disseminate new research, and to bring  
together researchers and practitioners from both governmental and  
civilian areas. Accepted papers will be published by the IEEE  
Computer Society Press in the workshop proceedings and become part of  
the IEEE Digital Library.


Workshop Sponsor
- Jack Cole (US Army Research Laboratory, USA)

Program Chair
- James Hughes (StorageTek, USA)

Program Committee
- Don Beaver (USA)
- John Black (University of Colorado, USA)
- Randal Burns (Johns Hopkins University, USA)
- Ronald Dodge (United States Military Academy, USA)
- Kevin Fu (University of Massachusetts Amherst, USA)
- Russ Housley (Vigil Security, USA)
- Yongdae Kim (University of Minnesota, USA)
- Ben Kobler (NASA, USA)
- Noboru Kunihiro (University of Electro-Communications, Japan)
- Arjen Lenstra (Lucent Technologies' Bell Laboratories and
 Technische Universiteit Eindhoven, Netherlands)
- Fabio Maino (Cisco Systems, USA)
- Ethan Miller (University of California, Santa Cruz, USA)
- Reagan Moore (University of California, San Diego, USA)
- Dalit Naor (IBM Haifa, Israel)
- Andrew Odlyzko (University of Minnesota, USA)
- Rod Van Meter (Keio University, Japan)
- Tom Shrimpton (Portland State, USA)
- John Viega (Secure Software, USA)
- Erez Zadok (Stony Brook University, USA)
- Yuliang Zheng (University of North Carolina, USA)

Submissions

Papers must begin with the title, authors, affiliations, a short  
abstract, a list of key words, and an introduction. The introduction  
should summarize the contributions of the paper at a level  
appropriate for a non-specialist reader. Papers must be submitted in  
PDF format less than 4MB in size (final paper has no limit). Email  
submissions must attach the paper, specify if this is a duplicate  
work, and be sent to [EMAIL PROTECTED]


Papers should be at most 12 pages in length including the  
bibliography, figures, and appendices (using 10pt body text and  
two-column layout). Authors are responsible for obtaining appropriate  
clearances. Authors of accepted papers will be asked to sign IEEE  
copyright release forms. Final submissions must be in camera-ready  
PostScript or PDF. Authors of accepted papers must guarantee that  
their paper will be presented at the conference.


Papers that duplicate work that any of the authors have or will  
publish elsewhere are acceptable for presentation at the workshop.  
However, only original papers will be considered for publication in  
the proceedings.


Although full papers are preferred, submissions of extended abstracts  
describing the final paper will be considered based on merit and  
assessing the author's ability to complete the paper within the  
allotted time.


Important Dates

Paper due: September 1, 2005

Re: [IP] One cryptographer's perspective on the SHA-1 result

2005-03-06 Thread james hughes
On Mar 4, 2005, at 5:23 PM, James A. Donald wrote:
The attacks on MD*/SHA* are weak and esoteric.
On this we respectfully disagree.
You make it sound trivial. Wang has been working on these results for 
over 10 years. She received the largest applause from her peers I have 
ever seen at a Crypto session (Crypto 2004).

It is not so fundamentally broken as to justify starting over.
On this I agree.
My recommendation for anyone that listens to (nobody) me is to abandon 
the MD series and SHA algorithms below SHA-256 for everything including 
certificates, pgp and even HMAC. But these are my inclinations. I would 
rather migrate to stronger crypto than have to continually justify why 
I continue to use algorithms that have known weaknesses.

$0.02
--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 QVYtFQAELN4YlZ9xB60CvXTqW8QT8rOABMbJrPXE
 4hz2qo1jnDwc3tmFFeyh6lG9sOrXL1783FYSh2s+v
What software do you use for this? Is it ECC or RSA?
Thanks
jim



Re: link-layer encryptors for Ethernet?

2005-02-09 Thread james hughes
The following device is a layer 2 tunneling device that has 256 bit AES 
at up to 400Mb/s.

http://blueridgenetworks.com/products/index.htm
http://blueridgenetworks.com/support/borderguard_vpn__serv_res_ctr.htm
Hope this helps
On Feb 8, 2005, at 11:29 AM, Russell Nelson wrote:
Steven M. Bellovin writes:
Are there any commercial link-layer encryptors for Ethernet available?
I know that Xerox used to make them, way back when, but are there any
current ones, able to deal with current speeds (and connectors)?
Given the price of gigE, it's hard to say that a 100Mbps adapter is
current, but Intel has one with 3DES.  I recently went through my
collection and threw out about a hundred antique (ISA / MCA) Ethernet
cards, but I kept all the PCI ones.  With sufficient inducement I
could go grovelling through the Intel ones to get you a part number.
--
--My blog is at angry-economist.russnelson.com  | The laws of physics 
cannot
Crynwr sells support for free software  | PGPok | be legislated.  
Neither can
521 Pleasant Valley Rd. | +1 315-323-1241 cell  | the laws of 
countries.
Potsdam, NY 13676-3213  | +1 212-202-2318 VOIP  |




Re: Is 3DES Broken?

2005-02-04 Thread james hughes
On Feb 2, 2005, at 1:32 PM, bear wrote:
On Mon, 31 Jan 2005, Steven M. Bellovin wrote:
snip re: 3des broken?
[Moderator's note: The quick answer is no. The person who claims
otherwise is seriously misinformed. I'm sure others will chime
in. --Perry]
[snip]
When using CBC mode, one should not encrypt more than 2^32 64-bit
blocks under a given key.
[snip]
whichever it is, as you point out there are other and more secure
modes available for using 3DES if you have a fat pipe to encrypt.
I don't want to take this down a rat-hole, but I respectfully disagree. 
The small block size of 3DES is also an issue with more secure modes.

CCM states that only 128-bit block ciphers are to be used. The NIST document 
states For CCM, the block size of the block cipher algorithm shall be 
128 bits; currently, the AES algorithm is the only approved block 
cipher algorithm with this block size.
	http://csrc.nist.gov/publications/nistpubs/800-38C/SP800-38C.pdf

Ferguson points out that in OCB there is a birthday bound at the number 
of packets. From the paper, Collision attacks are much easier when 64-bit 
block ciphers are used. Therefore, we most strongly advise never to use 
OCB with a 64-bit block cipher.
	http://csrc.nist.gov/CryptoToolkit/modes/comments/Ferguson.pdf

The basis of this is that the mode loses packet security at a birthday 
bound on the number of blocks. In communications this is 2^32 blocks; 
if we assume 1-KByte blocks, that comes to 4 TBytes (roughly a thousand 
single-layer DVDs). As network performance grows, this will be a very 
common transfer size.

While 3DES is not broken, it is my opinion that the 64-bit block size 
of 3DES is not adequate for today's requirements. In this sense, it is 
not broken, but obsolete.

Thanks
jim


Re: Is 3DES Broken?

2005-02-02 Thread james hughes
On Jan 31, 2005, at 10:38 PM, Steven M. Bellovin wrote:
When using CBC mode, one should not encrypt more than 2^32 64-bit
blocks under a given key.  That comes to ~275G bits, which means that
on a GigE link running flat out you need to rekey at least every 5
minutes, which is often impractical.  Since I've seen Gigabit Ethernet
cards for US$25, this bears thinking about -- and while 10GigE is
still too expensive for most people, its prices are dropping rapidly.
With 10GigE, you'd have to rekey every 27.5 seconds...
For reference purposes, with AES you'd be safe for 2^64*128 bits.
That's a Big Number of seconds.
		--Prof. Steven M. Bellovin, http://www.cs.columbia.edu/~smb
I would also like to reinforce Prof. Bellovin's comment that the 3DES 
block size is too small.

In bulk storage system encryption, 3DES will require rekey every 
~32 GBytes (2^32 64-bit blocks). Most PCs have more storage than this.

With AES the number is ~250 Exabytes (which is 250 billion gigabytes).
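As a cross-check of these orders of magnitude, taking the CBC limit as 2^(n/2) blocks for an n-bit block cipher (which gives ~32 GBytes for 3DES):

```python
# Back-of-the-envelope check of the figures in this thread, with the
# birthday bound taken as 2^(n/2) blocks for an n-bit block cipher.
def cbc_data_limit_bytes(block_bits):
    blocks = 2 ** (block_bits // 2)
    return blocks * (block_bits // 8)

des3 = cbc_data_limit_bytes(64)    # 3DES: 64-bit blocks
aes = cbc_data_limit_bytes(128)    # AES: 128-bit blocks

print(des3 / 2**30)  # -> 32.0  (GiB before rekey with 3DES)
print(aes / 2**60)   # -> 256.0 (EiB with AES, i.e. ~"250 Exabytes")

# Time to hit the 3DES limit on a link running flat out:
gige = 1e9 / 8                     # bytes/second on gigabit Ethernet
print(round(des3 / gige / 60, 1))  # -> 4.6 (minutes at GigE line rate)
```

The last figure matches Prof. Bellovin's "rekey at least every 5 minutes" on GigE.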
Thanks!
jim


Fwd: The PoinFULLness of the MD5 'attacks'

2004-12-22 Thread james hughes
For this discussion, I think we are missing the point here...
1. With a rogue binary distribution with a correct hash, this is -at 
least- a denial of service: the customer will install the rogue binary 
and it will crash in the area where the information was changed. 
MD5-based Tripwire will not catch this if it is done on the 
distribution machine.

2. If the rogue binary is a driver that gets into the kernel, it could 
cause a crash and that crash -could- cause a kernel exploit.

3. A modification to a seldom-used section of code, invoked in some 
non-typical way, can cause a machine to crash on command.

4. Offline, the attacker could automate a trial-and-error scheme to 
create random collisions, testing each until one produces an effect 
to their advantage, and then substitute it for the real one.

Again, anything that gives the legitimate user a feeling of security 
(because the hashes match) and allows the attacker to do anything to 
their advantage is a failure of the MD5 algorithm.

Maybe these are low probabilities... Are you willing to step up and say 
there is nothing that the attacker can ever do using these collisions? 
I'm not.

My suggestion is that all distributions publish the MD5 and SHA-256 
hashes for a while and then drop the MD5 based ones in a year or so.
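
Publishing both digests is straightforward; a minimal sketch using Python's hashlib (the `dual_digest` helper name and chunk size are illustrative):

```python
# Compute both the legacy MD5 digest and the SHA-256 digest of a
# distribution file in a single pass, as suggested above.
import hashlib

def dual_digest(path: str, chunk: int = 1 << 16) -> tuple[str, str]:
    """Return (md5_hex, sha256_hex) for a file, read in chunks."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            md5.update(block)
            sha256.update(block)
    return md5.hexdigest(), sha256.hexdigest()
```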

thanks
jim

On Dec 15, 2004, at 10:45 AM, Tim Dierks wrote:
On Wed, 15 Dec 2004 08:51:29 +, Ben Laurie [EMAIL PROTECTED] 
wrote:
People seem to be having a hard time grasping what I'm trying to say, 
so
perhaps I should phrase it as a challenge: find me a scenario where 
you
can use an MD5 collision to mount an attack in which I could not mount
an equally effective attack without using an MD5 collision.
Here's an example, although I think it's a stupid one, and agree with
you that the MD5 attack, as it's currently known to work, isn't a
material problem (although it's a clear signal that one shouldn't use
MD5):
I send you a binary (say, a library for doing AES encryption) which
you test exhaustively using black-box testing. The library is known
not to link against any external APIs (in fact, perhaps it's
implemented in a language and runtime with a decent security sandbox
model, e.g., Java). You then incorporate it into your application and
sign the whole thing with MD5+RSA to vouch for its accuracy.
I incorporate several copies of a suitable MD5 collision block in my
library, so one of them will be at the correct 64-byte block boundary.
I can then modify bits inside of my library, which are checked by the
library code, changing the functionality of the library,
yet the signature will still verify.
This would be pretty easy to do as a proof-of-concept, but I don't
have the time.
- Tim
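
The attack Tim describes relies on the extension property of Merkle-Damgård hashes such as MD5: once two equal-length inputs collide, appending any common suffix preserves the collision. A toy construction illustrates the principle; the compression function below is deliberately trivial and is NOT MD5's real one:

```python
# Toy Merkle-Damgård hash: process the message in 64-byte blocks,
# chaining a 32-bit state through a (deliberately weak) compression
# function. Any collision in one block survives a shared suffix.

BLOCK = 64

def toy_compress(state: int, block: bytes) -> int:
    # Weak on purpose: depends only on the byte sum of the block.
    return (state * 31 + sum(block)) & 0xFFFFFFFF

def toy_md_hash(msg: bytes, iv: int = 0x67452301) -> int:
    state = iv
    for i in range(0, len(msg), BLOCK):
        state = toy_compress(state, msg[i:i + BLOCK])
    return state

# Two different blocks that collide under toy_compress (equal byte sum):
block_a = bytes([1, 3]) + bytes(62)
block_b = bytes([2, 2]) + bytes(62)
assert block_a != block_b

# Appending identical "library code" keeps the digests identical,
# so a signature over the hash still verifies for both variants.
suffix = b"rest of the library code".ljust(BLOCK, b"\0")
assert toy_md_hash(block_a + suffix) == toy_md_hash(block_b + suffix)
```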


Re: IBM's original S-Boxes for DES?

2004-10-04 Thread james hughes
In a personal interview, Walt Tuchman (IBM at the time; he worked for 
StorageTek when I met him, now retired) described the process for 
creating the S-boxes. A set of mathematical requirements was created, 
and candidate S-boxes meeting these requirements were printed out 
on a regular basis. The process ran over a weekend on a 360/195, and 
the results were given to the ASIC developers to determine which would 
result in the smallest ASIC size. One was selected by them. I was told 
that after the requirements were set, the NSA did not have a hand in 
selecting the final S-boxes.

jim
http://www.stortek.com/hughes
On Sep 30, 2004, at 12:25 PM, Steven M. Bellovin wrote:
In message [EMAIL PROTECTED], 
Nicolai Moles
-Benfell writes:
Hi,
A number of sources state that the NSA changed the S-Boxes (and 
reduced the key size) of IBM's original DES submission, and that 
these changes were made to strengthen the cipher against 
differential/linear/?? cryptanalysis.

Does anybody have a reference to, or have an electronic copy of, these 
original S-Boxes?

It was only to protect against differential cryptanalysis; they did not
know about linear cryptanalysis.  See Don Coppersmith, "The Data 
Encryption Standard (DES) and its strength against attacks", IBM 
Journal of Research and Development, Vol. 38, No. 3, pp. 243-250, 
May 1994.

--Steve Bellovin, http://www.research.att.com/~smb



Re: references on traffic analysis?

2004-09-10 Thread james hughes
On Sep 7, 2004, at 11:12 PM, Steve Bellovin wrote:
What are some of the classic, must-read, references on traffic 
analysis?
(I'm familiar with the Zendian problem, of course.)
In looking through my library, I came across two references (I would 
not say 'must read' though).

The Codebreakers (David Kahn) has several short real-world examples. It 
is not a treatise per se, but it is interesting.

The Hut Six Story: Breaking the Enigma Codes (Gordon Welchman) 
describes many aspects of traffic analysis as a precursor to an 
actual cryptanalytic attack.

I don't know any study texts on this subject.
Thanks
jim


CRYPTO2004 Rump Session Presentations, was Re: A collision in MD5'

2004-08-17 Thread james hughes
Hello:
This is Jim Hughes, General Chair of CRYPTO2002. There are three 
significant Rump session papers on hash collisions that will be 
presented, including an update on this one (and about 40 other short 
papers on other aspects of cryptography). As the session firms up, more 
information will be posted at

http://www.iacr.org/conferences/crypto2004/rump.html
Barring technical or other difficulties, if you want to hear this from 
the horse's mouth, the CRYPTO2004 Rump Session will be webcast at 7 PM 
Pacific, Tuesday, Aug 17, for as long as it takes. You may join us 
virtually using the following links (depending on your player).

Internet Explorer
http://128.111.55.99/crypto.htm 
Microsoft media server
mms://128.111.55.99/crypt
The players (for MS and Mac) are available from
http://www.microsoft.com/windows/windowsmedia/players.aspx
I assume MS clients will be able to cope. I know that my MacOSX machine 
with Windows Media Player can use the mms: link. I welcome feedback 
from anyone using other readers on other platforms like Linux.

The server is currently up and running and is broadcasting a dark, 
empty, and silent hall. This should be more interesting after sunup 
Tuesday Santa Barbara time. You may expect sound near the start time.

This is the conference's first webcast, and I hope that it works for 
you. If there are problems, I apologize in advance.

Thanks
jim

On Aug 16, 2004, at 9:02 PM, Eric Rescorla wrote:
I've now successfully reproduced the MD5 collision result. Basically
there are some endianness problems.
The first problem is the input vectors. They're given as hex words, but
MD5 is defined in terms of bitstrings. Because MD5 is little-endian, 
you
need to reverse the written byte order to generate the input data. A
related problem is that some of the words are given as only 7 hex
digits. Assuming that they have a leading zero fixes that
problem. Unfortunately, this still doesn't give you the right hash
value.

The second problem, which was found by Steve Burnett from Voltage
Security, is that the authors aren't really computing MD5. The
algorithm is initialized with a certain internal state, called an
Initialization Vector (IV). This vector is given in the MD5 RFC as:
word A: 01 23 45 67
word B: 89 ab cd ef
word C: fe dc ba 98
word D: 76 54 32 10
but this is little-endian format. So, the actual initialization values
should be 0x67452301, etc...
The authors use the values directly, so they use: 0x01234567,
etc... Obviously, this gives you the wrong hash value. If you use these
wrong IVs, you get a collision... though strangely with a different 
hash
value than the authors provide. Steve and I have independently gotten
the same result, though of course we could have made mistakes...

So, this looks like it isn't actually a collision in MD5, but rather in
some other algorithm, MD5'. However, there's nothing special about the
MD5 IV, so I'd be surprised if the result couldn't be extended to real
MD5.
-Ekr
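
The byte-order point in Ekr's message can be checked directly. This sketch only illustrates the little-endian reading of the IV bytes given in the MD5 RFC (RFC 1321):

```python
# The RFC lists word A of the MD5 IV as the bytes "01 23 45 67".
# Those bytes are a little-endian encoding of the 32-bit word
# 0x67452301, not the literal value 0x01234567.
import struct

iv_bytes = bytes([0x01, 0x23, 0x45, 0x67])

(word_a,) = struct.unpack("<I", iv_bytes)   # little-endian 32-bit read
assert word_a == 0x67452301                 # correct initialization value

# Reading the same bytes big-endian yields the value the paper's
# authors used directly, producing the MD5' variant described above:
(wrong_a,) = struct.unpack(">I", iv_bytes)
assert wrong_a == 0x01234567
```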


Re: CRYPTO2004 Rump Session Presentations, was Re: A collision in MD5'

2004-08-17 Thread james hughes
I have 2 items of note for this list.
1. The web site is updated with program and the times.
http://www.iacr.org/conferences/crypto2004/rump.html
2. I was typing fast, and mistyped my title. I am General Chair this 
year, not 2002 as was stated.

Enjoy.

On Aug 17, 2004, at 1:39 PM, james hughes wrote:
Yes, my mistake. The link has an 'o' at the end.
mms://128.111.55.99/crypto



Re: Al Qaeda crypto reportedly fails the test

2004-08-15 Thread james hughes
In message [EMAIL PROTECTED], John Denker writes:
Here's a challenge directly relevant to this group:  Can you
design a comsec system so that pressure against a code clerk
will not do unbounded damage?  What about pressure against a
comsec system designer?
If I understand your question correctly, in 1994 a VPN product was 
fielded that had this capability. It did not have any capability for 
static group or tunnel keys: it was RSA/DH only, using DH for the 
tunnel key and RSA solely for authentication. The device had perfect 
forward secrecy because the use of RSA disclosed nothing about the 
tunnel keys, and complete disclosure of the RSA secret would only 
divulge that the D-H was authentic. The DH private keys were use-once 
random values and the public parameters were, well, public. The user 
could set the tunnel lifetime short or long, their choice.
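
The design described above can be sketched as a plain ephemeral Diffie-Hellman exchange. The parameters below are illustrative toys (a Mersenne prime and small generator), not the fielded product's group, and the RSA authentication step is omitted:

```python
# Ephemeral DH sketch: each tunnel gets a fresh use-once key pair,
# so compromising any long-term (RSA) key later reveals nothing
# about past tunnel keys. Toy parameters; NOT a secure group.
import secrets

P = 2**127 - 1   # toy modulus (the Mersenne prime M127)
G = 3            # toy generator

def ephemeral_keypair() -> tuple[int, int]:
    """Use-once private exponent and the corresponding public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# One tunnel setup: each side generates fresh keys, exchanges publics.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# Both sides derive the same shared tunnel key g^(ab) mod p.
tunnel_key_a = pow(b_pub, a_priv, P)
tunnel_key_b = pow(a_pub, b_priv, P)
assert tunnel_key_a == tunnel_key_b

# Forward secrecy comes from discarding a_priv and b_priv after the
# exchange; no long-term secret ever determines the tunnel key.
```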

In this case, the code clerk had no direct access to the key material 
and could not set static keys even if they tried. The box was not 
tamper resistant, but it was not easy to remove the keys even with 
physical access.

The device did not have a group password (current Cisco IPSEC 
vulnerability) and used an invitation scheme to bring new nodes in. 
Link to Cisco notice is here http://tinyurl.com/6jovo

Once the system was fielded, pressure on the systems designer could not 
change this.

In essence, there was no code clerk. One can argue that the network 
administrator is the code clerk, but that person could still wire 
around the VPN device or attach a completely separate backdoor to 
cause, as you say, unbounded damage in a way that does not compromise 
the comsec system.

This was one of the original proposals for IPSEC, but was not selected 
(but that is another story). Subsequent generations of this device are 
still being built and sold from http://www.blueridgenetworks.com/

So, as long as I have understood your question, such systems have 
existed for some time.
