Re: TLS man in the middle

2009-11-08 Thread Sandy Harris
On 11/6/09, mhey...@gmail.com wrote:
> From http://www.ietf.org/mail-archive/web/tls/current/msg03928.html
> and http://extendedsubset.com/?p=8
>
> From what I gather, when TLS client certificates are used, an attacker
> can post a command to a victim server and have it authenticated by a
> legitimate client.


I'm in China and use SSL/TLS for quite a few things: proxy connections,
Gmail set to always use HTTPS, and so on. This is the main defense for
me and many others against the Great Firewall.

Should I be worrying about man-in-the-middle attacks from the Great
Firewall servers?



Re: Truncating SHA2 hashes vs shortening a MAC for ZFS Crypto

2009-11-08 Thread David-Sarah Hopwood
Nicolas Williams wrote:
> On Tue, Nov 03, 2009 at 07:28:15PM +0000, Darren J Moffat wrote:
>> Nicolas Williams wrote:
>>> Interesting.  If ZFS could make sure no blocks exist in a pool from
>>> more than 2^64-1 transactions ago[*], then the txg + a 32-bit
>>> per-transaction block write counter would suffice.  That way Darren
>>> would have to store just 32 bits of the IV.  That way he'd have 352
>>> bits to work with, and then it'd be possible to have a 128-bit
>>> authentication tag and a 224-bit hash.
>>
>> The logical txg (post dedup integration we have physical and logical
>> transaction ids) + a 32-bit counter is interesting.  It was actually
>> my very first design for IVs several years ago!
>> [...]
>> I suspect that sometime in the next 584,542 years the block pointer
>> size for ZFS will increase and I'll have more space to store a bigger
>> MAC, hash and IV.  In fact I guess that will happen even in the next
>> 50 years.
>
> Heh.  txg + 32-bit counter == 96-bit IVs sounds like the way to go.

I'm confused. How does this allow you to do block-level deduplication,
given that the IV (and hence the ciphertext) will be different for every
block even when the plaintext is the same?
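
For concreteness, here is a minimal sketch of the IV construction under
discussion: a 64-bit txg packed together with a 32-bit per-txg block
write counter into 96 bits.  The packing order is an assumption of mine
for illustration, not something specified in the thread.

  import struct

  def make_iv(txg, counter):
      """Pack a 64-bit transaction group id and a 32-bit per-txg block
      write counter into a 96-bit (12-byte) IV.  Big-endian order is
      assumed here purely for illustration."""
      assert 0 <= txg < 2**64 and 0 <= counter < 2**32
      return struct.pack(">QI", txg, counter)

  # The same plaintext written in two different transactions gets two
  # different IVs, hence two different ciphertexts -- which is exactly
  # what appears to defeat dedup of the ciphertext.
  assert make_iv(1000, 0) != make_iv(1001, 0)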

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Crypto dongles to secure online transactions

2009-11-08 Thread John Levine
At a meeting a few weeks ago I was talking to a guy from BITS, the
e-commerce part of the Financial Services Roundtable, about the way
that malware-infected PCs break all banks' fancy multi-password logins:
no matter how complex the login process, a botted PC can wait until
you log in and then send fake transactions during your legitimate
session.  This is apparently a big problem in Europe.

I told him about an approach that uses a security dongle to put the
transaction display and confirmation outside the malware's reach, and
although I thought it was fairly obvious, he'd apparently never heard
of it before.  When I said I'd been thinking about it for a while, he
asked if I could write it up so we could discuss it further.
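
To make the idea concrete, here is a minimal sketch of the dongle-side
confirmation step.  The display and button primitives and the MAC-based
authentication are stand-ins of my own, not what the write-up specifies:

  import hashlib
  import hmac

  DEVICE_KEY = b"key-embedded-in-dongle"  # never leaves the device

  def display(msg):          # stand-in for the dongle's own screen
      print(msg)

  def button_pressed():      # stand-in for a physical button press
      return input("confirm? [y/N] ").strip().lower() == "y"

  def confirm_and_sign(payee, amount_cents):
      """Show the transaction on the dongle's own display and
      authenticate it only after the user presses the button.  A botted
      PC can neither alter what the user sees nor forge the
      confirmation, because both happen outside the PC."""
      display("Pay %s $%.2f?" % (payee, amount_cents / 100.0))
      if not button_pressed():
          return None
      msg = ("%s|%d" % (payee, amount_cents)).encode()
      return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()

The bank then verifies the MAC (or, in a real design, a signature) over
exactly what the user saw and confirmed.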

So before I send it off, if people have a moment could you look at it
and tell me if I'm missing something egregiously obvious?  Tnx.

I've made it an entry in my blog at

http://weblog.johnlevine.com/Money/securetrans.html 

Ignore the 2008 date; it's a temporary fake to keep the post from
showing up on the home page and RSS feed.

R's,
John



Re: Security of Mac Keychain, Filevault

2009-11-08 Thread James A. Donald

Jerry Leichter wrote:
> NFC?

Near Field Communication - the wireless equivalent of
whispering in someone's ear.  Ideally, an NFC chip should
only be able to talk to something that is an inch or so
away, and it should be impossible to eavesdrop from more
than a foot or so away.

Lots of people plan for smartphones to do financial
transactions through NFC.

http://www.intomobile.com/2009/04/10/visa-launches-nfc-service-in-Malaysia.html

: : Malaysians can now use their Nokia (NYSE: NOK) 6212 to make
: : near-field Visa payments -- just wave your phone in front of a
: : sensor and bam, instant buy in over 1,800 shops.

These transactions are reversible and made through
authorized retailers; hence, like the widely shared
secret on a credit card, they need very little
security.  Anyone-to-anyone irreversible transactions
would need considerably higher security, but there
appear to be considerable legal and regulatory obstacles
to that.



hedging our bets -- in case SHA-256 turns out to be insecure

2009-11-08 Thread Zooko Wilcox-O'Hearn

Folks:

We're going to be deploying a new crypto scheme in Tahoe-LAFS next  
year -- the year 2010.  Tahoe-LAFS is used for long-term storage, and  
I won't be surprised if people store files on Tahoe-LAFS in 2010 and  
then rely on the confidentiality and integrity of those files for  
many years or even decades to come.  (People started storing files on  
Tahoe-LAFS in 2008 and so far they show no signs of losing interest  
in the integrity and confidentiality of those files.)


This long-term focus makes Tahoe-LAFS's job harder than the job of  
protecting transient network packets.  If someone figures out in 2020  
or 2030 how to spoof a network transaction that you sent in 2010 (see  
[1]), it'll be far too late to do you any harm, but if they figure  
out in 2030 how to alter a file that you uploaded to a Tahoe-LAFS  
grid in 2010, that might harm you.


Therefore I've been thinking about how to make Tahoe-LAFS robust  
against the possibility that SHA-256 will turn out to be insecure.


A very good way to do this is to design Tahoe-LAFS so that it relies  
as little as possible on SHA-256's security properties.  The property  
that seems to be the hardest for a secure hash function to provide is  
collision-resistance.  We are analyzing new crypto schemes to see how  
many security properties of Tahoe-LAFS we can continue to guarantee  
even if the collision-resistance of the underlying secure hash  
function fails, and similarly for the other properties of the secure  
hash function which might fail [2].


This note is not about that design process, though, but about how to  
maximize the chance that the underlying hash function does provide  
the desired security properties.


We could use a different hash function than SHA-256 -- there are many  
alternatives.  SHA-512 would probably be safer, but it is extremely  
expensive on the cheap, low-power 32-bit ARM CPUs that are one of our  
design targets [3], and the output size of 512 bits is too large to  
fit into Tahoe-LAFS capabilities.  There are fourteen candidates left  
in the SHA-3 contest at the moment.  Several of them have  
conservative designs and good performance, but there is always the  
risk that they will be found to have catastrophic design flaws or  
that a great advance in hash function cryptanalysis will suddenly  
show how to crack them.  Of course, a similar risk applies to SHA-256!


So I turn to the question of how to combine multiple hash functions  
to build a hash function which is secure even if one or more of the  
underlying hash functions turns out to be weak.


I've read several interesting papers on the subject -- such as [4, 5]
and especially "Robust Multi-Property Combiners for Hash Functions
Revisited" by Marc Fischlin, Anja Lehmann, and Krzysztof Pietrzak
[6].  The good news is that it turns out to be doable!  The latter
two papers show nice strong theoretical results -- ways to combine
hash functions so that the resulting combination is as strong as or
stronger than the two underlying hash functions.  The bad news is
that the proposal in [6] builds a combined function whose output is
twice the size of the output of a single hash function.  There is a
good theoretical reason for this [4], but it won't work for our
practical engineering requirements -- we need hash function outputs
as small as possible (partly due to usability issues).


The other bad news is that the construction proposed in [6] is
complicated and underspecified, and its strongest version imposes a
limit on the length of the inputs that you can feed to your hash
function.  It grows to such complexity and incurs such
limitations because it is, if I may call it this, too theoretical.   
It is designed to guarantee certain output properties predicated on  
minimal theoretical assumptions about the properties of the  
underlying hash functions.  This is a fine goal, but in practice we  
don't want to pay such a high cost in complexity and performance in  
order to gain such abstract improvement.  We should be able to hedge  
our bets and achieve a comfortable margin of safety with a very  
simple and efficient scheme by making stronger, less formal, but very  
plausible assumptions about the underlying hash functions.  Read on.


I propose the following combined hash function C, built out of two  
hash functions H1 and H2:


C(x) = H1(H1(x) || H2(x))
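
Here is a minimal sketch of C in Python, with SHA-256 as H1 and, purely
as a stand-in for a second independent hash (e.g. one of the SHA-3
candidates), SHA-512 as H2; the choice of H2 here is an assumption of
mine, not part of the proposal:

  import hashlib

  def combined_hash(x):
      """C(x) = H1(H1(x) || H2(x)) with H1 = SHA-256 and H2 = SHA-512
      (the latter purely illustrative)."""
      h1 = lambda m: hashlib.sha256(m).digest()
      h2 = lambda m: hashlib.sha512(m).digest()
      return h1(h1(x) + h2(x))

  # The combined output is a single SHA-256-sized digest (32 bytes),
  # so it fits anywhere SHA-256 already fits.
  assert len(combined_hash(b"example")) == 32

Note that the output of C is the size of a single H1 digest, which is
the point: we avoid the doubled output size of the concatenation
combiner.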

The first observation is that if H1 is collision-resistant then so is
C: any collision C(x1) = C(x2) with x1 != x2 yields an H1 collision,
either in the outer call (if the concatenated inputs differ) or in
the inner call (if they are equal, then in particular H1(x1) =
H1(x2)).  In practice I would expect to use SHA-256 for H1, so the
resulting combiner C[SHA-256, H2] will be at least as strong as
SHA-256.  (One could even think of this combiner C as just being a
tricky way to strengthen SHA-256 by using the output of H2(x) as a
randomized salt -- see [7].)


The next observation is that finding a pair of inputs x1, x2 which
collide in *both* H1 and in H2 is likely to be much harder than
finding a pair of inputs that collide in H1 and finding a pair of
inputs that collide in H2.