Re: AES-CBC + Elephant diffuser

2009-11-01 Thread Darren J Moffat

Eugen Leitl wrote:

We discuss why no existing cipher satisfies the requirements of this
application. Uh-oh.

http://www.microsoft.com/downloads/details.aspx?FamilyID=131dae03-39ae-48be-a8d6-8b0034c92555&DisplayLang=en

AES-CBC + Elephant diffuser

Brief Description

A Disk Encryption Algorithm for Windows Vista

^^^

That is the key issue here: it is a disk encryption algorithm, 
independent of the filesystem that sits above it.


If instead you put the encryption directly into the filesystem, rather 
than below it, then the sector-size restrictions that mean you can't 
easily use a MAC go away.


This is exactly what we have done for ZFS: we use a MAC (the one from 
CCM or GCM mode) as well as a SHA256 hash of the ciphertext (used for 
resilvering operations in RAID), and both are stored in the block 
pointers (not the data blocks), forming a Merkle tree.  We also have a 
place to store an IV.  So every encrypted ZFS block is self-contained, 
with its own IV and a 16-byte MAC.  This means that the crypto for ZFS 
is all standards-based algorithms and modes.
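
To make the per-block shape concrete, here is a minimal Python sketch 
(illustrative only; it uses the pyca/cryptography AESGCM primitive and 
made-up function names, not anything from the actual ZFS code):

import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_zfs_like_block(key: bytes, plaintext: bytes):
    iv = os.urandom(12)                                    # 96-bit IV
    ct_and_tag = AESGCM(key).encrypt(iv, plaintext, None)
    ciphertext, mac = ct_and_tag[:-16], ct_and_tag[-16:]   # 16-byte GCM tag
    checksum = hashlib.sha256(ciphertext).digest()         # for resilvering
    # iv, mac and checksum would live in the block pointer (the Merkle
    # tree); only the ciphertext is stored in the data block itself.
    return iv, ciphertext, mac, checksum

key = AESGCM.generate_key(bit_length=256)
iv, ct, mac, chk = encrypt_zfs_like_block(key, b"\x00" * 4096)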


http://hub.opensolaris.org/bin/view/Project+zfs-crypto/

--
Darren J Moffat

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: AES-CBC + Elephant diffuser

2009-11-01 Thread Paul Hoffman
At 2:24 PM +0100 10/29/09, Eugen Leitl wrote:
>We discuss why no existing cipher satisfies the requirements of this
>application. Uh-oh.

Yeah, we all know what a light-weight and careless person Niels Ferguson is. 
Who would listen to him?

--Paul Hoffman, Director
--VPN Consortium



Re: AES-CBC + Elephant diffuser

2009-11-01 Thread Eugen Leitl
On Thu, Oct 29, 2009 at 07:15:53AM -0700, Paul Hoffman wrote:
> At 2:24 PM +0100 10/29/09, Eugen Leitl wrote:
> >We discuss why no existing cipher satisfies the requirements of this
> >application. Uh-oh.
> 
> Yeah, we all know what a light-weight and careless person Niels Ferguson is.
> Who would listen to him?

Ah, should have spent a few seconds looking him up
http://en.wikipedia.org/wiki/Niels_Ferguson
http://www.macfergus.com/

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



deterministic random numbers in crypto protocols -- Re: Possibly questionable security decisions in DNS root management

2009-11-01 Thread Zooko Wilcox-O'Hearn

On 2009 Oct 19, at 9:15, Jack Lloyd wrote:

> On Sat, Oct 17, 2009 at 02:23:25AM -0700, John Gilmore wrote:
>> DSA was (designed to be) full of covert channels.
>
> one can make DSA deterministic by choosing the k values to be
> HMAC-SHA256(key, H(m))


I've noticed people tinkering with (EC) DSA by constraining that  
number k.  For example, Wei Dai's Crypto++ library generates k by  
hashing in the message itself as well as a timestamp into an RNG:


http://allmydata.org/trac/cryptopp/browser/c5/pubkey.h?rev=324#L1036
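
To make that pattern concrete, here is a minimal Python sketch of the 
general idea (hash the message together with some fresh randomness, then 
derive k from the result).  It is only a sketch, not Crypto++'s actual 
code; the function name is made up and q stands for the group order:

import os, hmac, hashlib

def derive_k_mixed(priv_key: bytes, message: bytes, q: int) -> int:
    # Mix the message hash with fresh randomness before deriving k.
    seed = hashlib.sha256(message).digest() + os.urandom(32)
    k = int.from_bytes(hmac.new(priv_key, seed, hashlib.sha256).digest(), "big")
    # Naive reduction; a real implementation would avoid the slight bias.
    return (k % (q - 1)) + 1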

Wei Dai's motivation for this is to deal with the case that there is  
a rollback of the random number generator, which has always been  
possible and nowadays seems increasingly likely because of the rise  
of virtualization.  See also Scott Yilek: http://eprint.iacr.org/2009/474 , 
which appears to be a formal argument that this technique is  
secure (but I suspect that Scott Yilek and Wei Dai are unaware of one  
another's work).  Yilek's work is motivated by virtual machines, but  
one should note that the same issues have bedeviled normal old  
physical machines for years.


Since the Dai/Yilek approach also uses an RNG it is still a covert  
channel, but one could easily remove the RNG part and just use the  
hash-of-the-message part.
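
A message-only variant, in the spirit of Jack Lloyd's 
HMAC-SHA256(key, H(m)) suggestion, is the same sketch with the randomness 
dropped (again illustrative only; a deployed scheme would want an unbiased 
reduction into [1, q-1]):

import hmac, hashlib

def derive_k_deterministic(priv_key: bytes, message: bytes, q: int) -> int:
    h = hashlib.sha256(message).digest()
    k = int.from_bytes(hmac.new(priv_key, h, hashlib.sha256).digest(), "big")
    return (k % (q - 1)) + 1   # same key and message always give the same k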


I'm beginning to think that *in general* when I see a random number  
required for a crypto protocol then I want to either  
deterministically generate it from other data which is already  
present or to have it explicitly provided by the higher-layer  
protocol.  In other words, I want to constrain the crypto protocol  
implementation by forbidding it to read the clock or to read from a  
globally-available RNG, thus making that layer deterministic.


This facilitates testing, which would help to detect implementation  
flaws like the OpenSSL/Debian fiasco.  It also avoids covert channels  
and can avoid relying on an RNG for security.  If the random numbers  
are generated fully deterministically then it can also provide  
engineering advantages because of convergence of the output -- that  
two computations of the same protocol with the same inputs yield the  
same output.


Now, Yilek's paper argues for the security of generating the needed  
random number by hashing together *both* an input random number (e.g.  
from the system RNG) *and* the message.  This is exactly the  
technique that Wei Dai has implemented.  I'm not sure how hard it  
would be to write a similar argument for the security of my proposed  
technique of generating the needed random number by hashing just the  
message.  (Here's a crack at it: Yilek proves that the Dai technique  
is secure even when the system RNG fails and gives you the same  
number more than once, right?  So then let's hardcode the system RNG  
to always give you the random number 4.  QED :-))


Okay, aside from the theoretical proofs, the engineering question  
facing me is "What's more likely: RNG failure or novel cryptanalysis  
that exploits the fact that the random number isn't truly random but  
is instead generated, e.g. by a KDF from other secrets?"  No  
contest!  The former is common in practice and the latter is probably  
impossible.


Minimizing the risk of the latter is one reason why I am so  
interested in KDF's nowadays, such as the recently proposed HKDF:  
http://webee.technion.ac.il/~hugo/kdf/kdf.pdf .


On Tuesday, 2009-10-20, at 15:45, Greg Rose wrote:

> Ah, but this doesn't solve the problem; a compliant implementation
> would be deterministic and free of covert channels, but you can't
> reveal enough information to convince someone *else* that the
> implementation is compliant (short of using zero-knowledge proofs,
> let's not go there). So a hardware nubbin could still leak
> information.


Good point!  But can't the one who verifies the signature also verify  
that the k was generated according to the prescribed technique?


Regards,

Zooko

P.S.  If you read this letter all the way to the end then please let  
me know.  I try to make them short, but sometimes I think they are  
too long and make too many assumptions about what the reader already  
knows.  Did this message make sense?




re: Security of Mac Keychain, Filevault

2009-11-01 Thread Jerry Leichter
A couple of days ago, I pointed to an article claiming that these were  
easy to break, and asked if anyone knew of security analyses of these  
facilities.


I must say, I'm very disappointed with the responses.  Almost everyone  
attacked the person quoted in the article.  The attacks they assumed  
he had in mind were dismissed as unproven or unimportant or  
insignificant.  Gee ... sounds *exactly* like the response you get from  
companies when someone finds a vulnerability in their products:  "It's  
not proven; who is this person anyway; even if there is an attack, it  
isn't of any practical importance."


Meanwhile, I know many of us on this list use Macs, and many of us  
rely on keychain and Filevault, or at least on encrypted disk images.   
On what rational basis do we rely on these?  The only analysis of  
Filevault that I know of is Appelbaum and Weinmann's http://crypto.nsa.org/vilefault/23C3-VileFault.pdf 
, which dates back to 2006, two releases of Mac OS ago.  (It found the  
basic mechanisms sound, with some problems around the edges.)  I'm not  
aware of any analyses of Keychain, although keychains can be  
extremely high-value.  If no one on this list is aware of any  
analyses, I'd guess they just don't exist.


Over all, Apple's designs and implementations of security code have  
been good, but hardly perfect.  (Witness the recent questionable  
implementation of encryption on the iPhone 3GS.)  So these are  
legitimate issues.  Meanwhile, I'm sure many of us have potentially  
high-value passwords - like our Mobile Me password - stored in our  
iPhones and iPod Touches.  How safe is that?  I have yet to see any  
analysis of that question either (though I suspect the answer is "not  
very").

-- Jerry



Truncating SHA2 hashes vs shortening a MAC for ZFS Crypto

2009-11-01 Thread Darren J Moffat
For the encryption functionality in the ZFS filesystem we use AES in CCM 
or GCM mode at the block level to provide confidentiality and 
authentication.  There is also a SHA256 checksum per block (of the 
ciphertext) that forms a Merkle tree of all the blocks in the pool. 
Note that I have to store the full IV in the block.   A block here is a 
ZFS block, which can be any power of two from 512 bytes to 128k (128k is 
the default).


The SHA256 checksums are used even for blocks in the pool that aren't 
encrypted and are used for detecting and repairing (resilvering) block 
corruption.  Each filesystem in the pool has its own wrapping key and 
data encryption keys.


Due to some unchangeable constraints I have only 384 bits of space to 
fit in all of: IV, MAC (CCM or GCM Auth Tag), and the SHA256 checksum, 
which best case would need about 480 bits.
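
Concretely, that best case is 96 (IV) + 128 (full CCM/GCM tag) + 256 
(full SHA256) = 480 bits against the 384 available, so at least one of 
the three has to shrink.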


Currently I have Option 1 below, but the truncation of SHA256 down to 
128 bits makes me question whether this is safe.  Remember the SHA256 is 
of the ciphertext and is used for resilvering.


Option 1

IV        96 bits  (the max CCM allows given the other params)
MAC       128 bits
Checksum  SHA256 truncated to 128 bits

Other options are:

Option 2

IV        96 bits
MAC       128 bits
Checksum  SHA224 truncated to 128 bits

Basically, if I have to truncate to 128 bits, is it better to do
it against SHA224 or SHA256?

Option 3

IV        96 bits
MAC       128 bits
Checksum  SHA224 or SHA256 truncated to 160 bits

Obviously better than 1 and 2, but how much better?
The reason it isn't used just now is that it is slightly
harder to lay out given other constraints on where the data lives.

Option 4

IV        96 bits
MAC       32 bits
Checksum  SHA256 at full 256 bits

I'm pretty sure the size of the MAC is far too small.

Option 5

IV        96 bits
MAC       64 bits
Checksum  SHA224 at full 224 bits

This feels like the best compromise, but is it?

Option 6

IV        96 bits
MAC       96 bits
Checksum  SHA224 or SHA256 truncated to 192 bits
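
For reference, here is a quick sketch of the bit budget for each option 
(just arithmetic, nothing ZFS-specific; it assumes truncation keeps the 
leading bits of the digest, e.g. hashlib.sha256(ct).digest()[:16] for 
128 bits):

# IV, MAC and checksum widths in bits for options 1-6, against 384 available.
options = {
    1: (96, 128, 128),   # SHA256 truncated to 128 bits
    2: (96, 128, 128),   # SHA224 truncated to 128 bits
    3: (96, 128, 160),
    4: (96,  32, 256),
    5: (96,  64, 224),
    6: (96,  96, 192),
}
for n, (iv, mac, cksum) in options.items():
    total = iv + mac + cksum
    print("option %d: %d bits (%s)" % (n, total, "fits" if total <= 384 else "over"))

All six fit in 384 bits; options 3 through 6 use every bit of it.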

--
Darren J Moffat
