Warning! New cryptographic modes!

2009-05-11 Thread Jerry Leichter


I recently stumbled across two attempts to solve a cryptographic
problem - attempts which have led to what look like rather unfortunate
solutions.


The problem has to do with using rsync to maintain backups of
directories.  rsync tries to transfer a minimum of data by sending
only the differences between new and old versions of files.  Suppose,
however, that I want to keep my backup in the cloud, and I don't
want to expose my data.  That is, what I really want to store is
encrypted versions of my files - and I don't want the server to be
able to decrypt them, even as part of the transfer.  So what I do is
locally encrypt each file before trying to sync it.  However, using
CBC (apparently the only acceptable mode of operation), any change in
a file propagates through the rest of the encrypted file, so rsync
has to transfer everything from the change onward.  The obvious
thing to do is to use a mode that doesn't propagate errors - at least
not too far.
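
To make the propagation concrete, here is a small sketch (mine, not part
of the original post), assuming Python and the third-party "cryptography"
package; the shared key and IV are only for the demonstration.  Flipping
one bit in the first plaintext block changes every later CBC ciphertext
block, which is exactly what defeats rsync's delta transfer:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def cbc_encrypt(key, iv, data):
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return enc.update(data) + enc.finalize()

    key, iv = os.urandom(32), os.urandom(16)
    plain = bytearray(os.urandom(16 * 1000))      # 1000 AES blocks, block-aligned
    old_ct = cbc_encrypt(key, iv, bytes(plain))

    plain[5] ^= 0x01                              # flip one bit in the first block
    new_ct = cbc_encrypt(key, iv, bytes(plain))

    changed = sum(old_ct[i:i+16] != new_ct[i:i+16]
                  for i in range(0, len(old_ct), 16))
    print(changed, "of", len(old_ct) // 16, "ciphertext blocks differ")  # all 1000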


This issue has been faced in the rsync world for compressed files in
the past:  If I try to minimize both byte transfers and space on the
remote system by compressing files before sending them, any adaptive
compression algorithm will tend to propagate a single-byte change to
the end of the file, forcing a big transfer.  The fix that's come to
be accepted - it's part of all recent versions of gzip (the
--rsyncable option) - is to simply reset the compression tables every
so often.  (The actual algorithm is a bit more clever:  It resets the
tables whenever some number of bits of a rolling checksum match a
constant.  This allows resynchronization after insertions or
deletions.)
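
As an illustration of that reset rule (my reconstruction of the idea,
not gzip's actual code), the following sketch marks a "reset point"
whenever the low bits of a simple rolling checksum over a small window
hit a fixed constant.  Because the trigger depends only on nearby bytes,
an insertion shifts later boundaries by the insertion length but leaves
them otherwise in step, which is what lets the two sides resynchronize:

    import os

    WINDOW = 32          # size of the rolling window, an assumed value
    MASK = 0xFFF         # low 12 bits -> a reset roughly every 4 KiB on random data

    def reset_points(data):
        """Yield offsets at which an --rsyncable-style compressor would reset."""
        rolling = 0
        for i, byte in enumerate(data):
            rolling += byte
            if i >= WINDOW:
                rolling -= data[i - WINDOW]   # slide the window forward
            if (rolling & MASK) == MASK:
                yield i + 1                   # reset just after this byte

    blob = bytearray(os.urandom(200_000))
    before = set(reset_points(blob))
    blob[50_000:50_000] = b"some inserted bytes"  # simulate an edit
    after = set(reset_points(blob))
    # Boundaries before offset 50,000 are identical; later ones are simply
    # shifted by the length of the insertion once the window clears it.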


Given this history, both murk (http://murk.sourceforge.net/) and
rsyncrypto (http://sourceforge.net/projects/rsyncrypto/) seem to do
the same basic thing:  Use CBC, but regularly reset the IV.  Neither
project documents its actual algorithm; a quick look at the murk code
suggests that it encrypts 8K blocks using CBC, with an IV computed as
the CRC of the block.  There also seems to be some kind of
authentication checksum, but it's not entirely clear what it covers.
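
For concreteness, here is a rough sketch of what that reading of the
murk code suggests - emphatically my reconstruction, not murk's own
code, and the details (how the 32-bit CRC is stretched to a 16-byte IV,
the padding, what the checksum covers) are assumptions made only for
illustration:

    import os, zlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives import padding

    CHUNK = 8192                                  # the 8K blocks observed above

    def encrypt_chunked(key, data):
        out = []
        for off in range(0, len(data), CHUNK):
            chunk = data[off:off + CHUNK]
            iv = zlib.crc32(chunk).to_bytes(4, "big") * 4   # assumed CRC -> IV stretch
            padder = padding.PKCS7(128).padder()
            padded = padder.update(chunk) + padder.finalize()
            enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
            out.append(enc.update(padded) + enc.finalize())
        return b"".join(out)

    # A change confined to one 8K chunk changes only that chunk's ciphertext,
    # which is the rsync-friendliness being sought.  But the IV is now a
    # deterministic function of the plaintext, so equal chunks encrypt equally
    # and an observer can watch which chunks change from backup to backup -
    # precisely the kind of property that wants real analysis before use.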


So here we have it all:  A new cryptographic mode, documented only in  
C code, being proposed for broad use with no analysis.


In any case, there are obvious, well-understood solutions here:  Use  
counter mode, which propagates changes by a single block of the  
cryptosystem.  Or use any other stream cipher mode.  (An interesting  
question is whether there's a mode that will recover from insertions  
or deletions.  Perhaps something like:  Use counter mode.  If two  
consecutive ciphertext bytes are 0, fill the rest of the ciphertext  
block with 0's, jump the counter by 65536, and insert a special block  
containing the new counter value.)
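
A sketch of the counter-mode alternative (again mine, again assuming the
Python "cryptography" package):  because the keystream at position i
depends only on i, re-encrypting an edited file under the same key and
nonce changes only the bytes that were actually edited.  Note that
reusing the keystream across versions is also exactly the two-time-pad
hazard raised in the reply further down:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def ctr_encrypt(key, nonce, data):
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return enc.update(data) + enc.finalize()

    key, nonce = os.urandom(32), os.urandom(16)
    plain = bytearray(os.urandom(16 * 1000))
    old_ct = ctr_encrypt(key, nonce, bytes(plain))

    plain[5] ^= 0x01                              # the same one-bit edit as before
    new_ct = ctr_encrypt(key, nonce, bytes(plain))

    diff = sum(a != b for a, b in zip(old_ct, new_ct))
    print(diff, "byte(s) of ciphertext differ")   # exactly 1

    # The counter-resynchronization trick proposed above (zero-byte markers,
    # jumping the counter) is not sketched here; it would need its own analysis.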

   -- Jerry




What happened to X9.59?

2009-05-11 Thread Peter Gutmann
I was looking for information on this recently, to update an old reference to
the DSTU version, but it seems to have vanished: there's no information on it
online that I could find after about 2001 or so (apart from a reference to a
2006 version in a conference paper).  The ANSI web site claims that it doesn't
exist, stopping the series at X9.58.

Peter.



Re: Warning! New cryptographic modes!

2009-05-11 Thread Victor Duchovni
On Mon, May 11, 2009 at 02:16:45PM -0400, Roland Dowdeswell wrote:

> > In any case, there are obvious, well-understood solutions here:  Use
> > counter mode, which propagates changes by a single block of the
> > cryptosystem.  Or use any other stream cipher mode.  (An interesting
> > question is whether there's a mode that will recover from insertions
> > or deletions.  Perhaps something like:  Use counter mode.  If two
> > consecutive ciphertext bytes are 0, fill the rest of the ciphertext
> > block with 0's, jump the counter by 65536, and insert a special block
> > containing the new counter value.)
>
> I'm not convinced that a stream cipher is appropriate here because
> if you change the data then you'll reveal the plaintext.

Indeed. If the remote copy is a backup, and the local file-system
supports snapshots, then it is far better to arrange for the remote
backup to always be a copy of a local snapshot: compute the rsync
delta locally, between the local copy of the remote snapshot and a
newer snapshot, and then rsync that delta. Sure, snapshot file-systems
are not yet universal, but given disk-size and file-system trends, I
would base encrypted remote backups on a foundation that assumes the
existence of local before/after images.
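
A minimal sketch of that workflow (my illustration of the data flow, not
Viktor's code, with a deliberately naive fixed-block diff standing in
for the real rsync delta algorithm):  both the before and after images
live locally, so the delta is computed in the clear on the client and
only an encrypted delta, under a fresh nonce, ever leaves the machine:

    import os, json
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    BLOCK = 4096

    def naive_delta(old, new):
        """List the 4 KiB blocks that differ between two local snapshots."""
        changed = [{"offset": off, "data": new[off:off + BLOCK].hex()}
                   for off in range(0, len(new), BLOCK)
                   if old[off:off + BLOCK] != new[off:off + BLOCK]]
        return json.dumps({"new_length": len(new), "blocks": changed}).encode()

    def encrypt_delta(key, delta):
        nonce = os.urandom(16)                    # fresh nonce for every upload
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return nonce + enc.update(delta) + enc.finalize()

    # upload(encrypt_delta(key, naive_delta(snapshot_old, snapshot_new)))
    # where snapshot_old is the local copy of what the server already holds.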

A cipher that produces largely identical ciphertext for largely
identical plaintext in the face of updates, inserts and deletes is
unlikely to be particularly strong.

The CBC IV reset should not be too disastrous if the IV is an encrypted
block counter under a derived key. Drive encryption basically does the
same thing with 512-byte blocks. This fails to handle inserts/deletes
that are not multiples of the chunk size.
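
A sketch of that IV scheme, modelled on the ESSIV construction used by
disk encryption (my illustration; the 512-byte chunk size and the
SHA-256 key derivation are assumptions):  each chunk's IV is its index
encrypted under a key derived from the data key, so IVs are
unpredictable to the server yet reproducible by the client without
storing anything per chunk:

    import hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    SECTOR = 512

    def sector_iv(iv_key, index):
        # ESSIV-style: encrypt the chunk index under a hash of the data key.
        enc = Cipher(algorithms.AES(iv_key), modes.ECB()).encryptor()
        return enc.update(index.to_bytes(16, "big")) + enc.finalize()

    def encrypt_sectors(key, data):
        iv_key = hashlib.sha256(key).digest()
        out = []
        for i, off in enumerate(range(0, len(data), SECTOR)):
            sector = data[off:off + SECTOR]       # assumed block-aligned input
            enc = Cipher(algorithms.AES(key),
                         modes.CBC(sector_iv(iv_key, i))).encryptor()
            out.append(enc.update(sector) + enc.finalize())
        return b"".join(out)

    # A one-byte edit now re-encrypts only its own 512-byte chunk, but - as
    # noted above - an insertion or deletion shifts every later chunk to a new
    # index and hence a new IV, so everything downstream still changes.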

-- 
Viktor.
