Re: Warning! New cryptographic modes!

2009-05-24 Thread Alexander Klimov
On Sun, 10 May 2009, Jerry Leichter wrote:
> The problem has to do with using rsync to maintain backups of
> directories.  rsync tries to transfer a minimum of data by sending
> only the differences between new and old versions of files.
> Suppose, however, that I want to keep my backup "in the cloud", and
> I don't want to expose my data.  That is, what I really want to
> store is encrypted versions of my files - and I don't want the
> server to be able to decrypt, even as part of the transfer.

IMO the simplest solution is to rsync a file container encrypted by
TrueCrypt (each block is separately encrypted by XTS).
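
A rough sketch of the idea (per-sector XTS, with the sector number as the
tweak), in Python with the "cryptography" package -- not TrueCrypt's actual
implementation, just an illustration of why only the changed sectors of the
container need to be re-sent:

# Rough sketch of sector-level XTS, NOT TrueCrypt's actual code.
# Requires the third-party "cryptography" package; names are illustrative.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR = 512
key = os.urandom(64)          # AES-256-XTS uses a 512-bit (double) key

def encrypt_sector(sector_index: int, plaintext: bytes) -> bytes:
    assert len(plaintext) == SECTOR
    tweak = sector_index.to_bytes(16, "little")   # tweak = data-unit number
    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return enc.update(plaintext) + enc.finalize()

def decrypt_sector(sector_index: int, ciphertext: bytes) -> bytes:
    tweak = sector_index.to_bytes(16, "little")
    dec = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
    return dec.update(ciphertext) + dec.finalize()

# Because each sector is encrypted independently under its own tweak,
# a small change to the container touches only the affected sectors,
# which is what lets rsync transfer just those sectors.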

-- 
Regards,
ASK

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-22 Thread Zooko Wilcox-O'Hearn
For what it is worth, in the Tahoe-LAFS project [1] we simply use CTR  
mode and a unique key for each file.  Details: [2]


Tahoe-LAFS itself doesn't do any deltas, compression, etc., but there  
are two projects layered atop Tahoe to add such features -- a plugin  
for duplicity [3] and a new project named GridBackup [4].


Those upper layers can treat Tahoe-LAFS as a secure store of
whole files and therefore don't have to think about details like
cipher modes of operation, nor do they even have to think very hard
about key management, thanks to Tahoe-LAFS's convenient
capability-based access control scheme.
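
As a rough illustration of that approach -- a fresh random key per file,
AES in CTR mode -- here is a sketch in Python using the "cryptography"
package.  It is not Tahoe-LAFS code; see [2] for the real design.

# Illustrative only: one random key per file, AES in CTR mode.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_file(plaintext: bytes):
    key = os.urandom(32)              # unique key for this file
    nonce = b"\x00" * 16              # safe only because the key is never reused
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return key, enc.update(plaintext) + enc.finalize()

def decrypt_file(key: bytes, ciphertext: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.CTR(b"\x00" * 16)).decryptor()
    return dec.update(ciphertext) + dec.finalize()

# A change to byte i of the plaintext changes only byte i of the
# ciphertext, so an rsync-style delta stays small -- at the cost of
# revealing the XOR of old and new plaintext if the same key stream is
# ever reused on modified data (the concern raised elsewhere in this thread).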


Regards,

Zooko

[1] http://allmydata.org
[2] http://allmydata.org/trac/tahoe/browser/docs/architecture.txt
[3] http://duplicity.nongnu.org
[4] http://podcast.utos.org/index.php?id=52

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-22 Thread james hughes
I believe that mode has been renamed EME2 because people were having a  
fit over the *.


On May 14, 2009, at 12:37 AM, Jon Callas wrote:
I'd use a tweakable mode like EME-star (also EME*) that is designed  
for something like this. It would also work with 512-byte blocks.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-21 Thread Jon Callas
I'd use a tweakable mode like EME-star (also EME*) that is designed  
for something like this. It would also work with 512-byte blocks.


Jon

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-21 Thread Jerry Leichter

On May 11, 2009, at 8:27 PM, silky wrote:

> The local version needs access to the last committed file (to compare
> the changes) and the server version only keeps the 'base' file and the
> 'changes' subsets.

a)  What's a "committed" file?
b)  As in my response to Victor's message, note that you can't keep a  
base plus changes forever - eventually you need to resend the base.   
And you'd like to do that efficiently.



> So yes, it does increase the amount of space required locally (not a
> lot though, unless you are changing often and not committing),
Some files change often.  There are files that go back and forth  
between two states.  (Consider a directory file that contains a lock  
file for a program that runs frequently.)  The deltas may be huge, but  
they all collapse!



> and
> will also increase the amount of space required on the server by 50%,
> but you need to pay the cost somewhere, and I think disk space is surely
> the cheapest cost to pay.
A large percentage increase - and why 50%? - scales up with the amount  
of storage.  There are, and will for quite some time continue to be,  
applications that are limited by the amount of disk one can afford to  
throw at them.  Such an approach drops the maximum size of file the  
application can deal with by 50%.


I'm not sure what cost you think needs to be paid here.  Ignoring  
encryption, an rsync-style algorithm uses little local memory (I think  
the standard program uses more than it has to because it always works  
on whole files; it could subdivide them) and transfers close to the  
minimum you could possibly transfer.


> > If you want the delta computation to be done locally, you need two local
> > copies of the file - doubling your disk requirements.  In principle, you
> > could do this only at file close time, so that you'd only need such a copy
> > for files that are currently being written or backed up.  What happens if
> > the system crashes after it's updated but before you can back it up?  Do you
> > need full data logging?
>
> I think this is resolved by saving only the last committed.

If a file isn't committed when closed, then you're talking about any
commonly-used system.

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-21 Thread silky
On Tue, May 12, 2009 at 10:22 AM, Jerry Leichter  wrote:
> On May 11, 2009, at 7:06 PM, silky wrote:
> > How about this.
> >
> > When you modify a file, the backup system attempts to see if it can
> > summarise your modifications into a file that is, say, less than 50%
> > of the file size.
> >
> > So if you modify a 10kb text file and change only the first word, it
> > will encrypt that component (the word you changed) on its own, and
> > upload that separately from the file. On the other end, it will have a
> > system for merging these changes when a file is "decrypted". It will
> > actually be prepared and decrypted (so all operations of this nature
> > must be done *within* the system).
> >
> > Then, when it reaches a critical point in file changes, it can just
> > upload the entire file anew, and replace its "base" copy and all
> > the "parts".
> >
> > Slightly more difficult with binary files where the changes are spread
> > out over the file, but if these changes can still be "summarised"
> > relatively trivially, it should work.
>
> To do this, the backup system needs access to both the old and new version
> of the file.  rsync does, because it is inherently sync'ing two copies,
> usually on two different systems, and we're doing this exactly because we
> *want* that second copy.

The local version needs access to the last committed file (to compare
the changes) and the server version only keeps the 'base' file and the
'changes' subsets.

So yes, it does increase the amount of space required locally (not a
lot though, unless you are changing often and not committing), and
will also increase the amount of space required on the server by 50%,
but you need to pay the cost somewhere, and I think disk space is surely
the cheapest cost to pay.


> If you want the delta computation to be done locally, you need two local
> copies of the file - doubling your disk requirements.  In principle, you
> could do this only at file close time, so that you'd only need such a copy
> for files that are currently being written or backed up.  What happens if
> the system crashes after it's updated but before you can back it up?  Do you
> need full data logging?

I think this is resolved by saving only the last committed.


> Victor Duchovni suggested using snapshots, which also give you the effect of
> a local copy - but sliced differently, as it were, into blocks written to
> the file system over some defined period of time.  Very useful, but both it
> and any other mechanism must sometimes deal with worst cases - an erase of
> the whole disk, for example; or a single file that fills all or most of the
> disk.
>
>                                                        -- Jerry

-- 
noon silky

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-21 Thread Darren J Moffat

Jerry Leichter wrote:
> To support insertions or deletions of full blocks, you can't make the
> block encryption depend on the block position in the file, since that's
> subject to change.  For a disk encryptor that can't add data to the
> file, that's a killer; for an rsync pre-processor, it's no big deal -
> just store the necessary key-generation or tweak data with each block.
> This has no effect on security - the position data was public anyway.


That is basically what I'm doing in adding encryption to ZFS[1].  Each 
ZFS block in an encrypted dataset is encrypted with a separate IV and 
has its own AES-CCM MAC, both of which are stored in the block pointer 
(the whole encrypted block is then checksummed with an unkeyed SHA256, 
which forms a Merkle tree).
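
A toy sketch of that per-block layout (per-block IV, AES-CCM MAC, and an
unkeyed SHA-256 over the ciphertext as the Merkle-tree leaf), in Python
with the "cryptography" package -- illustrative only, not the ZFS code:

# Illustrative per-block encryption in the style described above,
# NOT the ZFS crypto implementation.
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = os.urandom(32)

def encrypt_block(block: bytes):
    nonce = os.urandom(12)                       # per-block IV
    ct = AESCCM(key, tag_length=16).encrypt(nonce, block, None)  # ciphertext||MAC
    leaf = hashlib.sha256(ct).digest()           # unkeyed checksum, Merkle leaf
    # the nonce and MAC would live in the block pointer; the leaf goes
    # into the checksum tree
    return nonce, ct, leaf

def decrypt_block(nonce: bytes, ct: bytes) -> bytes:
    return AESCCM(key, tag_length=16).decrypt(nonce, ct, None)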


> To handle smaller inserts or deletes, you need to ensure that the
> underlying blocks "get back into sync".  The gzip technique I mentioned
> earlier works.  Keep a running cryptographically secure checksum over
> the last blocksize bytes.


ZFS already supports gzip compression, but only on ZFS blocks, not on 
files, so it doesn't need to do this trick.  The downside is that we don't 
get as good compression as when you can look at the whole file.


ZFS has its own replication system in its send/recv commands (which take 
a ZFS dataset and produce either a full object change list or a delta 
between snapshots).  My plan for this is to be able to send the per-block 
changes as ciphertext so that we don't have to decrypt and re-encrypt the 
data.  Note this doesn't help rsync, though, since the stream format is 
specific to ZFS.


[1] http://opensolaris.org/os/project/zfs-crypto/

--
Darren J Moffat

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-21 Thread James A. Donald

Jerry Leichter wrote:
> Consider first just updates.  Then you have exactly the same problem as
> for disk encryption:  You want to limit the changes needed in the
> encrypted image to more or less the size of the change to the underlying
> data.  Generally, we assume that the size of the encrypted change for a
> given contiguous range of changed underlying bytes is bounded roughly by
> rounding the size of the changed region up to a multiple of the
> blocksize.  This does reveal a great deal of information, but there
> isn't any good alternative.


You specified a good alternative:  Encrypted synchronization of a file 
versioning system:


Git runs under SSH.

Suppose the files are represented as the original values of the files, 
plus deltas.  If the originals are encrypted, and the deltas encrypted, 
no information is revealed other than the size of the change.


Git is scriptable; write a script to do the job.
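
A sketch of what such a script might do -- compute the delta locally and
encrypt both the base and each delta, so only ciphertext and the rough size
of the change are exposed.  The helper below is hypothetical (it is not a
git facility); it uses difflib for the delta and AES-GCM from the Python
"cryptography" package:

# Sketch of "encrypt the base and encrypt each delta": only the size of
# the change leaks.  Hypothetical helper, not part of git.
import os, difflib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = os.urandom(32)

def encrypt_blob(data: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, data, None)

def make_encrypted_delta(old_text: str, new_text: str) -> bytes:
    delta = "".join(difflib.unified_diff(old_text.splitlines(True),
                                         new_text.splitlines(True)))
    return encrypt_blob(delta.encode())

# encrypted_base  = encrypt_blob(open("file.v1", "rb").read())
# encrypted_delta = make_encrypted_delta(old, new)
# ...store both remotely; only ciphertext ever leaves the machine.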


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-21 Thread Matt Ball
On Mon, May 11, 2009 at 2:54 PM, Jerry Leichter  wrote:
> On May 11, 2009, at 2:16 PM, Roland Dowdeswell wrote:
>
>> On 1241996128 seconds since the Beginning of the UNIX epoch
>> Jerry Leichter wrote:
>> I'm not convinced that a stream cipher is appropriate here because
>> if you change the data then you'll reveal the plaintext.
>
> Well, XOR of old and new plaintext.  But point taken.
>
> Sounds like this might actually be an argument for a stream cipher with a
> more sophisticated combiner than XOR.  (Every time I've suggested that, the
> response has been "That doesn't actually add any strength, so why 
> bother".  And in simple data-in-motion encryption, that's certainly true.)

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-21 Thread silky
How about this.

When you modify a file, the backup system attempts to see if it can
summarise your modifications into a file that is, say, less than 50%
of the file size.

So if you modify a 10kb text file and change only the first word, it
will encrypt that component (the word you changed) on its own, and
upload that separately from the file. On the other end, it will have a
system for merging these changes when a file is "decrypted". It will
actually be prepared and decrypted (so all operations of this nature
must be done *within* the system).

Then, when it reaches a critical point in file changes, it can just
upload the entire file anew, and replace its "base" copy and all
the "parts".

Slightly more difficult with binary files where the changes are spread
out over the file, but if these changes can still be "summarised"
relatively trivially, it should work.

--
silky

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-21 Thread Jerry Leichter

On May 11, 2009, at 7:08 PM, Matt Ball wrote:

> Practically, to make this work, you'd want to look at the solutions
> that support 'data deduplication' (see
> http://en.wikipedia.org/wiki/Data_deduplication).  These techniques
> typically break the data into variable length 'chunks', and
> de-duplicate by computing the hash of these chunks and comparing to
> the hashes of chunks already stored in the system.  These chunks
> provide a useful encryption unit, but they're still somewhat
> susceptible to traffic analysis.  The communication should
> additionally be protected by SSH, TLS, or IPsec to reduce the exposure
> to traffic analysis.
It's interesting that data-dedup-friendly modes inherently allow an  
attacker to recognize duplicated plaintext based only on the  
ciphertext.  That's their whole point.  But this is exactly the  
primary weakness of ECB mode.  It's actually a bit funny:  ECB mode  
lets you recognize repetitions of what are commonly small, probably  
semantically meaningless, pieces of plaintext.  Data-dedup-friendly  
modes let you recognize repetitions of what are commonly large chunks  
of semantically meaningful plaintext.  Yet we reject ECB as insecure  
but accept the insecurity of data-dedup-friendly modes because they  
are so useful!
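
For concreteness, a toy version of the chunk-and-hash step those systems
rely on (content-defined boundaries from a simple rolling sum, SHA-256 of
each chunk as the dedup key) -- the parameters and the rolling sum are
purely illustrative, not any particular product's algorithm:

# Toy content-defined chunking + dedup index.  The rolling sum here is
# deliberately simple and NOT cryptographic; only the per-chunk SHA-256
# is used for deduplication.
import hashlib

WINDOW, MASK = 48, (1 << 11) - 1       # illustrative parameters

def chunks(data: bytes):
    start, rolling = 0, 0
    for i, b in enumerate(data):
        rolling += b
        if i - start >= WINDOW:
            rolling -= data[i - WINDOW]            # sum of last WINDOW bytes
        if (rolling & MASK) == MASK and i - start >= WINDOW:
            yield data[start:i + 1]                # content-defined boundary
            start, rolling = i + 1, 0
    if start < len(data):
        yield data[start:]

store = {}                              # hash -> chunk (the dedup index)

def dedup(data: bytes):
    refs = []
    for c in chunks(data):
        h = hashlib.sha256(c).hexdigest()
        store.setdefault(h, c)          # store/upload only if the hash is new
        refs.append(h)
    return refs                         # enough to reassemble the file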

-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-21 Thread silky
On Tue, May 12, 2009 at 10:39 AM, Jerry Leichter  wrote:
> On May 11, 2009, at 8:27 PM, silky wrote:
> >
> > The local version needs access to the last committed file (to compare
> > the changes) and the server version only keeps the 'base' file and the
> > 'changes' subsets.
>
> a)  What's a "committed" file.

I'm thinking an SVN-style backup system. When you're done with all
your editing, you just commit the changes to go into the backup. As
part of the commit operation, it decides on the amount of changes
you've done and whether it warrants an entire re-encrypt and upload,
or whether a segment can be done.


> b)  As in my response to Victor's message, note that you can't keep a base
> plus changes forever - eventually you need to resend the base.  And you'd
> like to do that efficiently.

As discussed in my original post, the base is reset when the changes
are greater than 50% of the size of the original file.


> > So yes, it does increase the amount of space required locally (not a
> > lot though, unless you are changing often and not committing),
>
> Some files change often.  There are files that go back and forth between two
> states.  (Consider a directory file that contains a lock file for a program
> that runs frequently.)  The deltas may be huge, but they all collapse!

In that specific case, say an MS Access lock file, it can obviously
be ignored by the entire backup process.


> > and
> > will also increase the amount of space required on the server by 50%,
> > but you need to pay the cost somewhere, and I think disk space is surely
> > the cheapest cost to pay.
>
> A large percentage increase - and why 50%? - scales up with the amount of
> storage.  There are, and will for quite some time continue to be,
> applications that are limited by the amount of disk one can afford to throw
> at them.  Such an approach drops the maximum size of file the application
> can deal with by 50%.

No reason for 50%, it can (and should) be configurable. The point was
to set the time at which the base file would be reset.


> I'm not sure what cost you think needs to be paid here.  Ignoring
> encryption, an rsync-style algorithm uses little local memory (I think the
> standard program uses more than it has to because it always works on whole
> files; it could subdivide them) and transfers close to the minimum you could
> possibly transfer.

The cost of not transferring an entirely new encrypted file just
because of a minor change.

-- 
noon silky

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-21 Thread Jerry Leichter

On May 11, 2009, at 7:06 PM, silky wrote:


> How about this.
>
> When you modify a file, the backup system attempts to see if it can
> summarise your modifications into a file that is, say, less than 50%
> of the file size.
>
> So if you modify a 10kb text file and change only the first word, it
> will encrypt that component (the word you changed) on its own, and
> upload that separately from the file. On the other end, it will have a
> system for merging these changes when a file is "decrypted". It will
> actually be prepared and decrypted (so all operations of this nature
> must be done *within* the system).
>
> Then, when it reaches a critical point in file changes, it can just
> upload the entire file anew, and replace its "base" copy and all
> the "parts".
>
> Slightly more difficult with binary files where the changes are spread
> out over the file, but if these changes can still be "summarised"
> relatively trivially, it should work.
To do this, the backup system needs access to both the old and new  
version of the file.  rsync does, because it is inherently sync'ing  
two copies, usually on two different systems, and we're doing this  
exactly because we *want* that second copy.


If you want the delta computation to be done locally, you need two  
local copies of the file - doubling your disk requirements.  In  
principle, you could do this only at file close time, so that you'd  
only need such a copy for files that are currently being written or  
backed up.  What happens if the system crashes after it's updated but  
before you can back it up?  Do you need full data logging?


Victor Duchovni suggested using snapshots, which also give you the  
effect of a local copy - but sliced differently, as it were, into  
blocks written to the file system over some defined period of time.   
Very useful, but both it and any other mechanism must sometimes deal  
with worst cases - an erase of the whole disk, for example; or a  
single file that fills all or most of the disk.


-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-21 Thread Jerry Leichter
> To handle smaller inserts or deletes, you need to ensure that the
> underlying blocks "get back into sync".  The gzip technique I
> mentioned earlier works.  Keep a running cryptographically secure
> checksum over the last blocksize bytes.  When some condition on the
> checksum is met - equals 0 mod M - insert filler to the beginning of
> the next block before encrypting; discard to the beginning of the
> next block when decrypting.  Logically, this is dividing the file up
> into segments whose ends occur at runs of blocksize bytes that give
> a checksum obeying the condition.  A change within a segment can at
> most destroy that segment and the following one; any other segments
> eventually match up.
Oh, feh.  I'm typing without thinking.  In the worst case, an  
insertion (deletion) of K bytes could create (delete) K/M new  
(existing) segments.  In practice, this is unlikely except in an  
adversarial situation, and all it can do is force extra data to be  
transferred.
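
A small sketch of the segmenting trick, written for clarity rather than
speed -- HMAC-SHA256 over the trailing window stands in for the
"cryptographically secure checksum", and all parameters are illustrative:

# Sketch of the segmenting trick: cut the plaintext into segments wherever
# a keyed checksum of the last BLOCKSIZE bytes meets a condition, then
# encrypt each segment independently.  Illustrative only.
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

BLOCKSIZE, M = 64, 1 << 12            # illustrative parameters
chk_key, enc_key, nonce_key = os.urandom(32), os.urandom(32), os.urandom(32)

def segments(data: bytes):
    start = 0
    for i in range(BLOCKSIZE, len(data) + 1):
        window = data[i - BLOCKSIZE:i]
        tag = hmac.new(chk_key, window, hashlib.sha256).digest()
        if int.from_bytes(tag[:4], "big") % M == 0:
            yield data[start:i]
            start = i
    if start < len(data):
        yield data[start:]

def encrypt_segments(data: bytes):
    out = []
    for seg in segments(data):
        # Deterministic nonce: an unchanged segment encrypts to identical
        # ciphertext, which is what keeps the rsync delta small (and is
        # exactly the dedup-style leakage discussed elsewhere in the thread).
        nonce = hmac.new(nonce_key, seg, hashlib.sha256).digest()[:12]
        out.append(nonce + AESGCM(enc_key).encrypt(nonce, seg, None))
    return out

# An insert or delete disturbs at most the segment it falls in and the
# following one; later segment boundaries reappear at the same byte runs,
# so later ciphertexts are identical again and the comparison re-syncs.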

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-11 Thread Jerry Leichter

On May 11, 2009, at 5:13 PM, Victor Duchovni wrote:


> On Mon, May 11, 2009 at 02:16:45PM -0400, Roland Dowdeswell wrote:
>
> > > In any case, there are obvious, well-understood solutions here:  Use
> > > counter mode, which propagates changes by a single block of the
> > > cryptosystem.  Or use any other stream cipher mode.  (An interesting
> > > question is whether there's a mode that will recover from insertions
> > > or deletions.  Perhaps something like:  Use counter mode.  If two
> > > consecutive ciphertext bytes are 0, fill the rest of the ciphertext
> > > block with 0's, jump the counter by 65536, and insert a special block
> > > containing the new counter value.)
> >
> > I'm not convinced that a stream cipher is appropriate here because
> > if you change the data then you'll reveal the plaintext.
>
> Indeed. If the remote copy is a "backup", and the local file-system
> supports snapshots, then it is far better to arrange for the remote
> backup to always be a copy of a local snapshot, and to compute the rsync
> "delta" between the local copy of the remote snapshot and a newer snapshot
> locally, and then rsync the delta. Sure, snapshot file-systems are not
> yet universal, but given disk size and file-system trends, I would base
> encrypted remote backups on a foundation that assumed the existence of
> local before/after images.
If you have a snapshot file system, sure, you can use it.  Caching  
checksums and using a write barrier (like fsevents in MacOS) would  
also work.


Any such system will eventually build up either a huge number of small  
deltas, or deltas that are close to the size of the underlying files  
(i.e., eventually most things get changed).  So you also need a way to  
reset to a new base - that is, run an rsync as you do today.  Thus,  
while this is certainly a good optimization, you still need to solve  
the underlying problem.


> A cipher that produces largely identical cipher-text for largely identical
> plaintext in the face of updates, inserts and deletes, is unlikely to
> be particularly strong.
Consider first just updates.  Then you have exactly the same problem  
as for disk encryption:  You want to limit the changes needed in the  
encrypted image to more or less the size of the change to the  
underlying data.  Generally, we assume that the size of the encrypted  
change for a given contiguous range of changed underlying bytes is  
bounded roughly by rounding the size of the changed region up to a  
multiple of the blocksize.  This does reveal a great deal of  
information, but there isn't any good alternative.  (Of course, many  
file types are never actually changed in place - they are copied with  
modifications along the way - and in that case the whole thing will  
get re-encrypted differently anyway.)


> The CBC IV reset should not be too disastrous if the IV is an encrypted
> block counter under a derived key. Drive encryption basically does the
> same thing with 512 byte blocks. This fails to handle inserts/deletes
> that are not multiples of the "chunk" size.
It's curious that the ability to add or remove blocks in the middle of  
a file has never emerged as a file system primitive.  Most file system  
organizations could support it easily.  (Why would you want this?   
Consider all the container file organizations we have, which bundle up  
segments of different kinds of data.  A .o file is a common example.   
Often we reserve space in some of the embedded sections to allow for  
changes later - patch areas, for example.  But when these fill,  
there's no alternative but to re-create the file from scratch.  If we  
could insert another block, things would get easier.)


Given that file systems don't support the operation, disk encryption  
schemes haven't bothered either.


To support insertions or deletions of full blocks, you can't make the  
block encryption depend on the block position in the file, since  
that's subject to change.  For a disk encryptor that can't add data to  
the file, that's a killer; for an rsync pre-processor, it's no big  
deal - just store the necessary key-generation or tweak data with each  
block.  This has no effect on security - the position data was public  
anyway.


To handle smaller inserts or deletes, you need to ensure that the  
underlying blocks "get back into sync".  The gzip technique I  
mentioned earlier works.  Keep a running cryptographically secure  
checksum over the last blocksize bytes.  When some condition on the  
checksum is met - equals 0 mod M - insert filler to the beginning of  
the next block before encrypting; discard to the beginning of the next  
block when decrypting.  Logically, this is dividing the file up into  
segments whose ends occur at runs of blocksize bytes that give a  
checksum obeying the condition.  A change within a segment can at most  
destroy that segment and the following one; any other segments  
eventually match up.  (The comparison algorithm can't have anything  
that assumes either block or segment offset from beginning of file are  
si

Re: Warning! New cryptographic modes!

2009-05-11 Thread Victor Duchovni
On Mon, May 11, 2009 at 02:16:45PM -0400, Roland Dowdeswell wrote:

> >In any case, there are obvious, well-understood solutions here:  Use  
> >counter mode, which propagates changes by a single block of the  
> >cryptosystem.  Or use any other stream cipher mode.  (An interesting  
> >question is whether there's a mode that will recover from insertions  
> >or deletions.  Perhaps something like:  Use counter mode.  If two  
> >consecutive ciphertext bytes are 0, fill the rest of the ciphertext  
> >block with 0's, jump the counter by 65536, and insert a special block  
> >containing the new counter value.)
> 
> I'm not convinced that a stream cipher is appropriate here because
> if you change the data then you'll reveal the plaintext.

Indeed. If the remote copy is a "backup", and the local file-system
supports snapshots, then it is far better to arrange for the remote
backup to always be a copy of a local snapshot, and to compute the rsync
"delta" between the local copy of the remote snapshot and a newer snapshot
locally, and then rsync the delta. Sure, snapshot file-systems are not
yet universal, but given disk size and file-system trends, I would base
encrypted remote backups on a foundation that assumed the existence of
local before/after images.

A cipher that produces largely identical cipher-text for largely identical
plaintext in the face of updates, inserts and deletes, is unlikely to
be particularly strong.

The CBC IV reset should not be too disastrous if the IV is an encrypted
block counter under a derived key. Drive encryption basically does the
same thing with 512 byte blocks. This fails to handle inserts/deletes
that are not multiples of the "chunk" size.
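
A sketch of that IV construction (ESSIV-style: the IV is the encryption of
the block number under a key derived from the data key), in Python with the
"cryptography" package -- illustrative, not any particular drive-encryption
product:

# ESSIV-style per-block IV for CBC, as described above.  Illustrative only.
import os, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

CHUNK = 512
data_key = os.urandom(32)
iv_key = hashlib.sha256(data_key).digest()      # derived key for IVs

def block_iv(block_no: int) -> bytes:
    enc = Cipher(algorithms.AES(iv_key), modes.ECB()).encryptor()
    return enc.update(block_no.to_bytes(16, "little")) + enc.finalize()

def encrypt_chunk(block_no: int, chunk: bytes) -> bytes:
    assert len(chunk) == CHUNK
    enc = Cipher(algorithms.AES(data_key), modes.CBC(block_iv(block_no))).encryptor()
    return enc.update(chunk) + enc.finalize()

# Each 512-byte chunk gets a distinct, unpredictable-looking IV without
# storing anything extra, but (as noted) an insert or delete that is not a
# whole chunk shifts every later block number and changes every later IV.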

-- 
Viktor.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-11 Thread Jerry Leichter

On May 11, 2009, at 2:16 PM, Roland Dowdeswell wrote:


> On 1241996128 seconds since the Beginning of the UNIX epoch
> Jerry Leichter wrote:
>
> I'm not convinced that a stream cipher is appropriate here because
> if you change the data then you'll reveal the plaintext.

Well, XOR of old and new plaintext.  But point taken.

Sounds like this might actually be an argument for a stream cipher  
with a more sophisticated combiner than XOR.  (Every time I've  
suggested that, the response has been "That doesn't actually add any  
strength, so why bother".  And in simple data-in-motion encryption,  
that's certainly true.)


Perhaps Matt Ball's suggestion of XTS works; I don't see exactly what  
he's suggesting.  There is certainly a parallel with disk encryption  
algorithms, but the problem is different:  Using rsync inherently  
reveals what's changed in the cleartext (at least to some level of  
granularity), so trying to protect against an attack that reveals this  
information - something one worries about in disk encryption - is  
beside the point.

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-11 Thread Roland Dowdeswell
On 1241996128 seconds since the Beginning of the UNIX epoch
Jerry Leichter wrote:
>

>So here we have it all:  A new cryptographic mode, documented only in  
>C code, being proposed for broad use with no analysis.
>
>In any case, there are obvious, well-understood solutions here:  Use  
>counter mode, which propagates changes by a single block of the  
>cryptosystem.  Or use any other stream cipher mode.  (An interesting  
>question is whether there's a mode that will recover from insertions  
>or deletions.  Perhaps something like:  Use counter mode.  If two  
>consecutive ciphertext bytes are 0, fill the rest of the ciphertext  
>block with 0's, jump the counter by 65536, and insert a special block  
>containing the new counter value.)

I'm not convinced that a stream cipher is appropriate here because
if you change the data then you'll reveal the plaintext.

--
Roland Dowdeswell  http://Imrryr.ORG/~elric/

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-11 Thread Matt Ball
On Sun, May 10, 2009 at 4:55 PM, Jerry Leichter  wrote:
>
> I recently stumbled across two attempts to solve a cryptographic problem -
> which has lead to what look like rather unfortunate solutions.
>
> The problem has to do with using rsync to maintain backups of 
> directories-

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com