Cryptography-Digest Digest #111, Volume #11 Sun, 13 Feb 00 02:13:01 EST
Contents:
Re: Latin Squares (was Re: Reversibly combining two bytes?) (zapzing)
Re: Counter mode (was Re: Period of cycles in OFB mode) (David Wagner)
Re: Do 3 encryptions w/ a 40-bit key = 120 bits? (David Wagner)
Re: Block chaining (David Wagner)
Re: Basic Crypto Question 2 ("Douglas A. Gwyn")
Re: Which compression is best? ("Douglas A. Gwyn")
Re: Message to SCOTT19U.ZIP_GUY ("Douglas A. Gwyn")
Re: BASIC Crypto Question ("Douglas A. Gwyn")
Re: Which compression is best? (Jerry Coffin)
Re: RFC: Reconstruction of XORd data (Samuel Paik)
SHA-1 sources needed! ("Nikolaus")
----------------------------------------------------------------------------
From: zapzing <[EMAIL PROTECTED]>
Subject: Re: Latin Squares (was Re: Reversibly combining two bytes?)
Date: Sun, 13 Feb 2000 01:59:23 GMT
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Terry Ritter) wrote:
>
> On Sat, 12 Feb 2000 01:36:12 GMT, in <882ded$6q9$[EMAIL PROTECTED]>, in
> sci.crypt zapzing <[EMAIL PROTECTED]> wrote:
>
> >In article <[EMAIL PROTECTED]>,
> > [EMAIL PROTECTED] wrote:
> >> Latin squares are really only useful if the input alphabet and key
> >> alphabet are the same size. The DES S-boxes don't have the same sizes
> >> but seem to work OK.
> >>
> >>
> >
> >Well, actually that is true by definition,
> >since a latin *square* must be square.
> >But your post gives me an interesting idea.
> >What if we generalize it to a latin *rectangle*?
>
> I suppose much would depend upon how it is used. In many cases it
> might be better to use multiple Latin squares: If we just drop in
> multiple occurrences of values at random, we are almost guaranteed to
> have some grouping which will highlight some locality of input values.
> This would leak more information than would happen if groupings were
> minimized by the enforced spacing of different tables.
>
I believe that is correct. If the column number
is the message and the row number is the
encrypting symbol, and we build the
"combining rectangle" by stacking up rows
that are each a permutation of the bytes
0..255, then we are not guaranteed that
every column contains each byte equally
often, but the imbalance would be much less
severe than with values dropped in at random.
And your idea of using multiple Latin squares
could be used too: we could use a Latin
square, followed by a generalized combining
function, followed by another Latin square.
That would give an encrypting symbol with an
effective size of three bytes, no highlighted
input values, and more effective entropy
than a generalized combining function alone.
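For concreteness, here is a minimal C sketch
of such a table-driven combiner (the names
are illustrative; the table shown is just
addition mod 256, the simplest Latin square,
where a keyed version would fill the rows
from key material instead):

#include <stdint.h>

/* T[r][c]: 256x256 Latin square; every row and every column is a
   permutation of 0..255. */
static uint8_t T[256][256];

void build_square(void)
{
    for (int r = 0; r < 256; r++)
        for (int c = 0; c < 256; c++)
            T[r][c] = (uint8_t)(r + c);   /* addition mod 256 */
}

/* Combine one message byte (column) with one encrypting byte (row). */
uint8_t combine(uint8_t row, uint8_t col)
{
    return T[row][col];
}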
--
Do as thou thinkest best.
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Counter mode (was Re: Period of cycles in OFB mode)
Date: 12 Feb 2000 18:17:07 -0800
In article <[EMAIL PROTECTED]>,
David Hopwood <[EMAIL PROTECTED]> wrote:
> > But it is worth mentioning that counter mode also starts to exhibit
> > some regularities (a birthday paradox-like issue) after about 2^32
> > blocks, so it would only be prudent to change keys well before that
> > point.
>
> Really? I agree that it's probably prudent not to encrypt that much
> data with the same key, but I thought the regularities in, say, CBC mode
> were due to collisions in the cipher input (i.e. with random 64-bit inputs
> you expect a collision after about 2^32 blocks, and since the encryption
> function is a fixed permutation, that corresponds to a collision in the
> output). In counter mode there will be no collisions in the input.
Yup. There's your regularity. :-)
By regularity, I mean that it behaves differently from the way, say,
a one-time pad (with true randomness) would. In other words,
the output of counter mode can be distinguished from random
after about 2^32 blocks (if there are collisions, it is not
counter mode; if there are no collisions, probably it's not
random).
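To make that distinguisher concrete, here is a minimal sketch
(assuming 64-bit blocks; function names are illustrative): collect
the ciphertext blocks, sort them, and look for a repeat.  About 2^32
blocks from a truly random source are expected to contain a collision
(birthday bound); counter-mode output under one key never can, since
E_k is a permutation applied to distinct counters.

#include <stdint.h>
#include <stdlib.h>

static int cmp_u64(const void *a, const void *b)
{
    uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
    return (x > y) - (x < y);
}

/* Returns 1 if some block repeats (consistent with a random source),
   0 if all blocks are distinct (consistent with counter mode).
   Note: sorts the array in place. */
int has_collision(uint64_t *blocks, size_t n)
{
    qsort(blocks, n, sizeof blocks[0], cmp_u64);
    for (size_t i = 1; i < n; i++)
        if (blocks[i] == blocks[i - 1])
            return 1;
    return 0;
}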
It may be non-trivial to exploit this property in a
ciphertext-only attack model (certainly it's not as favorable
for the cryptanalyst as a stream cipher with a period of 2^32
would be!), but it's still unpleasant enough to make me want
to change keys before we approach the point where this sort
of information might be available to sophisticated attackers.
> Technically you can infer (under known plaintext) that an output block
> that has been seen will not occur again, but after 2^32 known plaintext
> blocks that only limits the output to 2^64 - 2^32 possibilities. I can
> see why that would be detectable, in the sense that you're not seeing
> repeats when you would expect to do so for other modes, but why would it
> be a weakness? Unless I'm missing something, all it really tells you is
> that counter mode (or some close variant of it) is being used.
Well, I don't have a ready answer for you, but if we look at it
from an information-theoretic perspective, I think there's a storm
brewing here. Even if I don't know of any computationally feasible
way to exploit the available information, it still is enough to
make me worry (it's not like the cryptanalysis problem has been
well-studied and we understand just how far you can go with this
type of attack).
An alternate way to think about it is to take an exceedingly
unrealistic, but non-trivial, example. Suppose we send traffic
with the characteristic that half of the plaintext is fully known
to the adversary, and the other half is unknown but each unknown
block is known to come from one of only two possibilities (e.g.,
"yes" and "no", or all-zeros and all-ones, whatever). Then
something like 2^33 blocks of ciphertext will be enough for the
cryptanalyst to start recovering (his first block of) plaintext.
(For each unknown plaintext block, one can rule out 2^32 possibilities;
thus, there is only a 2^32/2^64 = 1/2^32 chance of learning anything
interesting, for any given unknown block -- but there are 2^32
unknown blocks, so you expect to see one that you can derive
information from.)
With more than 2^33 texts, the amount of information the cryptanalyst
learns goes up quadratically in the amount of traffic available.
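A minimal sketch of that elimination step, assuming 64-bit counter
mode (C_i = P_i XOR E_k(i)); pad_seen_elsewhere() is a hypothetical
lookup over the set of pads E_k(i) recovered from the known-plaintext
half of the traffic:

#include <stdint.h>

/* Hypothetical: returns nonzero if this pad value was already observed
   at some counter other than the one given. */
extern int pad_seen_elsewhere(uint64_t pad, uint64_t counter);

/* For an unknown block with two candidate plaintexts, a candidate is
   ruled out whenever its implied pad collides with a pad seen at a
   different counter, since E_k is a permutation.
   Returns 0 or 1 if the block resolves to that candidate, -1 if not. */
int resolve_block(uint64_t c_j, uint64_t counter_j,
                  uint64_t cand0, uint64_t cand1)
{
    int bad0 = pad_seen_elsewhere(c_j ^ cand0, counter_j);
    int bad1 = pad_seen_elsewhere(c_j ^ cand1, counter_j);
    if (bad0 && !bad1) return 1;   /* cand0 impossible: plaintext is cand1 */
    if (bad1 && !bad0) return 0;   /* cand1 impossible: plaintext is cand0 */
    return -1;                     /* nothing learned about this block */
}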
------------------------------
From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Do 3 encryptions w/ a 40-bit key = 120 bits?
Date: 12 Feb 2000 18:20:44 -0800
In article <883jib$vai$[EMAIL PROTECTED]>,
Tom St Denis <[EMAIL PROTECTED]> wrote:
> You are of course assuming a type of MITM attack will work. Sometimes
> they require a lot of memory and are not feasible.
What I have pointed out numerous times on sci.crypt is that there
are sophisticated techniques for significantly reducing the amount
of memory required. The current techniques do increase the workload
somewhat (by a smaller factor than the amount of memory saved), but
I think it would be only prudent to expect that further advances in
this area might well give the cryptanalyst enough of an advantage
that the memory requirements can be roughly ignored.
In keeping with the general tradition that we should assume
favorable circumstances for the cryptanalyst when assessing the
security of our cryptosystems, then, I think one is forced to
conclude that one should not rely on the memory usage of today's
meet-in-the-middle attacks for security; one should perhaps
conservatively count the strength of the cryptosystems in terms
of workload alone (when it comes to meet-in-the-middle attacks).
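For readers unfamiliar with the basic attack, here is a minimal sketch
against double encryption, using a hypothetical 16-bit toy cipher
(toy_enc/toy_dec are placeholders chosen only so the table fits in
memory); the memory-reduced techniques mentioned above replace the
table with extra computation:

#include <stdint.h>
#include <string.h>

/* Hypothetical invertible 16-bit toy cipher (placeholder only). */
static uint16_t toy_enc(uint16_t k, uint16_t p)
{
    return (uint16_t)((uint16_t)(p + k) ^ (uint16_t)(k * 257u));
}
static uint16_t toy_dec(uint16_t k, uint16_t c)
{
    return (uint16_t)((uint16_t)(c ^ (uint16_t)(k * 257u)) - k);
}

/* Meet-in-the-middle on c = toy_enc(k2, toy_enc(k1, p)): tabulate all
   2^16 midpoints from the encryption side, then look up each
   decryption-side midpoint.  (A real attack keeps a list per midpoint
   and confirms candidates on extra plaintext/ciphertext pairs; this
   sketch stops at the first match.) */
int mitm(uint16_t p, uint16_t c, uint16_t *k1_out, uint16_t *k2_out)
{
    static uint16_t table[65536];   /* one k1 per midpoint value */
    static uint8_t  seen[65536];
    memset(seen, 0, sizeof seen);

    for (uint32_t k1 = 0; k1 < 65536; k1++) {
        uint16_t mid = toy_enc((uint16_t)k1, p);
        table[mid] = (uint16_t)k1;
        seen[mid] = 1;
    }
    for (uint32_t k2 = 0; k2 < 65536; k2++) {
        uint16_t mid = toy_dec((uint16_t)k2, c);
        if (seen[mid]) {
            *k1_out = table[mid];
            *k2_out = (uint16_t)k2;
            return 1;
        }
    }
    return 0;
}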
------------------------------
From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Block chaining
Date: 12 Feb 2000 18:25:25 -0800
In article <[EMAIL PROTECTED]>, Tim Tyler <[EMAIL PROTECTED]> wrote:
> When writing this, I was (irrelevantly) thinking about decryption ;-/
Right. Well, parallelism for decryption *does* get us halfway there!
> However... Plaintext Block Chaining (PBC), and Plaintext FeedBack (PFB)
> modes allow parallel processing in the encryption direction.
Are these ones secure, when used with non-random plaintexts?
(e.g., English text as the plaintext)
I'm not worried so much about chosen-text attacks as whether there is
the possibility that they might share some of the weaknesses of ECB mode,
but to a lesser extent. Any thoughts?
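For reference, a minimal sketch of the encryption pass, assuming PBC
means C_i = E_k(P_i XOR P_{i-1}) with P_0 = IV; block_encrypt() is a
stand-in for any 64-bit block cipher.  Since each ciphertext block
depends only on plaintext, the block-cipher calls can all run in
parallel.

#include <stdint.h>
#include <stddef.h>

/* Stand-in for any 64-bit block cipher. */
extern uint64_t block_encrypt(const void *key, uint64_t block);

void pbc_encrypt(const void *key, uint64_t iv,
                 const uint64_t *pt, uint64_t *ct, size_t nblocks)
{
    uint64_t prev = iv;
    for (size_t i = 0; i < nblocks; i++) {
        /* Each call depends only on plaintext, so all calls may be
           issued in parallel once the XOR chain is formed. */
        ct[i] = block_encrypt(key, pt[i] ^ prev);
        prev = pt[i];
    }
}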
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Basic Crypto Question 2
Date: Sun, 13 Feb 2000 03:50:38 GMT
[EMAIL PROTECTED] wrote:
> 1.Which are the Top 5 PRNG's for practical cryptosystems?
There aren't any. Random number generation and cryptology are
different applications with different requirements.
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Which compression is best?
Date: Sun, 13 Feb 2000 03:53:51 GMT
Tim Tyler wrote:
> While this seems to cover pretty much 0% of them. How can you
> /know/ that any confidence you have in your cryptosystem is
> justifiable? You can't. You have to juggle probabilities.
You prove an appropriate theorem using appropriate methods
(as opposed to blathering about how you don't know all methods
of attack).
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Message to SCOTT19U.ZIP_GUY
Date: Sun, 13 Feb 2000 03:57:29 GMT
Tim Tyler wrote:
> Encrypting something twice does not double the time to break.
> Speaking *very* roughly, if anything, it squares it.
If the encryption uses the same key, then it doubles the time
for a brute-force key search.
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: BASIC Crypto Question
Date: Sun, 13 Feb 2000 04:03:01 GMT
Johnny Bravo wrote:
> On Sat, 12 Feb 2000 12:44:13 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
> wrote:
> >Johnny Bravo wrote:
> >> Technically, the norm is to work on a multiple of 8 bits at a time, i.e.
> >> "bytes" as most languages can easily read byte sized units from a file.
> >> Another norm is working on multiples of 8 bytes, though there is no real
> >> requirement for this (recent 128 and 256 bit block ciphers). Stream
> >> ciphers of course work on a single byte at a time.
> >No; while most computers store information in octets, ciphers
> >typically operate on a bit stream or on blocks much larger than
> >8 bits.
> You missed the term "multiple" in that first line. :)
> The blocks are usually multiples of 8 bits, ...
No, I didn't miss that; I included it for blocks for the sake of accuracy.
But you seem to have missed "bit stream" in my response.
------------------------------
From: Jerry Coffin <[EMAIL PROTECTED]>
Subject: Re: Which compression is best?
Date: Sat, 12 Feb 2000 21:13:21 -0700
In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
[ ... ]
> : I suppose if you want to get technical, yes, it can (but doesn't
> : necessarily) make a _minuscule_ difference even if the entire
> : plaintext is known
>
> You're right that the difference is minuscule. I should probably not have
> bothered mentioning it.
I agree, you shouldn't have.
[ compression making the difference between a completely secure and
completely broken cipher. ]
> This would happen when the analyst is effectively deprived of a halting
> criterion.
But they're not. The only difference it makes is that when they do a
trial decryption, they have to do an entire message instead of a
single block. You have to require all messages to be almost
ridiculously huge before this makes any serious difference in overall
time.
> If the compression were maximal in some sense, decompressed texts would be
> indistinguishable from plausible messages. The failure of decompressed
> files to resemble correct messages is one symptom of poor compression.
> So it *WAS* what you were talking about ;-)
I guess as long as you're free to define "in some sense" any way you
want (i.e., requiring the result you say you want), your statement
becomes a meaningless tautology. Otherwise, it's basically just
false. E.g., Huffman has been proven to be "maximal in some sense",
but the effect you claim simply doesn't happen. Taking random garbage
and running it through David's decompression algorithm does what
I said before: it produces individual letters with roughly plausible
frequencies, but the result is easily distinguishable from normal text
as soon as you look at digraphs, trigraphs, and so on.
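To be concrete about what "look at digraphs" means, here is a minimal
sketch of the kind of test I have in mind (the particular digraphs
chosen are just illustrative):

#include <ctype.h>
#include <stddef.h>

/* Crude digraph test: genuine English text scores far higher than
   output whose single-letter frequencies look plausible but whose
   adjacent-letter structure is essentially random. */
long digraph_score(const unsigned char *text, size_t len)
{
    static long count[26][26];
    int prev = -1;

    for (int i = 0; i < 26; i++)
        for (int j = 0; j < 26; j++)
            count[i][j] = 0;

    for (size_t i = 0; i < len; i++) {
        if (isalpha(text[i])) {
            int cur = tolower(text[i]) - 'a';
            if (prev >= 0)
                count[prev][cur]++;
            prev = cur;
        } else {
            prev = -1;
        }
    }
    /* A few digraphs common in English minus a few that are rare. */
    return (count['t'-'a']['h'-'a'] + count['h'-'a']['e'-'a'] +
            count['i'-'a']['n'-'a'] + count['e'-'a']['r'-'a']) -
           (count['q'-'a']['z'-'a'] + count['j'-'a']['q'-'a'] +
            count['z'-'a']['x'-'a'] + count['v'-'a']['q'-'a']);
}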
> : I've long since gotten too sick of reading his crap to bother spending
> : a couple hours on Deja News to find it again. If you really care,
> : feel free to do so yourself.
>
> I was the one who was questioning the existence of these things. Why
> should I spend time searching for something I doubt even exists, in order
> to demonstrate your point for you?
My apologies -- I'd mistaken you for somebody who was asking a
question because he wanted to know the answer, not because he was
simply trying to give the appearance of winning an argument, even
when he was wrong.
> It's questionable whether what you describe qualifies as "compressing in
> one direction". It takes information from near the end of the file and
> allows it to influence output from near the start of the file. You could
> not use such a method as a stream compressor.
Nobody said anything about a stream compressor. Since you seem to
have forgotten what you said, I'll quote it for you again:
:> Compressing in one direction only diffuses plaintext in one
:> direction.
> I never claimed that compressing in both directions was the /only/ way
> to get information diffusion through the whole file. Obviously there are
> other ways of getting similar effects.
You seem to be changing your story a bit. You specifically said that
compressing in only one direction can only have certain effects. Now
you're saying that "obviously" there are other ways of getting similar
effects. Of course it's obvious you were wrong AFTER I proved it, but
I think it's pretty obvious to everybody reading this thread exactly
what you thought up until your ideas were proven incorrect.
> Well - for what it's worth - compressing in each direction seems to me to
> be a type of diffusion that can easily be applied with a serial machine.
I have to admit that I don't see what it's worth, or how it relates to
anything else being discussed at all.
> Non-1-1 compressors (I assume this is the subject? - or are you referring
> to compressing once in each direction here?) allow systematic rejection of
> keys, regardless of the plaintext they transmit.
That's not necessarily true -- it's only true if decrypting with the
wrong key can and does produce a plaintext that can't possibly have
resulted from the non-1-1 compressor in question.
At the same time, a 1-1 compressor CAN allow easy and systematic
rejection of keys: using David Scott's compressor as an example, even
though every input is theoretically legal, decompressing from most
produces output that's _easily_ distinguishable from normal text.
> If your cypher's keyspace is reduced to dimensions which allow an
> automated search, whether by a bad RNG used to generate the keys, a cypher
> break, key leakage through a programming bug, key knowledge gained through
> captured key documents, or any of a zillion other factors, use of 1-1
> compression can make the difference between having one document which is
> identified as the genuine article, and having millions of possible
> plaintexts, with no known automatic method of choosing between them.
You've got two basic ideas here. The first is that a non-1-1
compressor makes it easy to distinguish a correct decryption from an
incorrect one. That's not necessarily true.
The second is that a 1-1 compressor makes it impossible to distinguish
between a correct and an incorrect decryption. That's pretty clearly
not true either -- as I've already pointed out a number of times, the
1-1 nature of David Scott's compressor does NOT prevent automatic
detection of at least the HUGE majority of incorrect keys. I suppose
technically speaking, you can say it requires more advanced
programming and such, since counting digraphs IS a TINY bit more
sophisticated than only counting individual letters. Despite this,
we're still talking about techniques that have been well known for
decades.
> :> If you don't care about compressors systematically adding information to
> :> the files they compress, that's your look out.
>
> : "that's your look out"? You're apparently trying your own version of
> : security through obscurity, saying things that simply make no sense.
>
> ? Did I say something that made no sense?
Yes. I even quoted the part that made no sense to make it easy for
you to pick out. Apparently that wasn't quite enough for you...
> 1-1 compression /is/ generally a good thing, though. Algorithms that
> aren't 1-1 are wasting their range, and losing their efficiency.
In theory that's true. In reality, nobody seems to have devised a 1-1
compressor yet that's nearly as effective as some non-1-1 forms.
> Huffman and arithmetic compressors already have the fundamental technology
> behind their compression available in 1-1 variants - and shorter files
> result. Hopefully, other systems will follow.
You don't seem to be giving a referent for your comparison. You say
"shorter files result", but you never say WHAT they're shorter than.
Clearly they're NOT shorter than the result from MANY forms of
compression that aren't 1-1.
In theory, it's true that a form of compression that was absolutely as
effective as theoretically possible would have to be 1-1. What you're
ignoring is the fact that at the present time, NO compressor extant
even approaches theoretical perfection, and in fact they're all SO far
from perfect, that this isn't really even a consideration at all --
experience shows that no 1-1 compressor extant gives anywhere close to
the compression available by other compressors without the 1-1
characteristic.
I'd also note that if you want to do a reverse "scrambling" step to
diffuse data from the end of the file back toward the beginning,
something other than a compressor is probably just as effective, and
quite a bit simpler to write. Even simply starting at the end of the
file and writing out the XOR of each byte with the following byte
will effectively diffuse information all the way from the end of the
file back toward the front. Most forms of compression will actually
expand data if run over it a second time, but simply XORing bytes
avoids any possibility of that happening.
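A minimal sketch of that backward XOR pass (in place; the inverse is
the same XOR applied front to back):

#include <stddef.h>

/* Process from the end toward the front: each byte becomes the XOR of
   itself and every byte after it, so a change anywhere in the tail
   propagates all the way to the front. */
void diffuse_backward(unsigned char *buf, size_t len)
{
    if (len < 2)
        return;
    for (size_t i = len - 1; i-- > 0; )   /* i = len-2 down to 0 */
        buf[i] ^= buf[i + 1];
}

/* Inverse: the same XOR applied front to back restores the original. */
void undo_diffuse(unsigned char *buf, size_t len)
{
    for (size_t i = 0; i + 1 < len; i++)
        buf[i] ^= buf[i + 1];
}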
--
Later,
Jerry.
The universe is a figment of its own imagination.
------------------------------
From: Samuel Paik <[EMAIL PROTECTED]>
Subject: Re: RFC: Reconstruction of XORd data
Date: Sun, 13 Feb 2000 05:54:54 GMT
David A Molnar wrote:
> C doesn't have a nice way to express carry bits.
Sort of true. However:
unsigned x, y, z, c;
...
z = x + y;
c = (z < x) ? 1 : 0; // c = 1 if carry out from addition
This is essentially how MIPS and Alpha compute carry out.
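Extending the same idiom, a sketch of a double-word addition with the
carry propagated explicitly (names are illustrative, not from any
particular library):

#include <stdint.h>

/* 128-bit addition built from two 64-bit words. */
void add128(uint64_t a_hi, uint64_t a_lo, uint64_t b_hi, uint64_t b_lo,
            uint64_t *r_hi, uint64_t *r_lo)
{
    uint64_t lo = a_lo + b_lo;
    uint64_t carry = (lo < a_lo) ? 1 : 0;   /* carry out of the low word */
    *r_lo = lo;
    *r_hi = a_hi + b_hi + carry;
}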
--
Samuel S. Paik | http://www.webnexus.com/users/paik/
3D and multimedia, architecture and implementation
Solyent Green is kitniyos!
------------------------------
From: "Nikolaus" <[EMAIL PROTECTED]>
Subject: SHA-1 sources needed!
Date: Sun, 13 Feb 2000 09:22:11 +0300
Could anybody send me the source code (or a URL) for the SHA-1 hash
algorithm written in C or C++?
thanx,
Nikolaus
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************