Cryptography-Digest Digest #989, Volume #11 Fri, 9 Jun 00 13:13:00 EDT
Contents:
Re: Random IV Generation ([EMAIL PROTECTED])
Re: Is OTP unbreakable?/Station-Station (Tim Tyler)
Re: Multiple encryptions (Tim Tyler)
Re: Statistics of occurrences of prime number sequences in PRBG output (Mok-Kong Shen)
Re: Some dumb questions (Mok-Kong Shen)
Re: DES question ("Paul Pires")
Re: Multiple encryptions (John Myre)
Re: Brute forcing for Counterpane's Password Safe ("TheGPFguy")
Re: Random IV Generation (Terry Ritter)
Re: Encoding 56 bit data ---HELP--- (Runu Knips)
Re: testing non linearity of arithmetic-logic combinations (Terry Ritter)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Random IV Generation
Date: Fri, 09 Jun 2000 16:04:58 GMT
In article <[EMAIL PROTECTED]>,
Eric Lee Green <[EMAIL PROTECTED]> wrote:
> http://twofish-py.sourceforge.net :-). If you are using Linux or
> FreeBSD, the situation is even simpler: simply request 8 bytes
> from the file "/dev/random".
What other Unix or Unix-like OSs supply such random devices? OpenBSD has
several random devices, but the manpages specify that /dev/random is
reserved for use with hardware random number generators; one typically
uses /dev/arandom (ARC4 seeded by network, console, disk, etc.).
Some fellow has "egd" -- the entropy gathering daemon -- for those
machines without cryptographic random numbers built in.
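For concreteness, the quoted "request 8 bytes" approach might look
like this in C -- a minimal sketch, assuming a Unix-like system that
provides the device (substitute /dev/arandom or /dev/urandom where
the platform dictates):

  #include <stdio.h>

  int main(void)
  {
      unsigned char iv[8];
      size_t i, got = 0;
      FILE *f = fopen("/dev/random", "rb");

      if (f != NULL) {
          got = fread(iv, 1, sizeof iv, f); /* may block until entropy */
          fclose(f);
      }
      if (got != sizeof iv) {
          fprintf(stderr, "could not read 8 bytes from /dev/random\n");
          return 1;
      }
      for (i = 0; i < sizeof iv; i++)       /* print the IV as hex */
          printf("%02x", iv[i]);
      putchar('\n');
      return 0;
  }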
How about Windows? Is its only source of randomness MS's cryptographic
library? Does anyone know whether that has a decent PRNG?
:)
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Is OTP unbreakable?/Station-Station
Reply-To: [EMAIL PROTECTED]
Date: Fri, 9 Jun 2000 15:55:13 GMT
Douglas A. Gwyn <[EMAIL PROTECTED]> wrote:
: Tim Tyler wrote:
:> If the message were sandwiched at a genuinely random position within
:> 1K of random bytes before the OTP was applied (with some signal for
:> stripping the data off again), this attack would succeed only one
:> time in a thousand - rather than every single time.
: No, it would succeed every time, if the cryptanalyst were competent.
It appears that you must be making some very different assumptions from
mine to arrive at this conclusion.
Are you assuming that the "random" pad is not suitably random? Or that
the analyst has stolen a copy of it?
Are you assuming that the RNG used to decide where the random data ends
and the actual message begins is terminally broken?
Does the analyst get feedback from the recipient when he successfully
forges a message - and thousands of failed forgeries "don't matter"?
I believe that - unless some insecurity has been introduced into the
implementation - the analyst will be none the wiser about where in the
transmission the actual message lies, and his chance of making a forgery
will be about 1/2^n if n bits have been added.
Note that the analyst cannot try the plaintext at every possible position
(and see which is correct) - since in each position the revealed key will
appear to be a uniform random stream - with no way to decide which
position was correct.
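To make the construction concrete, here is a minimal sketch in C.
get_random() is a placeholder for a suitable true RNG, and the 4-byte
offset/length header is just one possible "signal for stripping the
data off again" -- both are assumptions of this sketch, not part of
the original proposal:

  #include <string.h>
  #include <stddef.h>

  #define FRAME 1024

  extern void get_random(unsigned char *buf, size_t len); /* assumed TRNG */

  /* Build one FRAME-byte frame: noise everywhere, the message of
     msg_len bytes (msg_len <= FRAME - 4) at a random offset, a header
     recording offset and length, then the whole frame XORed with a
     FRAME-byte one-time pad. */
  void sandwich_encrypt(const unsigned char *msg, size_t msg_len,
                        const unsigned char pad[FRAME],
                        unsigned char out[FRAME])
  {
      unsigned char r[2];
      size_t off, i;

      get_random(out, FRAME);                /* frame starts as pure noise */
      get_random(r, 2);
      off = 4 + (((size_t)r[0] << 8 | r[1]) % (FRAME - 4 - msg_len + 1));
      out[0] = (unsigned char)(off >> 8);    /* header: offset and length */
      out[1] = (unsigned char)(off & 0xff);
      out[2] = (unsigned char)(msg_len >> 8);
      out[3] = (unsigned char)(msg_len & 0xff);
      memcpy(out + off, msg, msg_len);
      for (i = 0; i < FRAME; i++)            /* the pad covers the header too */
          out[i] ^= pad[i];
  }

Since the header is itself under the pad, an interceptor flipping
ciphertext bits cannot tell which positions hold the message.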
--
__________ Lotus Artificial Life http://alife.co.uk/ [EMAIL PROTECTED]
|im |yler The Mandala Centre http://mandala.co.uk/ Goodbye cool world.
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Multiple encryptions
Reply-To: [EMAIL PROTECTED]
Date: Fri, 9 Jun 2000 16:06:50 GMT
AllanW <[EMAIL PROTECTED]> wrote:
: We have some encryption program E [which] is meant to take any
: plaintext (even non-printable data) and encrypt it, to
: transmit it safely to other sites. E identifies the
: encrypted data with a brief cleartext message at the
: start of its output, to allow the decryption engine to
: avoid trying to decrypt data that never was encrypted. [...]
: Secretly, we also have another encryption program D. It
: isn't public knowledge, but what we really do is take our
: data files and encrypt them with D. Then we take the D
: output and feed that into E. Programs D and E know
: nothing of each other; each is meant to be used as a
: stand-alone encryption engine. D also appends some
: cleartext at the beginning of its output, but of course
: E encrypts that [...]
: I've heard that this hypothetical case is a bad idea, and
: not just because of any false sense of security. Someone I
: respect tells me that the result is actually LESS secure
: than using either D or E alone.
You have known plaintext in both inputs to the respective
cyphers. If there's a partial known-plaintext attack against both cyphers,
the scheme might be weaker than either alone - if known plaintext were
otherwise not common.
If you don't add in the known plaintext, then this is an orthodox
cypher stack - which is likely to be stronger than either cypher alone.
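For illustration only -- the banners and function names here are
hypothetical, not taken from any real program -- the stack being
described looks like this, which shows where the known plaintext
comes from:

  #include <string.h>
  #include <stddef.h>

  extern size_t d_encrypt(const unsigned char *in, size_t len,
                          unsigned char *out);   /* the secret cipher */
  extern size_t e_encrypt(const unsigned char *in, size_t len,
                          unsigned char *out);   /* the public cipher */

  #define D_BANNER "---BEGIN D DATA---"          /* hypothetical banners */
  #define E_BANNER "---BEGIN E DATA---"

  size_t stack_encrypt(const unsigned char *msg, size_t len,
                       unsigned char *out)
  {
      static unsigned char tmp[65536];   /* assumes len is small enough */
      size_t n;

      /* Stage 1: D prepends its banner, then its ciphertext. */
      memcpy(tmp, D_BANNER, sizeof D_BANNER - 1);
      n = sizeof D_BANNER - 1
        + d_encrypt(msg, len, tmp + (sizeof D_BANNER - 1));

      /* Stage 2: E encrypts *everything* from stage 1 -- D's banner
         included, so that banner is known plaintext for E -- and then
         prepends its own banner in the clear. */
      memcpy(out, E_BANNER, sizeof E_BANNER - 1);
      return sizeof E_BANNER - 1
           + e_encrypt(tmp, n, out + (sizeof E_BANNER - 1));
  }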
: Suppose that D is a home-grown "security by obscurity"
: encryption engine that was never released to the public. [...]
This makes little difference - a pretty fundamental principle is to
assume that the attacker has access to the cyphermachine.
--
__________ Lotus Artificial Life http://alife.co.uk/ [EMAIL PROTECTED]
|im |yler The Mandala Centre http://mandala.co.uk/ Breast is best.
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Statistics of occurrences of prime number sequences in PRBG output
Date: Fri, 09 Jun 2000 18:44:44 +0200
John wrote:
> Mathematicians and computer scientists view formulas a bit
> differently. A mathematical formula can be translated into a
> computer program. Some computer programs can't always be
> translated into one simple mathematical formula.
I don't think that mathematicians are that far removed from
computer scientists in their notion of functions today. Many
mathematicians also study topics related to the Church-Turing
thesis, for example.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Some dumb questions
Date: Fri, 09 Jun 2000 18:44:54 +0200
Jim Gillogly wrote:
> Mok-Kong Shen wrote:
> >
> > Jim Gillogly wrote:
> >
> > > Mok-Kong Shen wrote:
> > > > from other viewpoints, e.g. operating expenses/difficulties. (To
> > > > avoid flames from other readers due to misunderstanding, let me
> > > > repeat that I don't 'recommend' or 'propose' using n-OTP with
> > > > frequency flattening as described above and that I am in fact not
> > > > even sympathetic to OTP as such.)
> > >
> > > Why, then, did you restart this discussion? Trying to help somebody
> > > out who was trying to breathe new life into the rotting corpse of
> > > a dead system seemed like a worthy goal, but wanking around with
> > > something <nobody> believes in seems like a waste of time. I'm out
> > > of this one.
> >
> > Pardon. Which discussion? The frequency distribution issue? Since
> > you in your previous post once again touched on the issue of
> > cracking based on frequencies, I thought that it was consequently
> > permissible to respond to that on my part. Or was that perhaps a sin
> > of mine?
>
> Not at all. The discussion is the one with the above title, i.e. the
> entire thread -- you initiated it apparently to find out ways to improve
> the security of N-time pads, but you've said here that you do not
> recommend or propose either N-time pads or 1-time pads. This leads
> me to wonder what the point was of this exercise -- was it simply to
> increase the volume in sci.crypt?
'Apparently' to you, but unfortunately that does not correspond to my
real intention. Let me tell you the story of what led me to initiate
the thread. I have long been interested in generating pseudo-random bit
sequences and therefore wanted to know the threshold of bias that
could be tolerable for crypto applications. That's why I asked the
first question in the original post. The other day, I read in the
library a new book about Venona, to which I later posted a pointer.
I was amazed that the NSA was capable of cracking a combination
of code book and OTP. I knew that an OTP shouldn't be reused.
But on reflection I came to the question of whether reuse could be that
fatal in an environment where everything except the reuse was
impeccable. The book was not precise about the code
breaking at all. On the other hand, there were some indications
that a number of errors other than the reuse probably also
contributed materially to the NSA's success. The book also mentioned
rumors that Venona was not actually cracked as such. The reading
of the book finally led me to pose to myself the question of the
crackability of a 2-OTP given only knowledge of the frequency
distribution of single characters and no other aids whatsoever. Since
I, with my humble knowledge, got nowhere with that question, I
decided to formulate the second question and post both to the
group, in the hope that some experts would help me obtain good
answers.
I hope you are now satisfied on this point. As to possible waste
of bandwidth, I am not sure I am that guilty compared to some
other threads I have read in the past which, in my personal subjective
view, were at least in part a waste of bandwidth to a much greater
extent. Anyway, my thread has remained strictly within the confines
of crypto.
> > Nonetheless, I like to point out it is a subjective matter whether
> > OTP is a 'dead system'. At least till recently there have been several
>
> The "dead system" is the N-time pad, not the 1-time pad.
But the 2-OTP is in my view an issue of significance to the 1-OTP,
because those employing the 1-OTP should properly comprehend
the dangers of the 2-OTP. So I don't see that there is much wrong in
discussing the 2-OTP, if that promotes better understanding of the
1-OTP, independent of whether the 2-OTP is dead or not (i.e. one
should in any case know the reasons leading to the death of the 2-OTP).
M. K. Shen
------------------------------
From: "Paul Pires" <[EMAIL PROTECTED]>
Subject: Re: DES question
Date: Fri, 9 Jun 2000 09:36:36 -0700
<Big snip>
Well, I guess low bandwidth and topic tracking are in the eye of the
beholder :-)
Seriously, I saw myself in there a few times, so I thank you. (I could be
paranoid and suspect that I actually inspired a few)
Simple rules:
Follow someone you have disagreed with and find something you can support
and commend. Then do so. Limiting your tool kit to negative conditioning is
silly.
Only post when you feel you can reach the person you are responding to.
Don't preach to the room. (a pomposity moderator)
Manners are not an attribute of a wimp. Courtesy and consideration are the
lubricants that allow civilizations to function.
Paul
------------------------------
From: John Myre <[EMAIL PROTECTED]>
Subject: Re: Multiple encryptions
Date: Fri, 09 Jun 2000 10:39:11 -0600
Guy Macon wrote:
> [An excellent analysis of composing ciphers]
Is this in the FAQ? It ought to be.
To the OP: the point about possibly getting a weaker system by
composing ciphers can be taken in two ways. In a practical sense,
it means you need to avoid the degenerate cases that Guy mentioned,
and things like them. This should not be too hard, but you can
see that inexperienced people could make mistakes.
In a theoretical sense, the statement about weakness is just a
statement about ignorance. A general composition of ciphers could
be stronger, or weaker, or the same, depending on details. See Guy's
post for examples. Also, it is often true that in a specific case,
even after competent and lengthy study, we just don't *know*. We may
have an intuition, which is probably right, but we can't *prove* it.
Finally, it should be pointed out that some crypto types express
annoyance at those who espouse doing something extra "because it
can't hurt, and could help." It depends on the application, of
course, but if one of the two ciphers really is strong (enough so
that your opponents can't break it at all), the extra steps don't
help security, and could compromise it (just through the extra
complexity they add to the system).
For example, say the extra cipher is fine, but the program that
uses it sometimes copies the plaintext out to the internet (oops!).
Of course that is a programming error, and has nothing to do with
cryptology, per se. But the fewer programs you write, the less
there is to go wrong. I must admit to feeling ambivalent on this
point. If *I* write it, it will work, but if someone else does,
I'm suspicious. Oh well, we all have our weak points; I guess
that's one of mine. :)
John M.
------------------------------
From: "TheGPFguy" <[EMAIL PROTECTED]>
Subject: Re: Brute forcing for Counterpane's Password Safe
Date: Fri, 09 Jun 2000 16:45:27 GMT
Joseph Smith <[EMAIL PROTECTED]> wrote in article
<DaQ%4.124$[EMAIL PROTECTED]>...
>
> Indeed Joe Smith is
> a pseudonym, though it is slightly less obvious than
> John Doe.
It *would* have been slightly less obvious, if you hadn't
misspelled it Joeseph.
No one misspells their own name.
(BTW, the thanks still remain for the auto-key Vigenere attack.)
------------------------------
From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Random IV Generation
Date: Fri, 09 Jun 2000 16:53:02 GMT
On 9 Jun 2000 11:25:40 GMT, in <[EMAIL PROTECTED]>,
in sci.crypt [EMAIL PROTECTED] (Mark Wooding) wrote:
>Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
>
>> I agree with your first sentence, but I think that it helps, if one
>> can keep the IV unknown to the opponent.
>
>For CBC and CFB it can only protect the first plaintext block. I don't
>think there's any point in keeping it secret.
There *is* a reason to keep an IV secret: With CBC an exposed IV
allows an opponent to change the recovered plaintext for the first
block at will.
Changing the IV changes the first block of recovered plaintext in the
normal XOR way, bit-for-bit. So if some sort of sequence number or
other selection is given in the first block, that selection could be
changed by intercepting the message and changing the IV.
Subsequent blocks cannot be changed because they depend upon actual
ciphertext. Further, any changes *after* the IV will be chained
through to the end of the message, where the final CBC block is often
used to check message integrity.
Whether or not changing the first block of plaintext would be a problem
would depend upon what the message puts there. Whether or not the
change would be detected would depend upon message integrity checks
beyond CBC. But from the cipher-design perspective, these are clear
potential weaknesses which must be understood and then handled
*somewhere*.
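As a concrete sketch (block_decrypt() stands in for any 8-byte block
cipher; nothing here is specific to a real library):

  #define BLK 8

  extern void block_decrypt(const unsigned char key[16],
                            const unsigned char in[BLK],
                            unsigned char out[BLK]);  /* assumed cipher */

  /* The receiver's view of the first block: P1 = D(C1) XOR IV. */
  void cbc_first_block(const unsigned char key[16],
                       const unsigned char iv[BLK],
                       const unsigned char c1[BLK],
                       unsigned char p1[BLK])
  {
      int i;

      block_decrypt(key, c1, p1);
      for (i = 0; i < BLK; i++)
          p1[i] ^= iv[i];
  }

  /* An interceptor who knows neither key nor plaintext turns the
     recovered first block P1 into P1 XOR delta simply by replacing
     the exposed IV with IV XOR delta. */
  void forge_iv(const unsigned char iv[BLK],
                const unsigned char delta[BLK],
                unsigned char iv_out[BLK])
  {
      int i;

      for (i = 0; i < BLK; i++)
          iv_out[i] = iv[i] ^ delta[i];
  }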
---
Terry Ritter [EMAIL PROTECTED] http://www.io.com/~ritter/
Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
------------------------------
Date: Fri, 09 Jun 2000 18:53:58 +0200
From: Runu Knips <[EMAIL PROTECTED]>
Subject: Re: Encoding 56 bit data ---HELP---
[EMAIL PROTECTED] wrote:
> In article <[EMAIL PROTECTED]>,
> tomstd <[EMAIL PROTECTED]> wrote:
> > In article <8hqpm1$r94$[EMAIL PROTECTED]>, dexMilano
> > <[EMAIL PROTECTED]> wrote:
> > >Is there some good algorithm encoding 7 bytes into 7 bytes using a
> > >master key?
>
> > May I ask why you want to encode only 7 bytes with only a 7 byte
> > key? It seems for most purposes a 7 byte key is too small...
>
> This brings to mind a question I have had for some time now; is there
> any point in using a key larger than the data to be encrypted? If there
> is danger in brute-force-searching the key, why not brute-force search
> the plain-text instead? :)
You're right. If you have only 56 bits of data, you could just as well
use a 56-bit OTP. But maybe that guy wants to encrypt 56 bits of data
at a time? Then a 96- or 128-bit key makes more sense.
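That degenerate case is just XOR -- a sketch, nothing cipher-specific:

  /* A 56-bit OTP: a 7-byte key XORed onto a 7-byte message. Every
     plaintext corresponds to exactly one key, so brute-forcing either
     the key or the plaintext yields nothing -- all 2^56 candidates
     are equally plausible. */
  void otp56(const unsigned char key[7],
             const unsigned char in[7],
             unsigned char out[7])
  {
      int i;

      for (i = 0; i < 7; i++)
          out[i] = in[i] ^ key[i];
  }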
------------------------------
From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: testing non linearity of arithmetic-logic combinations
Date: Fri, 09 Jun 2000 17:00:51 GMT
On Fri, 09 Jun 2000 12:20:11 +0200, in
<[EMAIL PROTECTED]>, in sci.crypt Mok-Kong Shen
<[EMAIL PROTECTED]> wrote:
>I have two additional dumb questions:
>
>1. Consider two values X and Y in computer words. A mapping
> that can't be expressed as X = a Y + b (mod 2^n), where
> n is the number of bits in a word, is non-linear, isn't it?
Well, that depends. It would be "nonlinear" with respect to that
particular linear equation.
Mathematically speaking, linearity requires *every* element of the
transformation to comply with the linear equation. But that is not a
particularly useful definition when investigating cryptographic
strength, where having even *some* elements which comply with a linear
equation might produce weakness.
Note that Boolean function nonlinearity is a *measure*, not a
*quality* to either have or not have.
> What is the (general) basis, i.e. principle, on which a
> quantitative 'measure' of non-linearity is built?
Boolean function nonlinearity is based on the Hamming distance to the
closest affine Boolean function; it is the number of bits which must
change in a sequence to become the closest affine Boolean sequence.
This can be computed by hand, with the only tricky part being the need
to perform the comparison to each possible affine Boolean function,
which is tedious. So, in practice, we use a Fast Walsh-Hadamard
Transform (FWT).
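For the curious, a small self-contained sketch of the computation in
C (the example function at the end is just an illustration):

  #include <stdio.h>
  #include <stdlib.h>

  /* In-place fast Walsh-Hadamard transform on 2^n signed entries. */
  static void fwt(long *t, size_t len)
  {
      size_t half, start, i;
      long a, b;

      for (half = 1; half < len; half <<= 1)
          for (start = 0; start < len; start += half << 1)
              for (i = start; i < start + half; i++) {
                  a = t[i];
                  b = t[i + half];
                  t[i]        = a + b;
                  t[i + half] = a - b;
              }
  }

  /* Nonlinearity of a Boolean function given as a truth table of
     len = 2^n values, each 0 or 1: the Hamming distance to the
     closest affine Boolean function. */
  static long nonlinearity(const unsigned char *f, size_t len)
  {
      long *t = malloc(len * sizeof *t);
      long max = 0;
      size_t i;

      if (t == NULL)
          return -1;
      for (i = 0; i < len; i++)
          t[i] = f[i] ? -1 : 1;        /* 0/1 truth table -> +1/-1 */
      fwt(t, len);
      for (i = 0; i < len; i++)
          if (labs(t[i]) > max)
              max = labs(t[i]);
      free(t);
      return ((long)len - max) / 2;    /* distance to nearest affine fn */
  }

  int main(void)
  {
      /* f(x0,x1,x2) = (x0 AND x1) XOR x2; its nonlinearity is 2,
         the maximum possible for 3 variables. */
      unsigned char f[8] = { 0, 0, 0, 1, 1, 1, 1, 0 };

      printf("nonlinearity = %ld\n", nonlinearity(f, 8));
      return 0;
  }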
I don't know what more I can do than to refer once again to the
several pages I have on Boolean function nonlinearity, the list of
references they contain, or my Glossary. See, for example:
http://www.io.com/~ritter/JAVASCRP/NONLMEAS.HTM
or do a search using the panel at the top of my main page.
>2. For a given n, there can be more than one n*n Latin square.
> Are there quality differences among these as far as
> crypto is concerned? If yes, how does one deal with that?
We deal with "quality differences" the same way we deal with anything
that has a range of values: we examine the distribution and then may
find that a random selection is useful.
For example, I advocate the use of "keyed" or "randomly selected"
substitution tables, even with the knowledge that some are better than
others (indeed, at least one of those tables is actually useless). I
find the situation acceptable because I have investigated the
distribution of nonlinearity values in randomly constructed tables
(see, for example
http://www.io.com/~ritter/ARTS/MIXNONLI.HTM
), and so can estimate the probability of producing a linear table as
being very, very low -- far lower, for example, than the probability
of choosing the correct key at random -- provided the tables are of
reasonable size (8 bits or more). And, while it would be unreasonable
to expect the maximum possible nonlinearity in a random table, it is
very reasonable to expect a substantial amount of nonlinearity.
Similarly, there is also a nonlinearity distribution for Latin squares
of any particular size. I have done some work on this and have found
and presented a "checkerboard" Latin square construction method which
gives a nonlinearity distribution similar to that for
randomly-constructed tables. See:
http://www.io.com/~ritter/ARTS/PRACTLAT.HTM
The construction method thus supports the rapid construction of keyed
Latin squares with the nonlinearity statistics of randomly-constructed
tables. This was actually an intermediate result on the way to
presenting a rapid construction for keyed *orthogonal pairs* of Latin
squares (see:
http://www.io.com/~ritter/ARTS/NONLBBM.HTM
), which can be used in my patented Balanced Block Mixing (see
http://www.io.com/~ritter/#BBMTech
) approach to block cipher construction. This experimental evidence
supports the use of "checkerboard"-constructed orthogonal Latin square
pairs to produce the same nonlinearity distribution as a block cipher
of twice the block size. Extrapolating, this means that small tables
can be mixed to produce the same nonlinearity results as an
arbitrarily large block.
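For reference, a small sketch of the two defining properties used
above -- one occurrence of each symbol per row and column, and every
ordered pair exactly once when two squares are superimposed. This
verifies the properties only; it is not the checkerboard construction
itself:

  #include <stdio.h>

  #define N 4

  /* Check that an N*N table over symbols 0..N-1 is a Latin square. */
  static int latin(const unsigned char sq[N][N])
  {
      int i, j;

      for (i = 0; i < N; i++) {
          int row[N] = {0}, col[N] = {0};
          for (j = 0; j < N; j++)
              if (sq[i][j] >= N || sq[j][i] >= N
                  || row[sq[i][j]]++ || col[sq[j][i]]++)
                  return 0;
      }
      return 1;
  }

  /* Check that superimposing two squares yields all N*N pairs. */
  static int orthogonal(const unsigned char a[N][N],
                        const unsigned char b[N][N])
  {
      int seen[N][N] = {{0}};
      int i, j;

      for (i = 0; i < N; i++)
          for (j = 0; j < N; j++)
              if (seen[a[i][j]][b[i][j]]++)
                  return 0;
      return 1;
  }

  int main(void)
  {
      /* A classic orthogonal pair of order-4 Latin squares. */
      static const unsigned char a[N][N] = {
          {0,1,2,3}, {1,0,3,2}, {2,3,0,1}, {3,2,1,0}
      };
      static const unsigned char b[N][N] = {
          {0,1,2,3}, {2,3,0,1}, {3,2,1,0}, {1,0,3,2}
      };

      printf("latin(a)=%d latin(b)=%d orthogonal=%d\n",
             latin(a), latin(b), orthogonal(a, b));
      return 0;
  }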
---
Terry Ritter [EMAIL PROTECTED] http://www.io.com/~ritter/
Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************