Cryptography-Digest Digest #868, Volume #10 Sat, 8 Jan 00 11:13:00 EST
Contents:
Re: frequency analysis with homophones (Mok-Kong Shen)
Re: I want to know if this works? (Mok-Kong Shen)
Re: Intel 810 chipset Random Number Generator (Guy Macon)
Re: AES3 Conference: deadline for papers *18*/01/2000 (David Crick)
Re: OLD RLE TO NEW BIJECTIVE RLE (John Savard)
Re: Truly random bitstream (Michael)
modified game of life encryption, to be analyzed ([EMAIL PROTECTED])
Re: Questions about message digest functions (Tim Tyler)
Re: OLD RLE TO NEW BIJECTIVE RLE (Tim Tyler)
Re: is signing a signature with RSA risky? (Tim Tyler)
Re: Followup: Help Needed For Science Research Project (Paul Crowley)
Re: is signing a signature with RSA risky? (Tim Tyler)
----------------------------------------------------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: frequency analysis with homophones
Date: Sat, 08 Jan 2000 12:59:20 +0100
r.e.s. wrote:
>
> : The fundamental problem I can see is the non-reversibility of
> : implication. That is, if one uses J homophones for each plaintext
> : character, then one has the frequency distribution you obtained.
> : But the implication in the reverse direction is not logically sound.
> : (Equivalence has to be proved in logic.)
>
> The fundamental problem, I'm afraid, is that you're looking at what
> the data does or does not "logically imply", while the issue is one
> of plausible inference in the presence of uncertainty -- not one of
> strict logical implication. That is why the problem is posed as a
> statistical one in the first place.
Unfortunately, in cryptanalysis, as far as I am aware, one often needs
quite a lot of guesswork, speculation and intuition, as well as good luck.
Rational mathematical arguments alone frequently don't get one very
far. In my humble opinion (from what I have so far understood of
your description), the 'statistical significance' of the sort that
you found carries such 'uncertainty' (on the scale of my personal
'subjective' measure) that I wouldn't straightaway put much energy
into following the direction 'indicated' by the computed results.
(That is, I consider the chance of the result being a mere
coincidence to be pretty high.) Well, this is just my personal
standpoint, probably biased by my limited knowledge and experience.
You and others may well take the very opposite standpoint. Obviously,
however, this is analogous to a question of taste and cannot be
settled by mathematics or the like; otherwise a follow-up from someone
would already have put an immediate end to the issue.
>
> Nobody knows the exact distribution, nor does the statistical
> approach attempt to "deduce" exact results. The problem is one of
> plausible *induction*, not deduction.
Right. But exactly here lies the source of the difficulty. What is
plausible to me is not necessarily plausible to you, and vice versa.
Look at what the AI people have done on induction and fuzzy reasoning.
They haven't got very far, in my humble view. In any case, their
current tools wouldn't help you much in the present case, in my
conviction.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: I want to know if this works?
Date: Sat, 08 Jan 2000 13:09:03 +0100
Jeff Lockwood wrote:
>
> /*******************************************
> rat data stream scrambler:
>
> try it out.
It is always preferable to let other people discuss your design
principles and rationale, rather than have them discuss your C code or
even actually run it, if you really want to know whether what you
designed works.
M. K. Shen
------------------------------
From: [EMAIL PROTECTED] (Guy Macon)
Subject: Re: Intel 810 chipset Random Number Generator
Date: 08 Jan 2000 07:36:24 EST
http://developer.intel.com/design/chipsets/rng/docs.htm
http://developer.intel.com/design/chipsets/datashts/290658.htm
http://www.intel.com.ec/design/chipsets/rng/faq.htm
http://www.rsasecurity.com/products/bsafe/intel/
http://www.rsasecurity.com/products/bsafe/intel/rsa_rng_nontech.pdf
http://www.rsasecurity.com/products/bsafe/intel/rsa_rng_tech.pdf
------------------------------
From: David Crick <[EMAIL PROTECTED]>
Subject: Re: AES3 Conference: deadline for papers *18*/01/2000
Date: Sat, 08 Jan 2000 12:43:12 +0000
David Crick wrote:
>
> A reminder:
>
> "Paper submission deadline: January 15, 2000"
>
> See: http://csrc.nist.gov/encryption/aes/round2/conf3/aes3conf.htm
> and http://csrc.nist.gov/encryption/aes/round2/conf3/aes3cfp.htm
> and http://csrc.nist.gov/encryption/aes/aes_home.htm
from the third link above:
"Due to a Federal Government holiday on Monday, January 17, paper
submissions for AES3 may be submitted to NIST up until 9:00 am
Eastern Standard Time on Tuesday, January 18."
--
+-------------------------------------------------------------------+
| David Crick [EMAIL PROTECTED] http://members.tripod.com/vidcad/ |
| Damon Hill WC96 Tribute: http://www.geocities.com/MotorCity/4236/ |
| M. Brundle Quotes: http://members.tripod.com/~vidcad/martin_b.htm |
| ICQ#: 46605825 PGP: RSA 0x22D5C7A9 DH-DSS 0xBE63D7C7 0x87C46DE1 |
+-------------------------------------------------------------------+
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: OLD RLE TO NEW BIJECTIVE RLE
Date: Sat, 08 Jan 2000 12:35:04 GMT
On Sat, 08 Jan 2000 04:01:14 GMT, Tom St Denis <[EMAIL PROTECTED]>
wrote:
>Actually the point of encryption is to eliminate bias. Compression is
>supposed to simply remove redundancy. So your point is moot.
>Let me reiterate:
>COMPRESSION = MAKE SMALLER
>ENCRYPTION = MAKE RANDOM
Ah, but compression _works_ by making things more random - it couldn't
make things smaller in any other way. Thus, modifying a
compression scheme so that this randomness is more evenly spread, or
whatever, so as to make ciphertext-only attacks on subsequent
encryption harder, is a perfectly legitimate endeavour.
John Savard (teneerf <-)
http://www.ecn.ab.ca/~jsavard/index.html
------------------------------
From: [EMAIL PROTECTED] (Michael)
Subject: Re: Truly random bitstream
Date: 08 Jan 2000 12:52:26 GMT
>The time between decay events is NOT a uniform random variable. It
>follows an exponential distribution (exponential waiting times). Now if
>you want to use this as a source for a UNIFORM (Bernoulli) bit stream,
>one must introduce a transformation. There are then two sources of
>possible error [maybe more?]:
>
>(1) We cannot measure the time between events sufficiently accurately.
>(2) We cannot compute the non-linear transform with "true" (i.e.
>infinite) precision.
If the time from pulse 1 to 2 is greater than the time from pulse 2 to 3,
output 1; otherwise 0. Would there be a very slight bias toward 0, because
the pulses are on average decreasing in frequency? If so, couldn't you take
a second set of decay bits (from a different source or something), invert
them (to give a slight bias toward 1), then XOR the two strings of bits?
Is there a way to improve upon this?
I guess something like this could never be realized because of the error
in the equipment, etc., which I neglected to consider in my previous post.
Michael
[EMAIL PROTECTED]
"By necessity, by proclivity, and by delight, we all quote." (Emerson)
------------------------------
From: [EMAIL PROTECTED]
Subject: modified game of life encryption, to be analyzed
Date: Sat, 08 Jan 2000 13:49:15 GMT
===== part one ======
Background:
1) The Game Of Life.
This 'game', as described in Scientific American, April 1970, was invented
by John Conway while exploring the idea of the universal constructor,
which was first studied by the American mathematician John von Neumann in
the 1940s. From this came the 'Game Of Life', the rules of which can
be found in many places on the net, including http://www.sciam.com/
(Scientific American) by searching for John Conway.
2) One Way Hash Function.
ref: cryptography-faq/part07, 7.1. What is a one-way hash function?
Adaptations:
1) The Game Of Life.
When the rules of the 'game' are modified to a simpler set of
rules, many different patterns can be generated.
1a) Odd/even rule:
When using a static grid (no buffer swapping), one pixel anywhere
within the grid will produce perfect Sierpinski triangles when each
pixel is set or reset following the single rule:
[R1] A pixel is on when there is an odd number of on pixels immediately
surrounding it.
1b) When using two exchangeable buffers and the same rule above, more
complicated and usually symmetrical patterns can be generated from
simple groups of pixels or from interference patterns outside the grid
area; pixels outside the grid were previously considered to return a
zero value when counting the neighbours of a pixel on the border.
1c/2a) Setting the state of a pixel in a destination buffer, depending
on [R1] applied to the position-correspondent pixel in a source buffer,
is an extreme version of a one-way hash function. In fact, it could be
said to be a mere checksum.
3) A block of data can be taken from any source, be it random,
plaintext or other organised data, and laid bit for bit as pixel
information in a source buffer. After each source pixel has been
analysed by [R1] and the results stored in the destination buffer,
almost 99.9% of the original data is lost. There is no method to
reverse the process.
Beginning with the destination buffer as source, a second iteration
destroys more of what little data coherence there was.
Iteration after iteration deforms the original data into
complete chaos.
QBASIC example:
xo1=0
xo2=170
for cnt=1 to 100
for y=0 to 160
for x=0 to 160
c=0
' count the on pixels among the eight immediate neighbours
for i=-1 to 1
for j=-1 to 1
if i<>0 or j<>0 then
if point(x+xo1+i,y+j)=1 then
c=c+1
end if
end if
next j
next i
' [R1]: the destination pixel is on iff the neighbour count is odd
pset(x+xo2,y),c and 1
next x
next y
swap xo1,xo2
next cnt
point(x,y) returns the color value of pixel x,y
pset(x,y),n sets the pixel x,y color n
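For readers without QBASIC to hand, here is a minimal Python sketch of one application of [R1] over a rectangular grid with a zero boundary. This is my own translation of the idea above, not of the listing itself; the grid layout and function name are illustrative:

```python
def step(grid):
    """One application of [R1] with a zero boundary: a destination
    cell is on exactly when an odd number of the eight cells
    immediately surrounding the corresponding source cell are on."""
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            count = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue  # the cell itself is not counted
                    if 0 <= y + dy < h and 0 <= x + dx < w:
                        count += grid[y + dy][x + dx]
            out[y][x] = count & 1  # odd -> on, even -> off
    return out
```

Repeated calls (grid = step(grid)) correspond to the two-buffer version described in 1b), since each pass reads only the previous generation.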
====== part two =========
So What?:
1) Using a grid of 256 by 16 pixels, processing any image or 'dumped'
data using [R1] through 511 iterations will miraculously reproduce a
bit-for-bit perfect 'copy' of the original data. While watching a
graphically recognisable black and white image being processed, such
as a bit image of text, it's possible to see rotated, inverted,
mirrored and intertwined sections of the original image.
2) If the source image is less than 25% of the total grid area, the
image will mutate and slowly progress towards the empty end of the
grid. At the 255th or 256th iteration the 75% blank area will reappear
with a seemingly random collection of pixels at the other end. All the
information from the original image has been rearranged and placed
into an x y box of exactly the same size. (This appears to be the most
random configuration generated when the entire grid is used for
data/image storage.) Further iterations, to the completion of the
process, will result in the original image.
3) Introducing an interference pattern outside of the 256x16 grid will
introduce unpredictable permutations in the data/image processing. If
the interference pattern is removed at any time before 511 iterations
have been achieved, the final image will be anywhere from quite
degenerate to completely random. When the interference pattern is
present for all 511 iterations, the data/image is returned to its
original state.
Application:
3a) By stopping the process with an interference pattern at the 255th
iteration and saving the resulting data/image, an indecipherable block
of seemingly random bytes results.
Beginning the process again from the first iteration without the
correct interference pattern produces nothing but more random data.
When the process is begun again with the interference pattern in
place, the data need only be processed 256 times before returning to
its original form.
===== Benefits and Misc topics. ========
1) Interference patterns have now become keys. There are four sides
of the grid area which can be used for an interference pattern. Two
sides have 16 bits and the other two have 256 bits. This gives the
possibility of using one simple 16-bit key, open to brute-force
attack; 32 bits is better; one of the 256-bit sides, or both together
for a 512-bit key, is by far more secure. Although untested by myself,
using the outside boundaries of the destination copy buffer when it
becomes the source allows a second set of 2x16- and 2x256-bit
interference patterns. In total there could be a split sequence of
1088 bits. (My calculator tells me this number is an error and
JavaScript claims infinity...)
2) There is no difference in the time taken for a 16 bit encryption
key and a massive 1088 bit key.
3) A grid of 8x64 takes 256 iterations to complete a cycle. I guess
the next level with this same ability may be a grid of 32x1024 and
take 2048 iterations. But I am sceptical of such a low number of
iterations, and I lack both a large enough screen for image testing and
the time, due to a 100 MHz computer.
4) This system is incredibly simple. !?
===== As An Encryption system. ======
1) I find it a bit difficult to qualify exactly the type of encryption
system this is.
1a) "A product cipher is a block cipher that iterates several weak
operations such as substitution, transposition, modular
addition/multiplication, and linear transformation."
[5.1] FAQ
1b) "A 'block cipher' just means a cipher that encrypts a block of
data---8 bytes, say---all at once, then goes on to the next block."
[5.1] FAQ
This process iterates 4096 transposition operations 511 times. Is one
iteration of this transposition process a weak operation? It is an
extended one-way hash, which cannot be weak.
Can I claim a new method?
2) "Nobody knows how to prove mathematically that a product cipher is
completely secure."
[5.2] FAQ
Lacking knowledge of this mathematics I cannot compute this method's
security. It would seem, though, that being a grid of known x,y and a
set number of iterations, some concrete value of security could be
calculated.
3) A few friends who have, from my perspective, an advanced interest
in cryptography are currently analysing my encryption system. I have
not heard from them for longer than I would have expected. Even taking
into account that there are some very pretty patterns created, I had
expected them to have either found the 'flaw' or given up and rated
this as secure enough for them...
3a) Using a 16-bit interference pattern, I have discovered a few
occasions where the interference 'key' pattern is embedded within the
data at the 255th iteration, but only when less than 50% of the original
grid area contains data. Since a further 256 iterations are required
for decryption, by saving the encrypted data at the 256th iteration
and decoding for an initial 255 iterations, the embedded interference
key pattern is overwritten once with a one-way hash. It's lost. For a
256-bit key, someone with a better language than QBASIC and a faster
than 100 MHz computer would have to implement a testing program to
detect whether other key sizes create this phenomenon.
=== Enhancements =====
1) It is possible to change the interference pattern halfway through
the encryption cycle, at about the 128th iteration, for example.
Decoding would require beginning with the second key and replacing it
with the first key at about the 303rd iteration.
=== Last Word =======
Finally, I can say that, with the exception of the last point 3a), this
encryption system is more secure than my ability to 'break' it. As the
FAQ of 94/06/07 would have it: encryption routine E and decryption
routine Ei-1 (where i is the number of iterations). Is this then, with
data D, Ei-1(E(D))=D ?
D10n...
[o] 8 Jan 2000
[EMAIL PROTECTED]
http://yoboseyo42.virtualave.net
fax: +61 089-495-1101 (mon-fri)
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Questions about message digest functions
Reply-To: [EMAIL PROTECTED]
Date: Sat, 8 Jan 2000 15:29:11 GMT
Scott Fluhrer <[EMAIL PROTECTED]> wrote:
: "Matt Timmermans" <[EMAIL PROTECTED]> wrote:
:>I missed the start of this thread, but as far as I know, there are no known
:>one-way permutations that can be shown to be permutations [...]
:
: Doing modular exponentions can be shown to be a permutation without giving
: away any secret material. That is, you can find particular values of g and p
: such that:
: f(x) = (g**x) mod p
: can be demonstrated to be a permutation from (1..p-1) to (1..p-1), without
: there being any known way to compute the inverse in a reasonable period of
: time.
While AFAICS this example is good, it has the /slight/ problem, in the
context under discussion, that it's not normally possible for p-1 to be
set to the size of (say) an n-bit hash - except when 2^n + 1 happens to
be prime.
However, as a demonstration of the existence of one-way bijective
functions, it seems fine.
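For what it's worth, Scott's construction is easy to check at toy sizes. A small Python sketch (the prime and generator below are illustrative only; real parameters would be hundreds of digits long):

```python
def discrete_exp_perm(g, p):
    """List f(x) = g**x mod p for x = 1 .. p-1.  When g is a
    primitive root modulo the prime p, f permutes {1, ..., p-1},
    while inverting f is the discrete logarithm problem."""
    return [pow(g, x, p) for x in range(1, p)]
```

For example, discrete_exp_perm(3, 7) hits each of 1..6 exactly once, since 3 is a primitive root modulo 7.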
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ [EMAIL PROTECTED]
When in doubt, do as doubters do.
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: OLD RLE TO NEW BIJECTIVE RLE
Reply-To: [EMAIL PROTECTED]
Date: Sat, 8 Jan 2000 15:35:19 GMT
Tom St Denis <[EMAIL PROTECTED]> wrote:
: [EMAIL PROTECTED] wrote:
:> John Savard <[EMAIL PROTECTED]> wrote:
:> : (For myself, while I too think removing certain redundancies from
:> : compression have their uses, I quarrel with any attempt to emphasize
:> : one-to-one purity at the expense of bias. [...]
:>
:> Bias in the resulting compressed file is certainly important.
:>
:> Which is /more/ important depends partly on the relative sizes of the
:> bias caused by lack of elimination of redundancies in the plaintext, and
:> the bias introduced by a lack of 1-1 compression.
: Actually the point of encryption is to eliminate bias.
No. The point of encryption is to make recovering the plaintext difficult
given the cyphertext. Encryption schemes that produce highly non-random
cyphertext certainly exist - and even have concrete applications.
: Compression is supposed to simply remove redundancy. So your point is moot.
Removing redundancy has the side effect of reducing bias. So my point was
correct.
: Let me reiterate:
: COMPRESSION = MAKE SMALLER
: ENCRYPTION = MAKE RANDOM
ENCRYPTION != MAKE RANDOM
ENCRYPTION = MAKE DIFFICULT TO RECOVER MESSAGE
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ [EMAIL PROTECTED]
The hidden flaw never stays that way forever.
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: is signing a signature with RSA risky?
Reply-To: [EMAIL PROTECTED]
Date: Sat, 8 Jan 2000 15:46:11 GMT
Anton Stiglic <[EMAIL PROTECTED]> wrote:
[Tim says signing the message inside all encryption has problems]
: About your argument: firstly, the signature keys and encryption keys
: should be totally independent (this is obvious, since the private parts
: belong to two different entities). So the failure of a signature
: verification doesn't tell you anything about the encrypted message or the
: key used for encryption.
No. If you're using a signing method with a publicly available key, and
you have the information that the message has been signed by a particular
party (who has perhaps been identified by traffic analysis), then
failure of signature verification allows you to systematically reject
keys used in the main encryption, without performing any other sort of
frequency analysis on the message at all.
If the message's signature verification fails, you /know/ you are not
using the correct key to decrypt (assuming the message is not corrupt).
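A toy Python sketch of this attack. A single-byte XOR 'cipher' stands in for the real encryption, and a plain SHA-1 digest of the message stands in for verifying a publicly checkable signature; both are assumptions for illustration, but the principle - the signature acts as a correctness oracle for candidate keys - is the same:

```python
import hashlib


def toy_decrypt(ciphertext, key):
    """Single-byte XOR 'cipher' - a stand-in for the real cipher."""
    return bytes(b ^ key for b in ciphertext)


def reject_keys(ciphertext, known_signature):
    """Trial-decrypt under every candidate key and keep only those
    keys whose candidate plaintext passes signature verification.
    Every failing key is rejected with certainty, with no frequency
    analysis of the plaintext needed at all."""
    return [k for k in range(256)
            if hashlib.sha1(toy_decrypt(ciphertext, k)).digest()
            == known_signature]
```

With a realistic cipher the keyspace is far too large to enumerate, but the point stands: the oracle lets an attacker discard wrong keys outright instead of relying on statistical tests.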
: Secondly, you have to decrypt first, and then verify the signature of the
: message (in a private fashion, of course).
Yes. Decrypt, then verify.
: Publicly verifying a message doesn't mean you need to publicly
: diffuse the message; it just means that you get a public key to
: verify it privately. If you have to show it to someone (in order to
: prove something), you have to show the cleartext (showing just the
: ciphertext won't prove anything).
I don't see what you are getting at here. I don't recall mentioning
diffusion. I don't see what showing the public key (for the signature, I
presume?) to someone has to do with anything, nor how this would recover
the cleartext.
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ [EMAIL PROTECTED]
All complex things start life being simple.
------------------------------
From: Paul Crowley <[EMAIL PROTECTED]>
Subject: Re: Followup: Help Needed For Science Research Project
Date: 8 Jan 2000 13:27:30 -0000
Pelle Evensen <[EMAIL PROTECTED]> writes:
> Bob Jenkins has done some analysis of RC4 and there is a slight bias;
> http://www.burtleburtle.net/bob/rand/isaac.html#RC4code
Ah, I lost Bob's pages when they left geocities! I'll update my
links. I've run similar tests on full-sized RC4, the results are
here:
http://www.hedonism.demon.co.uk/paul/rc4/
> I'd suggest that you try something other than RC4.
Panama and ISAAC spring to mind, but I can't find links for those just
now. However RC4 has seen more analysis than either.
--
__
\/ o\ [EMAIL PROTECTED] Got a Linux strategy? \ /
/\__/ Paul Crowley http://www.hedonism.demon.co.uk/paul/ /~\
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: is signing a signature with RSA risky?
Reply-To: [EMAIL PROTECTED]
Date: Sat, 8 Jan 2000 15:53:21 GMT
Pascal Scheffers <[EMAIL PROTECTED]> wrote:
: Basically, can Alice generate another message and, by modifying her
: private exponent, still get the same signature?
I think the answer to this is "not very easily".
One desirable property of signatures is that it's hard to find a message
corresponding to a given signature. This is what Alice is trying to do
in your example above.
I don't really understand what Alice expects to gain by doing this.
: What I intend to do in practice is:
: Alice has a message for Bob. She signs it.
: Alice sends the signature of the message to Trent, who then adds a
: timestamp and gives the timestamp and his signature over it back to
: alice.
: Alice sends the message with her signature and Trent's timestamp to
: Bob.
This all sounds fine to me. Trent has to be well known for not placing
false timestamps on messages and then signing them, of course ;-)
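The flow above can be sketched in a few lines of Python. HMAC-SHA256 stands in for each party's real public-key signature (the secret keys model their private signing keys), and all keys, messages and the timestamp are made-up illustrative values:

```python
import hashlib
import hmac


def sign(key, data):
    """HMAC-SHA256 as a stand-in for a real signature scheme."""
    return hmac.new(key, data, hashlib.sha256).digest()


alice_key, trent_key = b"alice-secret", b"trent-secret"
message = b"pay Bob 5 pounds"

# 1. Alice signs her message.
alice_sig = sign(alice_key, message)

# 2. Trent timestamps Alice's signature (not the message, which he
#    never needs to see) and signs the pair.
timestamp = b"2000-01-08T15:53:21Z"
trent_sig = sign(trent_key, alice_sig + timestamp)

# 3. Bob receives (message, alice_sig, timestamp, trent_sig) and
#    checks both layers.
assert hmac.compare_digest(alice_sig, sign(alice_key, message))
assert hmac.compare_digest(
    trent_sig, sign(trent_key, alice_sig + timestamp))
```

Note that Trent only ever sees Alice's signature, so the timestamp binds the time to the message without revealing its contents to him.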
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ [EMAIL PROTECTED]
Those who can't write, write help files.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************