Cryptography-Digest Digest #556, Volume #9 Mon, 17 May 99 05:13:04 EDT
Contents:
Re: Mandlebrot transform ([EMAIL PROTECTED])
Computer-generated random numbers (was Re: Magic) ("Riboflavin")
Re: Security ([EMAIL PROTECTED])
Can Somebody Verify My DES execution? ("Zulkifli Hamzah")
Re: Security (David A Molnar)
Re: On Contextual Strength (wtshaw)
Re: On Contextual Strength ([EMAIL PROTECTED])
Re: Security (wtshaw)
Computer-generated random numbers (was Re: Magic) (Frank Palmer)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Mandlebrot transform
Date: Mon, 17 May 1999 01:16:16 GMT
> The Mandelbrot transform transforms a complex number a + bi into
> c = (a + bi)^2 + (d + ei)
>
> The number of iterations repeated when c is transformed by the same
> procedure without having the transformed point "leave the room" is the
> height of the fractal at the point a + bi.
This 'c' value is the output (usually the color), right?
Since you can zoom in on any point of a Mandelbrot set to form another
Mandelbrot (MB for short), this transformation is discrete. If you map a
2-D plane (dimensions X by Y) over an MB at zoom z = 1, you will have
X * Y points to use; you could also vary z so that X and Y are not
known. The complexity would be limited only by the range of z and your
other calculations.
Also, the possible outputs 'c' must be equally probable. I don't think
that's the case in an MB, which may make it less useful.
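The escape-time iteration quoted above can be sketched as follows. This is a minimal illustration, not anyone's posted code; the function name, the iteration cap, and the bailout radius of 2 (the conventional escape test for the Mandelbrot set) are choices of this sketch.

```python
def escape_time(c, max_iter=1000, bailout=2.0):
    """Repeatedly apply z -> z^2 + c, starting from z = 0, and count
    iterations until |z| exceeds `bailout` (the point "leaves the room")
    or the cap is reached.  The count is the "height" at point c."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > bailout:
            return n
        z = z * z + c
    return max_iter
```

Points inside the set (such as c = 0 or c = -1, whose orbit cycles between -1 and 0) never escape and return the cap, while points far outside escape after one or two steps, which is exactly the unevenness of outputs noted above.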
Tom
--== Sent via Deja.com http://www.deja.com/ ==--
---Share what you know. Learn what you don't.---
------------------------------
From: "Riboflavin" <[EMAIL PROTECTED]>
Crossposted-To: rec.arts.sf.science
Subject: Computer-generated random numbers (was Re: Magic)
Date: Sun, 16 May 1999 21:10:56 -0400
[crossposted to sci.crypt]
Frank Palmer wrote in message <7hmo5v$qeq$[EMAIL PROTECTED]>...
>: [EMAIL PROTECTED] (Frank Palmer) writes:
>: > Start with text, rotate each byte, XOR with the next byte;
>: > repeat nine times. Result, random bytes.
>: >
>"Random byte" simply means that the preceding bytes provide no way of
>guessing which byte will come next. Each byte provides 8 bits of
>information.
>
>All the text need do is contain information for this system to work
>after some number of repetitions. English text generally contains 1 bit
>of information per letter. I used 10 letters to provide some margin.
>It wouldn't work for pages of the letter, A; it would work for normal
>English text.
No, this would not produce random numbers (unless you perform your XORs
an infinite number of times). The numbers might look random at a casual
glance, but they would have certain patterns that would make any crypto
based on them much easier to crack. I have crossposted this to
sci.crypt; I think one of the experts there can do a better job of
explaining why this is so than I can.
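The scheme as quoted leaves the rotation amount and the end-of-text handling unspecified, so the following is a sketch under assumed choices (rotate left by one bit, wrap around at the last byte; names are mine). It also makes the objection concrete: rotation and XOR are deterministic and linear, so a degenerate input like a page of the letter A stays degenerate no matter how many passes are made.

```python
def rotl8(b, r=1):
    """Rotate an 8-bit value left by r bits."""
    return ((b << r) | (b >> (8 - r))) & 0xFF

def scramble(data, rounds=9):
    """One reading of the scheme quoted above: rotate each byte left by
    one bit, XOR it with the following byte (wrapping at the end), and
    repeat the whole pass `rounds` times."""
    out = bytearray(data)
    n = len(out)
    for _ in range(rounds):
        out = bytearray(rotl8(out[i]) ^ out[(i + 1) % n] for i in range(n))
    return bytes(out)
```

With input b"AAAAAAAA" every byte is transformed identically in every pass, so the output is still a single repeated byte value: the transform can only redistribute the entropy the text brings in, never add to it.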
--
Kevin Allegood [EMAIL PROTECTED]
Remove the pants from my email address to reply
"If you can put this postmodernist gibberish to music, I'll dance to it."
-Cleve on some leftist blathering
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Security
Date: Mon, 17 May 1999 01:10:17 GMT
> Of course, that would need to be proved. Such proofs are quite rare.
>
I conjecture that such proofs are impossible, since there can exist an
algorithm to solve an AES (which would be??) message faster than brute
force. However, the actual effort/plaintext/memory required will most
likely make such attacks infeasible or useless (you could just steal
the information physically).
Large keys used properly are good for avoiding brute force, but the
underlying algorithm must be secure.
Tom
--
PGP public keys. SPARE key is for daily work, WORK key is for
published work. The spare is at
'http://members.tripod.com/~tomstdenis/key_s.pgp'. Work key is at
'http://members.tripod.com/~tomstdenis/key.pgp'. Try SPARE first!
------------------------------
From: "Zulkifli Hamzah" <[EMAIL PROTECTED]>
Subject: Can Somebody Verify My DES execution?
Date: Mon, 17 May 1999 10:11:09 +0800
Reply-To: "Zulkifli Hamzah" <[EMAIL PROTECTED]>
Hi Forum,
I'm a newbie to encryption software development (and new to this group
too), but ...
I've made a simple program implementing standard DES as described in
Cryptography and Network Security by William Stallings, using exactly
the same tables and s-boxes.
Here is some output from my program (spaced manually for readability),
using the same key input with 3 plaintext inputs, each differing by 1
bit. (The book gives some output references for SDES, but not much on
DES.)
If anybody out there has coded the DES algorithm and verified it to work
successfully, could you help verify mine? - Thank you in advance.
ZulH.
Key:
0000001 1001011 0100100 1100010 0011100 0011000 0011100 0110010
===================
PlainText:
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Output:
10101011 11010000 10101110 01100111 11111010 10111101 10101010 10000010
===================
PlainText:
10000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Output:
11111111 11110101 11101111 11011111 11110101 10111111 01010101 01010110
===================
PlainText:
01000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Output:
01000011 11011111 00111100 01100111 11111010 11111101 10101010 10111111
===================
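One practical hurdle in comparing the output above against a standard DES library: the key is given as eight 7-bit groups (56 bits), but most implementations expect an 8-byte key in which the low bit of each byte is an odd-parity bit (per the DES standard). Assuming the groups above are the 56 pre-parity key bits, a sketch of the conversion (function name mine):

```python
def pack_des_key(bits56):
    """Expand a 56-bit DES key, given as a string of 56 '0'/'1'
    characters (spaces allowed), into the 8-byte form most DES
    libraries expect: each 7-bit group gets an odd-parity bit
    appended as the byte's low bit."""
    bits = bits56.replace(" ", "")
    assert len(bits) == 56, "expected 56 key bits"
    key = bytearray()
    for i in range(0, 56, 7):
        group = bits[i:i + 7]
        seven = int(group, 2)
        parity = 1 - (group.count("1") & 1)  # make the byte's popcount odd
        key.append((seven << 1) | parity)
    return bytes(key)
```

Feeding the resulting 8-byte key and the all-zero plaintext block to any known-good DES implementation would let a reader check the first ciphertext directly.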
------------------------------
From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: Security
Date: 17 May 1999 03:11:22 GMT
[EMAIL PROTECTED] wrote:
>> Of course, that would need to be proved. Such proofs are quite rare.
>>
> I conjecture that such proofs are impossible.
>Since there can exist a
^^^^^^^^^^^^^^^^^^^^^^^^
> algorithm to solve a AES (which would be??) message faster then brute
^^^^^^^^^^^
> force.
Is this part of your conjecture, or do you have some reasoning that
leads you to this conclusion? I'm not sure I follow.
Are you arguing that there are an unlimited number of algorithms to
break ciphers, therefore there must be at least one algorithm for
each AES candidate or other block cipher?
> However the actual effort/plaintext/memory required will most
> likely make such attacks infeasible or useless (you could just steal
> the information physically).
I'm hesitant to label an attack useless without making specific
reference to an implementation and the design of an entire system.
It is true that some attacks are a lot harder to apply than others,
though. My concern stems from the fact that a lot of attacks I thought
were sort of useless at first glance, like chosen-ciphertext attacks,
end up being useful after all.
-David
------------------------------
From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: On Contextual Strength
Date: Sun, 16 May 1999 23:40:36 -0600
In article <[EMAIL PROTECTED]>, "Douglas A. Gwyn"
<[EMAIL PROTECTED]> wrote:
> ... failure to
> find an effective way to crack a system really provides very little
> evidence of the system's theoretical security. My concern is that
> several participants in these discussions sound like they believe
> the exact opposite.
Better than being optimistic or pessimistic is to be realistic: you know
what you know. You can check an algorithm against known attacks: it
passes or fails each attack. You can't check an algorithm against
unknown attacks, but you can surely try newly discovered ones against
algorithms that seem to have otherwise passed muster.
You might even be able to quantify the resistance of an algorithm to a
certain known attack. For that attack, one algorithm might be stronger
than another, which is useful information. In a scheme with many
algorithms, it would be useful to know that some are more resistant to
certain forms of attack.
There should be no belief one way or another about that which you cannot
prove; just the facts of what you know need be presented. It seems that
successes in breaking algorithms are more apt to be reported than
failures, but to get the best possible picture of each algorithm, leaving
out that information is less than helpful.
--
Weathermen prophesize and insurance companies predict, while both
pretend to be doing the other to get an audience.
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: On Contextual Strength
Date: Mon, 17 May 1999 06:21:00 GMT
In article <[EMAIL PROTECTED]>,
Bryan Olson <[EMAIL PROTECTED]> wrote:
> ...
> Yes, I'd love to have a proof of security, but I don't. We can
> only act on what we know, and I think that means we'd better put
> effort into knowing as much as we can.
I agree, but it seems that the academic community is putting most of
its effort for gaining knowledge into a very narrow path. For a long
time most modern block ciphers have not really been broken in the
sense that we can recover the secret key after a realizable number
of known or chosen plaintexts and using feasible computer
resources. Cryptography is a very practical matter, but it seems to
me that current research has moved to more and more rarefied
levels. In the last AES conference there was discussion of an
attack against a 256 bit SAFER+ key requiring 2^200 chosen
plaintexts. It is not clear to me whether this kind of result has
anything at all to do with the ability of a cipher to keep
secrets for decades to come. I think there are better avenues for
doing research and gaining knowledge that can actually increase
security:
1. Meta-cryptanalysis. Study how cryptanalytic attacks work and try
to devise methodologies for designing ciphers that are resistant
against a large class of unknown attacks too. Maybe this is very
difficult to achieve, but nonetheless it is what we need. In fact
the current obsession with known attacks can lead to mistaken
statements such as: attack X works against cipher A with 2^N known
plaintexts whereas the same attack against cipher B would require
2^(N+10) known plaintexts, THEREFORE cipher A is weaker than cipher
B. Now if N is large then both attacks are theoretical and the
difference in degree says in principle nothing about the overall
strength of either A or B or how they will fare against other
attacks. So, deducing that A is weaker than B is just plainly
wrong, because it can very well be the case that an unforeseen
attack in the future demolishes B but is useless against A. By
definition then, cipher A MAY indeed be stronger than B. Please
observe: I am not claiming that the limited cryptanalytic result
with X means nothing - on the contrary, everything else being the
same we should choose cipher B over A. What I am saying is that a
narrow and theoretical cryptanalytic result alone does not prove
that a cipher is stronger than another. In some cases it is not
even sufficient reason to trust one cipher over the other.
2. Analytic strengthening. Study how cryptanalysis is done and use
these results to design ciphers that are resistant against analysis
(not just attacks that follow successful analysis). If we assume
that there are ways to break non-trivial ciphers in the real world,
then it is quite possible that the cost of analysis is greater than
the cost of implementing the attack. Amazingly - if you think about
it - most academic research completely ignores this cost for the
attacker and even makes the ease of analysis a positive feature of
a new design. It is not difficult to understand why this happens:
whereas our opponent's goal is to read our secrets, the goal of an
academic researcher is to publish interesting papers - you cannot
cost-effectively publish beautiful analytic results about ciphers
designed to be difficult to analyze, therefore such designs and the
possibilities they offer are ignored.
3. Security amplifiers. While we are not certain about the strength
of our primitives, why not investigate and prove ways to increase
security by combining them? Whitening is an interesting result in
this sense - Maurer's papers about combining ciphers are another.
Ideally, when you combine ciphers each should have an independent
key, but what is the best way to generate keys when in fact the
secret key is smaller than that? I want to combine 3DES with its
64 bit block with AES and its 128 bit block - what is the best way
to do this? It seems that mixing random data into the encryption
process (thereby increasing the ciphertext size) is a valid way to
increase security - but what is the best way to do it when, say, I
can only afford to increase the ciphertext size by 20%?
4. Implementation issues. Everybody says the biggest risk comes out
of implementation issues. This was painfully clear at the Second
AES conference when it was shown that all 128 bits of Twofish could
be recovered from a smart card after only 50 non-invasive uses of
it. So how do you validate the implementation of a security system?
The situation is very complex: for example, in a typical PC you
have the interplay of hardware, OS, your software, other software
running concurrently, end-user, organizational environment and
physical environment. Maybe there is not much interesting math in
this kind of problem, but more people should be doing research and
publishing results here. It is such an important field and we are
left in a situation where our best source of knowledge are
Internet's hackers.
5. Actually breaking something. We can argue about what the
evidence so far means, but there is no reason to believe that
modern ciphers cannot be broken in practice. As a case in point,
nobody has really broken any of the 15 AES candidates: nobody has
demonstrated how to recover any candidate's key using, say, a
million known plaintexts and a million hours of PC time. Instead of
concentrating their efforts on trying traditional methods or
variants of these methods to see how far they can go, I think
researchers should try to devise new kinds of attacks with the goal
not only of theoretical improvement but of actually breaking a
cipher.
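The whitening idea in point 3 is easiest to see in the DESX-style construction, C = k2 XOR E_k(P XOR k1), where k1 and k2 are keys independent of the core key k. The sketch below shows only the shape of that wrapper: the "cipher" inside is a deliberately toy, insecure stand-in (a key-derived XOR pad), and all names are mine.

```python
import hashlib

BLOCK = 8  # bytes, matching a 64-bit block cipher such as DES

def toy_block_encrypt(key, block):
    """Stand-in for a real block cipher E_k (NOT secure - just a
    key-derived XOR pad), used only so the wrapper below runs."""
    pad = hashlib.sha256(key).digest()[:BLOCK]
    return bytes(b ^ p for b, p in zip(block, pad))

toy_block_decrypt = toy_block_encrypt  # the toy cipher is an involution

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def whitened_encrypt(k, k1, k2, block):
    """DESX-style whitening: C = k2 XOR E_k(P XOR k1)."""
    return xor_bytes(toy_block_encrypt(k, xor_bytes(block, k1)), k2)

def whitened_decrypt(k, k1, k2, block):
    """Inverse: P = k1 XOR D_k(C XOR k2)."""
    return xor_bytes(toy_block_decrypt(k, xor_bytes(block, k2)), k1)
```

The attraction of the construction is exactly the point made above: the whitening keys are independent of the core key, so the wrapper can raise the cost of brute force against the whole without requiring any new analysis of E_k itself.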
------------------------------
From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: Security
Date: Sun, 16 May 1999 23:52:17 -0600
In article <7hnq9p$had$[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
> > Of course, that would need to be proved. Such proofs are quite rare.
> >
>
> I conjecture that such proofs are impossible. Since there can exist a
> algorithm to solve a AES (which would be??) message faster then brute
> force. ...
Right now, efforts are spread against several candidates. By narrowing
the field, those few will get more effort spent against them, and the
winner will be the loser in a way since it will receive more attention
ultimately than all the others combined.
Then, the question is what happens if extreme efforts find a flaw in the
single pick. I suppose that one should look a little farther back in the
pack again, if the flaw is not easily fixable. But then, a fix means
basically starting over in evaluation.
Hopefully, the process will do its job well. Then, in a non-governmental
context, all the algorithm needs to be compared with are the algorithms
that were never formally in the race to begin with, or that sprang out
of improvements on the other entrants, or something entirely new and
unforeseen. Sci.crypt should be around for a long time yet to come.
--
Weathermen prophesize and insurance companies predict, while both
pretend to be doing the other to get an audience.
------------------------------
From: [EMAIL PROTECTED] (Frank Palmer)
Crossposted-To: rec.arts.sf.science
Subject: Computer-generated random numbers (was Re: Magic)
Date: 17 May 1999 07:42:10 GMT
Reply-To: [EMAIL PROTECTED]
Copyright, 1999, Frank Palmer. Right to quote in a reply during
current debate expressly granted. Only necessary parts of previous
posts have been quoted.
In article: <7hnr9h$4sh$[EMAIL PROTECTED]>
"Riboflavin" <[EMAIL PROTECTED]> writes:
> Frank Palmer wrote in message <7hmo5v$qeq$[EMAIL PROTECTED]>...
> >: [EMAIL PROTECTED] (Frank Palmer) writes:
> >: > Start with text, rotate each byte, XOR with the next byte;
> >: > repeat nine times. Result, random bytes.
> >: >
> >"Random byte" simply means that the preceding bytes provide no way
> >of guessing which byte will come next. Each byte provides 8 bits
> >of information.
>
> >All the text need do is contain information for this system to work
> >after some number of repetitions. English text generally contains
> >1 bit of information per letter. I used 10 letters to provide some
> >margin.
>
> >It wouldn't work for pages of the letter, A; it would work for
> >normal English text.
>
> No, this would not produce random numbers (unless you perform your
> XORs an infinite number of times). The numbers might look random to
> a casual glance,
Erm,
You don't think I'm basing this on printing out the results of this
process do you?
I'm a mathematician, not a physicist. I do calculations, not
experiments.
> but would have certain patterns that would make any
> crypto based off of them much easier to crack.
Not so. A weakness of this system for certain uses is that there is
no reason to believe that any single bit (of 0 - 7) is split
absolutely evenly between 0s and 1s in English text*. While all the
bits in bytes produced as I have described would be much closer to
50-50 than any single bit in English text**, they wouldn't be
perfectly so.
This would create problems in certain uses, but not practical ones
in cryptography.
As for long-range patterns (which tend to turn up in pseudo-random
numbers), they aren't a problem in English text, let alone scrambled
English text. When you read "There but f", the likelihood is better
than 90% that the rest of the sentence is "or the grace of God go
I." On the other hand, the first ten thousand characters in a story
tell you almost nothing about the probabilities of the twenty
thousandth.
> I crossposted this to
> sci.crypt, I think one of the experts there can do a better job of
> explaining why this is so than I can.
I'll read their responses with interest.
* For our purposes, there are two kinds of non-randomness: The
first kind involves one byte coming up more often than another; the
second kind involves discerning some sort of pattern as to _when_ it
will turn up.
A good pseudo-random number generator evades the first problem,
but never (for long enough runs) the second. Natural events have a
real problem avoiding the first, but find the second easier to
avoid. The 2-Geiger-counter example, for instance, requires that both
Geiger counters be equally sensitive, equally far from the radioactive
sample, etc. In practice, one counter will produce more clicks than
the other.
** For example:
If you XOR two bits which are each 60% likely to be a 1 and are
independent, the result is 52% likely to be a 0 and 48% likely to
be a 1. (Both 1s, yielding 0: 36%; both 0s, yielding 0: 16%)
If two independent bits are each 52% likely to be 1, the result
of XORing them is 50.08% likely to be 0. (And so four independent
bits, each 60% likely to be 1, would yield a 50.08% likelihood of
0.)
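Footnote ** is an instance of what cryptanalysts call the piling-up lemma: XORing n independent bits, each with bias eps = P(0) - 1/2, gives P(0) = 1/2 + 2^(n-1) * eps^n. A small sketch (function name mine) that reproduces the footnote's figures:

```python
def xor_bias(p_one, n):
    """Probability that the XOR of n independent bits is 0 when each
    bit has P(1) = p_one, by the piling-up lemma:
    P(0) = 1/2 + 2^(n-1) * eps^n, with eps = P(0) - 1/2 per bit."""
    eps = (1 - p_one) - 0.5
    return 0.5 + (2 ** (n - 1)) * eps ** n

# Matches the footnote:
#   xor_bias(0.60, 2)  -> about 0.52    (52% likely to be 0)
#   xor_bias(0.52, 2)  -> about 0.5008  (50.08% likely to be 0)
#   xor_bias(0.60, 4)  -> about 0.5008  (four 60%-biased bits at once)
```

Note how the bias shrinks geometrically but never reaches exactly 1/2 for any finite number of XORs, which is the precise form of the point made above.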
--
Frank Palmer
[EMAIL PROTECTED]
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************