Cryptography-Digest Digest #138, Volume #9 Thu, 25 Feb 99 12:13:04 EST
Contents:
Re: Block cipher in the smallest PIC (Andrew Haley)
Re: RSA test vectors ([EMAIL PROTECTED])
Re: DSS Keys (DJohn37050)
Snake Oil - "The flip side" ("jmp")
Re: Define Randomness (Patrick Juola)
Re: Testing Algorithms ("Trevor Jackson, III")
Re: Quantum Computation and Cryptography (R. Knauer)
Re: DSS Keys (Doug Stell)
Re: Define Randomness ("Trevor Jackson, III")
Re: Testing Algorithms (R. Knauer)
Re: A New Public-Key Cryptosystem (John Savard)
Any idea what this might be??? (Eike Kendelbacher)
Re: Testing Algorithms ("Trevor Jackson, III")
Re: Testing Algorithms (Vernon Schryver)
Re: Define Randomness (R. Knauer)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (Andrew Haley)
Subject: Re: Block cipher in the smallest PIC
Date: 25 Feb 1999 15:57:02 GMT
Robert Scott ([EMAIL PROTECTED]) wrote:
: I have an application in remote keyless entry that,
: for economic reasons, favors the use of the smallest
: PIC processor: the 12C581. It has only 32 bytes of
: RAM and 512 words of program. Although the stakes are
: not as high as in electronic funds transfer, still I
: wanted to see if I could implement an 8-byte block
: cipher as secure as DES.
You could use Skipjack: a 256-byte table, a 10-byte key, and a very
simple algorithm. That would leave you most of your RAM and a bit of
room for the rest of the application. Should be doable...
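To give a feel for how little working state it needs, here is a rough
Python sketch of Skipjack's G permutation (the F table below is only a
placeholder; the real one is a specific fixed 256-byte S-box from the
published spec):

    # Placeholder S-box -- substitute the real Skipjack F-table here.
    F = bytes(range(256))

    def g(key, k, w):
        # Four-round Feistel on a 16-bit word w; key is 10 bytes,
        # k is the round counter.
        g1, g2 = (w >> 8) & 0xFF, w & 0xFF
        for i in range(4):
            g1, g2 = g2, F[g2 ^ key[(4 * k + i) % 10]] ^ g1
        return (g1 << 8) | g2

Two bytes of working state per G call, so the 32 bytes of RAM are not
the bottleneck; fitting 32 rounds' worth of control flow into 512 words
of program store is the harder part.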
Andrew.
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: RSA test vectors
Date: Thu, 25 Feb 1999 15:24:20 GMT
Hello Thierry,
Try my RSA examples, implemented
as 13 tables in MS Access.
After you have checked your algorithm you can
scale it up to larger or arbitrary moduli and keys.
You can download it from the top of
www.online.de/home/aernst/RSA.html
I think that my RSA study can help you.
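If you just want a quick sanity check before hunting down official
vectors, a textbook RSA round trip on toy numbers (no padding, far too
small to be secure, purely to exercise the modular arithmetic) is easy
to generate yourself; this sketch uses the classic p=61, q=53 example:

    p, q = 61, 53                 # toy primes -- never use sizes like this
    n = p * q                     # 3233
    phi = (p - 1) * (q - 1)       # 3120
    e = 17                        # public exponent, gcd(e, phi) = 1
    d = pow(e, -1, phi)           # 2753, modular inverse (Python 3.8+)

    m = 65                        # "plaintext" as an integer < n
    c = pow(m, e, n)              # encrypt: 2790
    assert pow(c, d, n) == m      # decrypt recovers 65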
Best regards
Alex
In article <[EMAIL PROTECTED]>,
Thierry Schneider <[EMAIL PROTECTED]> wrote:
>
> Does anybody know where I could find some test vectors (inputs and
> outputs) for RSA encryption?
> I will try to write RSA code and I need to verify the result, but I
> have nothing to compare to!
>
> Thanks a lot
>
> Bye
> Thierry
>
>
============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
------------------------------
From: [EMAIL PROTECTED] (DJohn37050)
Subject: Re: DSS Keys
Date: 25 Feb 1999 16:25:52 GMT
The X9.30 DSA revision being considered by ANSI X9.F.1 mandates a minimum size
for p (prime order field size) of 1024 bits; this was the maximum size
previously. The minimum size for q (prime order subgroup size) is 160 bits.
SHA-2 will have an output larger than 160 bits; exact details are TBA.
Don Johnson
------------------------------
From: "jmp" <[EMAIL PROTECTED]>
Subject: Snake Oil - "The flip side"
Date: Thu, 25 Feb 1999 09:11:56 -0500
To "snake oil" observers:
Snake oil IS out there as you describe (from nuts that don't know any
better), but a more sinister form exists too. Many free download sites
carry products that proclaim "secure" algorithms like Blowfish, Triple DES,
etc. The problem is that only a few tell you that the maximum key is 40, 56,
or 64 bits - and they tell you in the small print at that. You think this is
not Snake Oil??
A short word about cryptanalysis. Your liberal use of the word "break",
echoed by the media whenever one occurs, does harm to some cipher
designers' reputations. If you call an attack that requires 10^110 plaintext
blocks (from a common PC hard drive) worth anything, you're being academic or
SILLY; or one that has a complexity of 10^90, for that matter. Why not stop
using the word "break"; use "dented", "crippled", "cracked", etc. instead. More
concerning is that I have seen SNAKE OIL cryptanalysis out there too.
Bullshit analysis that was published and not even disputed by the author!
Attacks must be peer reviewed, just as cipher designs are, to be worth
anything.
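For a sense of scale (my own back-of-the-envelope figures, not from any
attack paper): 10^110 plaintext blocks is not just impractical, it is
physically absurd:

    blocks = 10**110                 # known plaintexts the "attack" demands
    storage = blocks * 8             # bytes, at 8 bytes per block
    atoms_in_earth = 10**50          # rough order of magnitude
    print(storage // atoms_in_earth) # ~10^61 bytes per atom of the planet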
Just some thoughts,
jmp
------------------------------
From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: Define Randomness
Date: 25 Feb 1999 10:28:38 -0500
In article <[EMAIL PROTECTED]>,
R. Knauer <[EMAIL PROTECTED]> wrote:
>On 24 Feb 1999 17:33:41 -0500, [EMAIL PROTECTED] (Patrick Juola)
>wrote:
>
>>i) Algorithmic doesn't mean that they produce a pattern.
>
>Sure it does. The very algorithm itself is the source of the pattern.
>That's why algorithms cannot produce truly random numbers.
Does *NOT*!
Consider the following algorithm:
I flip the first bit of a (random) sequence and leave the rest unchanged.
What sort of pattern does this introduce?
>>It just
>>means that they operate in a predictable fashion on the patterns
>>randomly (and spuriously) found in data.
>
>There is no such thing as a "random pattern", if by random you mean
>Kolmogorov randomness. By definition randomness is patternless.
>Otherwise the pattern could be exploited to make a program shorter
>than the number itself, in which case it is not Kolmogorov random.
That's why I included "spurious." A sequence that is "random" in the
sense of Kolmogorov complexity *will* include spurious patterns
at odd and unpredictable intervals -- for example, there will (with
probability one) be sub-sequences of any finite length that are
all zeros, or alternating zeros and ones, or all ones, or 100100100
in waltz time, or whatever.
Something is "algorithmic" if the way that it operates on these
particular patterns (as it finds them) is deterministic and predictable;
it doesn't mean that the particular patterns are found in any predictable
way or that the sequence *after* this algorithmic processing is any
more patterned than it was previously.
>>ii) The von Neuman pairwise transformation provably does not introduce
>>a bias into a stream *if* the stream is composed of independent events.
>
>I agreed with you about that earlier. Now I am asking if it (or any
>other anti-skewing procedure) introduces correlations.
No. Each pair is operated upon independently, so the actions of one
pair cannot affect the actions of any other pair. The input sequence
to the von Neumann transformation is assumed correlation-free, and
the transformation is stateless so it cannot introduce any correlations.
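A sketch of the transformation (mine, in Python) makes the statelessness
concrete: each output bit depends on exactly one input pair and nothing
else.

    import random

    def von_neumann(bits):
        # Non-overlapping pairs: 01 -> 0, 10 -> 1, 00 and 11 discarded.
        out = []
        for i in range(0, len(bits) - 1, 2):
            a, b = bits[i], bits[i + 1]
            if a != b:
                out.append(a)
        return out

    # A biased but independent source: P(1) = 0.7.
    raw = [1 if random.random() < 0.7 else 0 for _ in range(100000)]
    out = von_neumann(raw)
    print(sum(out) / len(out))   # ~0.5; about 42% of pairs survive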
-kiten
------------------------------
Date: Thu, 25 Feb 1999 11:46:13 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Testing Algorithms
R. Knauer wrote:
> On Thu, 25 Feb 1999 00:20:13 -0500, "Trevor Jackson, III"
> <[EMAIL PROTECTED]> wrote:
>
> >> Is it? I don't recall a single scientific experiment disproving the
> >> possibility of FTL communication -- and a lot of Bell-type inequalities
> >> that suggest it.
>
> >OK, technically you are correct. Neither general relativity nor quantum
> >mechanics forbid FTL communication. However, researchers have been looking
> >for exactly such a mechanism for over 60 years and failed to find it.
>
> Alain Aspect found FTL communication in his experiments. I believe it
> was in 1982.
Well, he found non-local phenomena, but he didn't find a way to pass a signal
AFAIK. Waveguides are a simple case of apparent FTL events in that the phase
velocity inside such a guide may exceed c. But the signal velocity is still
constrained to <= (usually =) c.
There were also a number of tachyon searches conducted in the early eighties that
produced a number of interesting particle events. But subsequent analysis
indicated that the results were too explicable to be statistically significant.
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Quantum Computation and Cryptography
Date: Thu, 25 Feb 1999 16:41:50 GMT
Reply-To: [EMAIL PROTECTED]
On Thu, 25 Feb 1999 21:27:08 +1100, "Benjamin Johnston"
<[EMAIL PROTECTED]> wrote:
>I read somewhere, a while ago, that Quantum computers, if or when they are
>created, will turn an otherwise difficult factorization into a trivial task.
See "Explorations In Quantum Computing" by Colin Williams.
>Will all the current cryptosystems be outdated, the instant a stable and
>practical quantum computer is created?
The only provably secure cryptosystem is the OTP. The messages hidden
in OTP ciphers are completely unknowable - unless you have the key.
>Are there any cryptographic algorithms designed to be secure against quantum
>computers?
Sure, but they are quantum algorithms. An additional advantage is that
if a quantum cipher is intercepted, the correspondents can know it has
been intercepted.
>What about algorithms secure against quantum computer cryptanaysis, but
>require a quantum computer for encryption?
See above.
>Does anybody have any personal opinions/predictions about the ramifications
>of such new technology/s.
See the book above, which purports to be the only book of its kind on
quantum computing.
It includes an extensive writeup on quantum encryption, which is seen as
one of the most important applications of quantum computing.
BTW, the govt is spending significant resources to build these
contraptions, so if you have any important messages to send you might
want to use the OTP method. Soon, hopefully, after the major nations
of the world have played enough with their destructive weapons
systems, they will turn to industrial espionage in a major way. That
means that decryption capability must be strong.
The wars of the future will be fought over information, not territory.
Bob Knauer
"Democracy is the theory that the common people know what they
want, and deserve to get it good and hard."
--H.L. Mencken
------------------------------
From: [EMAIL PROTECTED] (Doug Stell)
Subject: Re: DSS Keys
Date: Thu, 25 Feb 1999 16:35:14 GMT
On Thu, 25 Feb 1999 14:53:16 -0000, "Nicholas Cole"
<[EMAIL PROTECTED]> wrote:
>PGPi uses DH/DSS keys. For encryption purposes, anything less than a 2048-bit
>key is no longer considered secure.
This is nonsense, especially with respect to these algorithms.
>Does this mean that DSS signing keys should also be lengthened (currently limited
>to only 1024 bits) to avoid the forging of signatures?
Currently, the Digital Signature Standard (DSS) contains only one
algorithm, the Digital Signature Algorithm (DSA). The standard limits
the DSA p to 512 to 1024 bits, in steps of 64 bits, and the DSA q to
exactly 160 bits.
The security of DSS is largely determined by the shorter q. This is
still good security, because the algorithm is designed to be immune to
certain attacks.
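A sketch of those size constraints as code (my encoding of the rules
above, plus the standard requirement that q divide p - 1):

    def dsa_sizes_ok(p, q):
        # p: 512..1024 bits in 64-bit steps; q: exactly 160 bits;
        # q must divide p - 1 so that the order-q subgroup exists.
        L = p.bit_length()
        return (q.bit_length() == 160
                and 512 <= L <= 1024 and (L - 512) % 64 == 0
                and (p - 1) % q == 0)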
However, we can expect NIST to publish a new standard (DSS) in the not
too distant future. They plan to include one or more additional
algorithms in the standard, e.g., RSA. They may also increase the
allowable key sizes for DSA, although I have not officially heard
this. BTW, the process is slow, with a new algorithm taking about two
years to have its test suite in place.
------------------------------
Date: Thu, 25 Feb 1999 11:34:41 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Define Randomness
Patrick Juola wrote:
> In article <[EMAIL PROTECTED]>,
> Trevor Jackson, III <[EMAIL PROTECTED]> wrote:
> >>
> >> 2) I have not been convinced that anti-skewing does not introduce
> >> correlations, which would make the output unsuitable for crypto-grade
> >> ciphers. After all, the anti-skewing procedure is algorithmic, so
> >> there is always the opportunity to introduce correlation(s).
> >>
> >> The von Neumann method looks innocent on the surface since the
> >> probability of dibits 01 and 10 are the same, namely p*(1-p). But that
> >> says nothing about the correlations inherent in their appearance. If
> >> you start throwing out bits like in the von Neumann method, who knows
> >> what patterns are left behind.
> >
> >That is exactly the point. No one knows. I.e., it is unpredictable.
>
> No. The point is that the vN method does not *introduce* patterns.
>
> If there are longer-range correlations in the input data, the vN method
> will not remove long-range correlations. What it *will* do is remove
> bias from independent bits.
>
> It's an easy proof; I've sent it to you.
Wrong. The initial question concerned what patterns are left behind, i.e., not
eliminated by the vN filtration. It did not imply the creation of patterns.
My reply indicated that longer-range patterns may be present but that they will
not matter if single-bit bias is the only concern. If longer-range
correlation is a concern, there are extended versions of the vN two-bit filter
that can eliminate the correlations by N-bit translation. They even exist
for non-linear bit dependence.
For example, a source may generate bits of equal probability except in the
case of two successive ones, which will be followed by a zero with
probability 2/3. So the patterns 000, 001, 010, 011, 100, 101 all appear
with equal probability (1/8), while 110 appears with probability 1/6 and 111
appears with probability 1/12.
A trivial correction is to simply drop the bit following two successive
ones. This produces an unbiased sequence, but it loses the information
contained in the dropped bit. Since simplicity is strongly preferred, this
may be the appropriate mechanism. If entropy resources are scarce with
respect to computational resources, one can skip the first distorted bit,
saving its value, and when the second distorted bit is available, treat the
pair as described in the vN procedure.
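A quick simulation of that source and the drop-the-bit correction (my
sketch; the probabilities are exactly the ones given above) bears the
numbers out:

    import random

    def biased_source(n):
        # Fair bits, except that two successive ones are followed by a
        # zero with probability 2/3.
        bits = []
        while len(bits) < n:
            if len(bits) >= 2 and bits[-2] == bits[-1] == 1:
                bits.append(0 if random.random() < 2/3 else 1)
            else:
                bits.append(random.randint(0, 1))
        return bits

    def drop_after_double_one(bits):
        # Discard the (distorted) bit that follows two successive ones.
        return [b for i, b in enumerate(bits)
                if not (i >= 2 and bits[i-2] == bits[i-1] == 1)]

    raw = biased_source(200000)
    fixed = drop_after_double_one(raw)
    print(sum(raw) / len(raw))       # ~0.467 (7/15), biased toward zero
    print(sum(fixed) / len(fixed))   # ~0.500, unbiased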
Given a bias defined over a specified interval, it is possible to remove that
bias, leaving no residual pattern, at some cost in data rate and possibly
some cost in information rate.
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Testing Algorithms
Date: Thu, 25 Feb 1999 14:27:12 GMT
Reply-To: [EMAIL PROTECTED]
On 25 Feb 1999 00:39:57 GMT, [EMAIL PROTECTED] (Coen Visser)
wrote:
>My guess is, as long as there is enough need for faster/bigger computers,
>faster/bigger computers will be build. Not just by extrapolating current
>technology but by scientific innovation. The question is if there is a need.
If it can be used in warfare, there will be a need.
If man were not a war-like animal, we would not have advanced
technically as far as we have. Peace brings with it technical
stagnation.
The computer itself arose from a wartime requirement.
Bob Knauer
"Democracy is the theory that the common people know what they
want, and deserve to get it good and hard."
--H.L. Mencken
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: A New Public-Key Cryptosystem
Date: Thu, 25 Feb 1999 16:44:40 GMT
Somniac <[EMAIL PROTECTED]> wrote, in part:
>Dear Mr. Craig, what is Mrs. Hill's first name?
How should we know what the first name of Lester S. Hill's wife was?
>Where was the Hill algorithm published?
In a mathematical journal: the reference is in David Kahn's The
Codebreakers.
>Do you have a website where more details are given?
>Is there an on-line copy of the Hill paper?
I think there is.
But the Hill cipher is just matrix multiplication mod 26, so *it* is
no big deal.
I remember trying to make a public-key cipher out of the Hill cipher,
but my attempt didn't work, because matrices can be diagonalized.
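For the curious, the whole cipher really is a few lines; a minimal
sketch (using the 2x2 key that shows up in most textbook presentations)
looks like this:

    def hill_encrypt(text, key):
        # key is a 2x2 matrix [[a, b], [c, d]], invertible mod 26;
        # text length is assumed even.  Decryption is the same
        # operation with the matrix inverse mod 26.
        nums = [ord(c) - ord('A') for c in text.upper()]
        out = []
        for i in range(0, len(nums), 2):
            x, y = nums[i], nums[i + 1]
            out.append((key[0][0] * x + key[0][1] * y) % 26)
            out.append((key[1][0] * x + key[1][1] * y) % 26)
        return ''.join(chr(n + ord('A')) for n in out)

    print(hill_encrypt("HELP", [[3, 3], [2, 5]]))   # -> HIAT

With a handful of known plaintext blocks the key falls out of a single
matrix equation, which is part of why it is no big deal today.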
John Savard (teenerf is spelled backwards)
http://members.xoom.com/quadibloc/index.html
------------------------------
From: [EMAIL PROTECTED] (Eike Kendelbacher)
Subject: Any idea what this might be???
Date: Thu, 25 Feb 1999 16:53:19 GMT
Reply-To: [EMAIL PROTECTED]
Hi,
I tried a few things on this simple cipher, but didn't get anywhere. Any
ideas what I should try?
It seems to encrypt 8 chars at a time.
Input 1:
74 65 73 74 00 00 00 00
00 74 65 73 74 00 00 00
00 00 74 65 73 74 00 00
00 00 00 74 65 73 74 00
00 00 00 00 74 65 73 74
Output 1:
73 74 B8 F0 01 1C D6 E7
D1 C8 9E DF DF 1F 6F 69
69 78 81 B0 94 54 BB A0
64 F1 63 C1 F8 A7 54 C8
33 D7 46 B8 BA 30 D7 F8
Input 2:
00 01 00 00 00 00 00 00
01 01 00 00 00 00 00 00
02 01 00 00 00 00 00 00
03 01 00 00 00 00 00 00
04 01 00 00 00 00 00 00
05 01 00 00 00 00 00 00
06 01 00 00 00 00 00 00
07 01 00 00 00 00 00 00
08 01 00 00 00 00 00 00
09 01 00 00 00 00 00 00
0A 01 00 00 00 00 00 00
0B 01 00 00 00 00 00 00
0C 01 00 00 00 00 00 00
0E 01 00 00 00 00 00 00
0F 01 00 00 00 00 00 00
10 01 00 00 00 00 00 00
Output 2:
73 e8 f0 64 19 ed af d2
4d 97 d5 1b b9 1b 72 ae
11 fc 43 29 4b 49 31 9e
21 92 51 77 1e f8 70 62
73 f7 45 ad 8a 59 5e 78
57 78 6e c4 c4 ee 53 31
70 49 d8 40 2e 5d b0 71
8d 46 28 5f 3d 17 e9 99
a2 2a 6a bc 3a 4e 8e 0c
3a 54 3b 14 5c 76 28 47
45 f3 3b 00 89 2e 50 da
13 09 0c 66 a5 89 b3 a2
72 b3 25 b9 16 24 2c 20
26 d3 56 bb 29 38 e0 f9
69 92 db a8 f1 54 ea ba
8f c8 3b 73 e8 7b 69 f7
a0 1c 6b e4 fa c5 d4 a4
Any ideas, hints, or help would be greatly appreciated!!!
Best regards,
Marc
[EMAIL PROTECTED]
------------------------------
Date: Thu, 25 Feb 1999 11:56:28 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Testing Algorithms
Alan Braggins wrote:
> "Trevor Jackson, III" <[EMAIL PROTECTED]> writes:
> > A superstring computer is certainly conceivable with modern theory, given
> > some room for TBDs in the specs. But a computer that violates the speed
> > of light is in the same class as divine inspiration. If you assume any
> > rules you want then you can get any output you want. By tomorrow.
>
> Once you assume you can violate light speed, you can get the answer
> not merely tomorrow, but yesterday[1]. Then you needn't bother
> calculating it before sending it back in time to yourself. On the
> other hand your opponent can then go back in time and decide to send a
> different message in the first place, so you still haven't cracked it.
>
> [1] unless FTL is possible in some reference frames but not others
Well, the inference that FTL implies time travel is pretty strong. But Niven's
rule suggests a kind of anthropic proof that it is impossible to invent time
travel. Not that time travel is impossible, but that no one will ever invent
it.
Consider a universe in which time travel is possible and used. Assume that
changes in early history ripple forward, inducing changes in all related
(entangled?) events. It is probably easiest to consider the light cone of the
initial change as contaminated.
Now, what is the least change to history that terminates the change process?
Niven claims that the simplest change is that the inventor of time travel does
not make the invention. If someone else does it instead, the changes continue
until he does not. Thus no one will invent time travel.
Metaphorically, the history of the universe bounces around until time travel is
not used, after which the history stops bouncing. A pebble in the road keeps
moving as disturbed by passing cars and pedestrians until it leaves the road,
at which point it reaches a much more peaceful position.
Since this interpretation seems to forbid the practice of time travel rather
than the physics of time travel, and time travel is strongly connected to FTL
travel, I'd guess that FTL cannot be practiced even if the physics of the
universe permits it.
------------------------------
From: [EMAIL PROTECTED] (Vernon Schryver)
Subject: Re: Testing Algorithms
Date: 24 Feb 1999 09:13:53 -0700
>No, there is no guarantee of this. There is also no guarantee that the
>speed of light will be a barrier.
If you're going to "design" by wishing, why not wish for a machine that
reads minds? Such a device is vastly more plausible than a computer that
both breaks the speed of light and profits by that. A mind reading
computer would be far more effective for finding keys.
>You are basing your calculations on the assumption that CPU speeds
>will stop increasing. So far the trend has been a doubling around
>every 1.5 years. I remember back in 1985 being told that my 1 MIP CPU
>is about as fast as we can possibly get because of 'physical
>barriers'. Today we have CPUs that can run 2,000 times faster. Have we
>got to that barrier yet? No, I don't think so.
>
>This whole thing comes down to speculation. ...
The other statements are engineering estimates, but those statements are
not even speculation:
- there is no long-term trend of computer speed doubling every
1.5 years, unless you know nothing about the history of computers or
the speed of current systems. The CDC 6600 and the IBM Stretch are
from 35+ years ago, and they were much faster than that trend predicts;
nothing like 2**(35/1.5), or roughly 8,000,000, times slower than the
fastest current CPU.
- no one with a clue about computers in 1985 would have said that
1 MIPS was close to any kind of physical limit, although at the
time, 1 VAX MIPS was near some economic and manufacturing limits in
some markets. There were plenty of CPUs going significantly
faster than 1 MIPS in 1985, although not many "workstations" or
"personal computers." 1 MIPS was not unheard of even in 1975.
- today we do not have very many uniprocessors running 2,000 times faster
than 1 MIPS. Clocking something at 2 GHz is not amazing, but the
sustained memory bandwidth of at least 8 GByte/sec needed to get a Specmark
equivalent to 2000 MIPS from a single CPU is a little rare. If you
have the money, you can buy a system with a lot more than 2000 MIPS,
but only if you count the aggregate in a multiprocessor. Using
something like 2**128 CPUs, each 2**64 times faster than current CPUs,
to get a 2**192 speedup is a thought, but only if you cannot understand
exponents, and cannot see that 2**128 of anything is worse
than silly.
- 2**(x/1.5) is a poor approximation of a change of 2000X in 15 years;
doubling every 1.5 years for 15 years gives only 2**10 = 1024.
People who think that CPU speeds have been doubling every 1.5 years
probably also feel that by 2015, PCs will have 2**64 bytes of storage,
based on the trend in 8, 16, and 32 bit address spaces. As I've said
before, you can't communicate with the innumerate, who cannot understand
exponents, by talking about 2**256, 2**64, or even 2**1.5.
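The arithmetic behind that last point, for the record:

    from math import log2

    print(15 / log2(2000))  # 2000x in 15 years = doubling every ~1.37 years
    print(2 ** (15 / 1.5))  # doubling every 1.5 years gives 1024x, not 2000x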
--
Vernon Schryver [EMAIL PROTECTED]
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Define Randomness
Date: Thu, 25 Feb 1999 17:06:25 GMT
Reply-To: [EMAIL PROTECTED]
On 25 Feb 1999 10:19:17 -0500, [EMAIL PROTECTED] (Patrick Juola)
wrote:
>>I believe that the notions of "uniform" and "normal" are essentially
>>the same thing. I cannot imagine a number that possesses one of those
>>characteristics not possessing the other. For example, what is a
>>non-uniform normal number, or a uniform non-normal number. I suppose
>>one could fabricate some weasel-worded contortions to get around that,
>>but then they would not be using conventional meanings for those two
>>words.
>Well, a standard example of a non-uniform normal number is the
>Champernowne sequence 1234567891011121314151617...; a formalist might
>point out that that's not a 'number' per se, but the equivalent
>real number 0.1234567891011121314.... certainly is.
I do not believe that I implied that normal numbers are necessarily
random.
>It's provably normal (at least in base 10), but is also not the result
>of a uniform Bernoulli process; in fact, it's not the result of a
>random process at all!
Well, it could be - it is one of the possible sequences from a TRNG,
at least in principle, because Champernowne's number is infinite. But
then we get into that vanishingly small probability thing again.
This randomness stuff sure is pesky business, eh.
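The Champernowne construction is easy to play with; a short sketch shows
the digit frequencies creeping toward 1/10 (the normality claim) even
though the whole sequence comes from a tiny program (the reason it is
not Kolmogorov-random):

    from collections import Counter

    # 0.1234567891011121314... -- concatenate the decimal integers.
    digits = ''.join(str(n) for n in range(1, 100001))
    counts = Counter(digits)
    for d in sorted(counts):
        print(d, counts[d] / len(digits))
    # Nonzero digits land near 0.102; '0' lags near 0.08 because
    # integers have no leading zeros.  Convergence to 1/10 is slow.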
>>Hmm... I did not expect Li & Vitanyi to publish falsehoods. I am truly
>>crushed.
>Deal. 8-) As I said, they oversimplified; they're not interested in
>the study of random sequences per se; they're interested in the study
>of Kolmogorov complexity. Discussing this sentence in full would have
>doubled the length (and cost) of the book -- and their sales were low
>enough anyway.
You say they were not interested in the study of random sequences, yet
half the book is about random sequences. Just look in the index under
"sequences, random".
>>Maybe they have a different notion of randomness than you have. We all
>>know that the term "random" has been heavily abused in both math and
>>crypto.
>Of course. That's the problem. But I'd be very surprised to see someone
>suggest that the result of rolling a K-sided golf ball isn't "random"
>in any sense of the word.
I never meant to imply that it was not random in some sense. I meant
to point out that it is not random in the crypto sense - at least not
until you get rid of the bias.
>Kolmogorov complexity yields one very specific definition of "randomness"
>that matches, with probability one (there's that phrase again), most other
>useful definitions of randomness on infinite sequences.
It does not match the definition of crypto-grade randomness, because
it excludes strings that can be produced by a TRNG.
>It's also a very
>powerful definition and technique, which is why I like looking at it as
>a primary analytic tool. But it's not THE definition....
I believe that the most fundamental form of randomness is that which
is encountered in Quantum Mechanics. The decay of a radioisotope is
fundamentally random. It is independent and equidistributed, and has
an underlying physical theory to demonstrate its randomness both
theoretically and experimentally. You cannot find anything that is
more random than the decay of a radioisotope - although you can find
other processes that are at the same level of randomness as radioactive
decay. But nothing can be more random than radioactive decay.
It is interesting that QM randomness is suitable for crypto. Most
other kinds of randomness, like statistical randomness, are not
suitable for crypto.
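One concrete way to turn decay randomness into crypto-usable bits (a
simulation sketch on my part, not real hardware; generators of the
HotBits sort add safeguards against timer granularity and rate drift):

    import random

    def decay_bits(n, rate=1.0):
        # Inter-decay intervals of a Poisson process are independent
        # and exponentially distributed; comparing successive pairs of
        # intervals yields one unbiased bit per pair.
        bits = []
        while len(bits) < n:
            t1 = random.expovariate(rate)
            t2 = random.expovariate(rate)
            if t1 != t2:              # exact ties: discard (measure zero)
                bits.append(1 if t1 < t2 else 0)
        return bits

    bits = decay_bits(10000)
    print(sum(bits) / len(bits))      # ~0.5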
Bob Knauer
"Democracy is the theory that the common people know what they
want, and deserve to get it good and hard."
--H.L. Mencken
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************