Cryptography-Digest Digest #639, Volume #11 Wed, 26 Apr 00 16:13:01 EDT
Contents:
Re: sci.crypt think will be AES? ([EMAIL PROTECTED])
Re: Can a password be too long? ("Holger Weiß")
Re: Requested: update on aes contest (Jerry Coffin)
Re: Magnetic Remanence on hard drives. (Jonathan Thornburg)
Re: papers on stream ciphers ("almis")
Re: Requested: update on aes contest (Jerry Coffin)
Re: sboxes for the bored... (Terry Ritter)
Re: U-571 movie ("Tony T. Warnock")
Re: new Echelon article ("Tony T. Warnock")
base #- digit # ([EMAIL PROTECTED])
Re: AEES 16 rounds ("Joseph Ashwood")
Re: Secret splitting for kids' treasure hunt (Stefan Schlott)
Re: Magnetic Remanence on hard drives. ([EMAIL PROTECTED])
----------------------------------------------------------------------------
From: [EMAIL PROTECTED]
Subject: Re: sci.crypt think will be AES?
Date: Wed, 26 Apr 2000 17:41:33 GMT
=====BEGIN PGP SIGNED MESSAGE=====
Hash: SHA1
Paul Koning wrote:
> I have a silly reason for liking Rijndael -- the name... :-)
> paul
heh, really silly.. if Rijndael wins it will not be
called "Rijndael" anymore but "AES" (the same goes for the other ciphers)
== <EOF> ==
Disastry http://i.am/disastry/
http://disastry.dhs.org/pgp.htm <-- PGP half-Plugin for Netscape
http://disastry.dhs.org/pegwit <-- Pegwit - simple alternative for PGP
remove .NOSPAM.NET for email reply
=====BEGIN PGP SIGNATURE=====
Version: Netscape PGP half-Plugin 0.14 by Disastry / PGPsdk v1.7.1
iQA/AwUBOQcOBDBaTVEuJQxkEQIA6gCgku8H9U/WPn2mr0+LGPJGmfR2tckAnjzj
bavTASjbzHn/N4NXv01JT+T/
=UtVK
=====END PGP SIGNATURE=====
------------------------------
From: "Holger Wei�" <[EMAIL PROTECTED]>
Subject: Re: Can a password be too long?
Date: Wed, 26 Apr 2000 19:53:11 +0200
Anton Stiglic wrote:
| From a client usability perspective, that is certainly true. You
| can't really rely on a user remembering more than a 10 character
| password (especially if the characters are chosen from a large set,
| and the password is secure, not just a word in the dictionary for
| example).
This is advice on how to remember long passwords, given by many of
the most popular European computer magazines (so it is no longer
a secret):
Choose some kind of text you can remember easily, such as part of
a song's lyrics or a quotation. Then take a selection of the
letters in this text, such as every third letter or only the
consonants, as your password.
EXAMPLE:
"The Lord of the Rings", page 456, first sentence:
"And if we decide to part company, I can set you down outside my
country at any point you choose."
Every 5th char:
fcocnnonicrnnce (or something like that; I seem to have forgotten
how to count)
This provides a 15-character password.
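A few lines of Python make the scheme concrete (a sketch only; the
helper name is mine, and I assume spaces and punctuation are skipped,
which matches the example above):

    def select_letters(text, n=5):
        # keep letters only, then take every nth one (counting from 1)
        letters = [c for c in text if c.isalpha()]
        return "".join(letters[n - 1::n])

    sentence = ("And if we decide to part company, I can set you down "
                "outside my country at any point you choose.")
    print(select_letters(sentence))  # -> fcocnnonicrnnce (15 characters)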
Advantage: your password, no matter how long it is, withstands any
dictionary attack
Disadvantage: a) in this example you have relatively few
different characters
b) this method does not provide any numbers or
special characters, and only a few capital letters
But most of the time there is a greater problem than thinking up a
long password: most programs, providers, mail servers... have a
maximum password length that is usually too short for my passwords!
Holger
------------------------------
From: Jerry Coffin <[EMAIL PROTECTED]>
Subject: Re: Requested: update on aes contest
Date: Wed, 26 Apr 2000 11:58:43 -0600
In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
[ ... ]
> This raises an interesting question: what decision could NIST make regarding
> the five finalists that could be characterized as "bone-headed"? Ignoring
> priority/preference ordering, there appear to be 31 possible decisions and one
> possible nondecision. It is not clear to me that any of these 32 outcomes is
> obviously bad, including the nondecision.
As long as they stick to simply picking some number of ciphers from
the existing pool, there probably IS no truly bone-headed decision to
make. A completely bone-headed decision at this point would almost
certainly involve taking some action that weakened things -- e.g.
deciding to improve performance by reducing the number of rounds to
JUST greater than has already been attacked.
> N.B. the figure of 50 years might cover the period in which AES ciphers are
> used to encrypt information, but the lifetime of such encrypted information is
> probably much longer. I'd guess at least 100 years (50 year lifetime in
> 2050), and perhaps as much as 150 (100 year lifetime in 2050).
I was already taking that into account -- even if we assume Moore's
law holds out that long, a 256-bit key should still be immune to
exhausting the key space for more than 150 years.
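The back-of-the-envelope version, in Python, with my assumed starting
figures (2^60 keys/second today, a doubling every 18 months):

    from math import log2

    doublings = 150 * 12 / 18              # Moore's law: 100 doublings in 150 years
    rate_now = 2.0 ** 60                   # assumed keys searched per second today
    rate_then = rate_now * 2 ** doublings  # speed after 150 years of doubling
    searched = rate_then * 2 ** 25         # one year is roughly 2^25 seconds
    print(log2(searched) - 256)            # -71.0: a year of searching then still
                                           # covers only ~2^-71 of a 256-bit keyspace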
Of course, it's much safer to make a prediction more than a century
into the future than one that might be disproven while people still
remember it. <G> OTOH, John Dvorak's spent years making predictions
that were proven wrong almost before they were printed, and the last
time I noticed, he was still getting paid to do it!
--
Later,
Jerry.
The universe is a figment of its own imagination.
------------------------------
From: [EMAIL PROTECTED] (Jonathan Thornburg)
Subject: Re: Magnetic Remanence on hard drives.
Date: 26 Apr 2000 20:00:03 +0200
In article <[EMAIL PROTECTED]>,
Trevor L. Jackson, III <[EMAIL PROTECTED]> wrote:
>Do you have a reference handy?
It's not a commercial service, but
> ## http://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html
>
> Secure Deletion of Data from Magnetic and Solid-State Memory
>
> Peter Gutmann
> Department of Computer Science
> University of Auckland
> [EMAIL PROTECTED]
>
> This paper was first published in the Sixth USENIX Security Symposium
> Proceedings, San Jose, California, July 22-25, 1996
explains how/why data recovery *is* both possible and practical,
in some detail.
--
-- Jonathan Thornburg <[EMAIL PROTECTED]>
http://www.thp.univie.ac.at/~jthorn/home.html
Universitaet Wien (Vienna, Austria) / Institut fuer Theoretische Physik
"If security is set to none, everything just works."
quoted from http://msdn.microsoft.com/library/techart/msdn_signmark.htm
------------------------------
From: "almis" <[EMAIL PROTECTED]>
Subject: Re: papers on stream ciphers
Date: Wed, 26 Apr 2000 13:05:12 -0500
Douglas A. Gwyn <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Tom St Denis wrote:
> > Because if the stuff you xor against the plaintext is cryptographically
> > secure, then the ciphertext is essentially random. So why use anything
> > else?
>
> Because if you happen to have a lengthy known plaintext, you can
> recover such a long stretch of "clean" intermediate key that it
> would take an incredibly resistant generation algorithm to
> withstand determined cryptanalytic attack. With a considerably
> less complex algorithm that incorporates the plaintext into its
> operation, a generalization of autokey (q.v.), such an attack
> is not possible.
I am a little late for this stream but I was busy trying to get Linux
working.
First I must agree with Tom in that there is no inherent weakness to
XOR.
I believe it was you (forgive me if I'm wrong) who stated that
the requirements for a cryptographically strong bit stream and randomness
are different issues.
The requirement for a cryptographically strong key is only that it be
unpredictable.
Given this belief, knowing a "clean" stretch of
key gives no information about the rest of the key.
(Those stream ciphers that stretch keys, such as
Lorenz machines, do reveal information as stretches of
composite key are intercepted).
Of course randomness has the quality of unpredictability.
So do other bitstreams, such as those generated by
transcendental numbers of the form suggested by Pierzyk et al.
One doesn't care that, perhaps, a truly random bitstream cannot be
created. Merely that it pass something that Maurer likes to call the
'next bit test'.
Ultimately the man-in-the-middle problem can be resolved only when a
system can determine who the sender is, not merely what they know.
It is left for those in Authentication and Protocols to find solutions to
this problem.
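For concreteness, here is a toy Python sketch of the autokey idea
quoted above -- the generator absorbs earlier plaintext, so a
known-plaintext stretch never exposes a "clean" keystream. This is an
illustration of the principle only, not a vetted design:

    import hashlib

    def autokey_encrypt(key, plaintext):
        # the keystream state is refreshed with previous plaintext
        # blocks, so recovered keystream tells you nothing "clean"
        state = hashlib.sha256(key).digest()
        out = bytearray()
        for i, p in enumerate(plaintext):
            if i and i % 32 == 0:
                state = hashlib.sha256(state + plaintext[i - 32:i]).digest()
            out.append(p ^ state[i % 32])
        return bytes(out)

Decryption mirrors this exactly, since each block's keystream depends
only on plaintext already recovered by that point.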
...al
------------------------------
From: Jerry Coffin <[EMAIL PROTECTED]>
Subject: Re: Requested: update on aes contest
Date: Wed, 26 Apr 2000 12:28:46 -0600
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] says...
[ ... ]
> While nobody can make absolutely precise statements about
> computer architectures so far into the future, it is possible to
> make some general observations about possible trends.
No insult intended, but at least as I see it, part of that basically
just translates to "as long as you don't really commit to anything,
nobody can exactly say you're wrong." While true, this may tend to
reduce the impact of the predictions (read that carefully to note that
I can weasel-word things with some of the best <G> ).
> *More functional units, registers
Functional units, probably, at least as long as you don't define the
term too tightly -- they may no longer resemble what we think of as
functional units too closely at all.
[ ... ]
> *Very large data/instruction caches
That, of course, depends on what you figure "very large" means. If
you simply mean they'll be a lot larger than is common right now,
you're almost certainly correct. If you try to guess the size using
a concrete number, there's little question in my mind that you're
more likely to be wrong than right, unless you simply include a
concrete number covering a huge range (e.g. "between 10 megabytes and
a few quadrillion terabytes" is probably safe, but mostly by not
really being a guess at all).
> *Main memory access will become even more expensive
Presumably you mean something like "relative to the speed of
registers." I'm not at all sure that's likely to be the case at all.
Rather the contrary, it wouldn't surprise me a lot of exactly the
opposite was true. Main memory being slower than the CPU is almost
entirely a matter of economics -- it does more good to concentrate
the technology in the CPU, and as long as we can keep building faster
and faster CPUs, that's likely to remain the case.
Assume, for the moment, that we find that extending CPU speeds
beyond, say, 1 THz starts to get REALLY expensive. If you want to
build a faster system, what do you do? There are two fairly obvious
routes: use more CPUs, and speed up other parts of the system.
Massively parallel systems work well for some problems, but none has
had any great success in the mainstream, making it hard for me to
predict that they'll take over anytime soon. That leaves speeding up
other parts of the system as the most likely method of getting speed
gains, assuming there really is some sort of ceiling on (economical)
CPU speeds.
It really comes down to one simple question: can we count on Moore's
law holding out for another 50 years? I'm not quite willing to say
it's categorically impossible, but I'm MUCH further from making a
categorical statement that it WILL do so either. I think it's only a
matter of time before the curve starts to flatten, and the only real
question is whether that will take more or less than 50 years. If I
were going to bet, I'd say less...
--
Later,
Jerry.
The universe is a figment of its own imagination.
------------------------------
From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: sboxes for the bored...
Date: Wed, 26 Apr 2000 18:34:24 GMT
On Wed, 26 Apr 2000 16:29:03 GMT, in <[EMAIL PROTECTED]>, in
sci.crypt Tim Tyler <[EMAIL PROTECTED]> wrote:
>[...]
>Large s-boxes take up too much space in hardware, and chew up your cache
>in software.
An 8-bit s-box only takes 256 bytes.
>It seems to me that using a potentially large number of
>small s-boxes seems to have some advantages over it:
>
>It is practical to test them for non-linearity; and offer guaranteed
>minimum levels. When creating a single large s-box, real-time testing
>is impractical and thus there's still a /chance/ of linearity (and thus
>of weak keys).
Well, testing is fairly quick even at the 8-bit level. But the point
is that we *don't* have to test 8-bit tables: We use multiple tables
and so make the probability of linearity far smaller than the
probability of choosing the correct key at random.
>If you use enough independent small s-boxes in series the resulting system
>can still be made quite strong.
Chaining substitutions in sequence just produces another substitution.
No possible sequence of 4-bit substitutions can be any different than
some other 4-bit substitution.
And no 4-bit substitution has a nonlinearity over 4. That means that
changing only 4 bits in the Boolean function for some bit position
will yield a linear Boolean function.
In contrast, it is difficult to even *find* an 8-bit substitution with
a nonlinearity below 78.
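These figures are easy to check by brute force for small tables. A
Python sketch that computes the nonlinearity of an s-box from its
Walsh spectrum (exhaustive, so only practical for small n):

    import random

    def nonlinearity(sbox, n):
        # NL = 2^(n-1) - max|W|/2, maximized over all nonzero output
        # masks b and all input masks a (the Walsh spectrum, naively)
        worst = 0
        for b in range(1, 1 << n):
            for a in range(1 << n):
                w = sum(1 if bin((b & sbox[x]) ^ (a & x)).count("1") % 2 == 0
                        else -1
                        for x in range(1 << n))
                worst = max(worst, abs(w))
        return (1 << (n - 1)) - worst // 2

    perm = list(range(16))
    random.shuffle(perm)
    print(nonlinearity(perm, 4))  # never more than 4 for a 4-bit bijection

The same function with n=8 over a shuffled range(256) is much slower,
but bears out the kind of figure quoted above.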
>If you use a small number of them the
>system can be made quite small and fast. Use of a single large s-box does
>not appear to offer you a simple way to trade-off between strength
>and performance.
But 4-bit tables have almost no strength at all. In contrast, each
8-bit table has almost enough nonlinearity on its own to be a cipher,
were nonlinearity the only issue.
That said, I have demonstrated ways to increase the nonlinearity of
small tables by using two small tables and mixing their outputs in
Balanced Block Mixers. So we might use small tables, at the cost of
more tables and another layer of mixing.
>If you could easily vary the size of the s-box this might be less of an
>issue - but if you only have one s-box you can't change its size without
>changing the number of inputs or outputs - which typically has knock-on
>effects elsewhere in your design.
Personally, I *never* have only one s-box. I believe in having many
such structures, and in generating them from the key. When used in
stream cipher combiners, I generally select from among multiple tables
"at random" (that is, using the keying sequence).
>The disadvantage is probably additional concern over security as a result
>of the greater structure involved.
I think that is exactly backwards: There is much less "structure" in
a random 256-byte table than in a random 16-nybble table. That's what
linearity means.
>If a block cypher is usually a large permutation constructed from smaller
>ones, it seems sensible to ask how large the smaller components should be.
>
>The possible answers seem to be "as small as possible while retaining
>non-linearity" - i.e. 3x3 or possibly 4x4 - "as large as possible
>without slowing things down to a crawl" - or "of a variety of sizes".
First, none of this works unless you have some sort of mixing which
does in fact effectively create large boxes from small ones.
Then, to obtain simulated large boxes from small ones we need multiple
small boxes, and a mixing process which does that. The larger the
tables we start out with, the fewer tables and fewer mixing layers
that are needed.
My answer is: "large enough for each actual table to have substantial
nonlinearity on its own, to protect against the case where the
opponent reaches the actual table." Since that only needs 8-bit
tables, which take only 256 bytes each, that seems a very reasonable
solution.
>FWIW, I'm quite attracted to the use of large numbers of smaller s-boxes.
---
Terry Ritter [EMAIL PROTECTED] http://www.io.com/~ritter/
Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
------------------------------
From: "Tony T. Warnock" <[EMAIL PROTECTED]>
Subject: Re: U-571 movie
Date: Wed, 26 Apr 2000 12:45:26 -0600
Reply-To: [EMAIL PROTECTED]
> Although the Purple and Naval dispatch codes were broken early in
> 1937. Makes you wonder why Pearl Harbor happened. Sort of like getting
> the U.S. involved in WW1. Lies told to the masses.........
For one thing, the attack order on Pearl was never sent over Purple, Naval
Dispatch, Orange, etc. The Japanese fleet kept radio silence. The Americans
knew something was up but they guessed that Manila or Taiwan would be the
target. This was before satellites or even long-range aerial
surveillance. It was a well-planned attack (not necessarily well
motivated).
A few months later at Midway, the USN did use their knowledge of Japanese
codes.
Purple, Orange, Coral, etc. were broken with only cyphertext. The
reconstructed codebreaking machines along with parts of some Japanese coding
machines can be seen at the NSA museum. The resemblance is uncanny.
------------------------------
From: "Tony T. Warnock" <[EMAIL PROTECTED]>
Crossposted-To:
alt.politics.org.cia,alt.politics.org.nsa,alt.journalism.print,alt.journalism.newspapers
Subject: Re: new Echelon article
Date: Wed, 26 Apr 2000 12:46:29 -0600
Reply-To: [EMAIL PROTECTED]
Volker Hetzer wrote:
> Lincoln Yeoh wrote:
> > Thing is I recall one prominent US judge ( I think New York) saying that
> > judges trying to achieve justice are dangerous and they should stick to the
> > law.
> >
> > That didn't look good to me.
> Personally I think, that's a very good thing.
> I'd have a really BIG problem if every judge just decides what he thinks is just.
> That would mean that the judges own political/moral agenda would enter the judgement
> process in a degree that would make me highly uncomfortable.
> What's just is not the judge's to decide, it's an ethical question that gets
> decided by society.
> That ethics is supposed to be shaped into law (which, in turn, is supposed to
> change over time) by some kind of legislative (and elected) committee.
> If people (or judges) don't like the judgement, they can always lobby
> for a new law, thereby going through a proper democratic process.
>
> > Laws and judges aren't perfect. However laws do not change by themselves,
> > so if judges just keep following the law there's less hope for improvement.
> More people decide about laws (by means of elections) than about judgements.
> That means IMHO that justice enacted by the few (judges) is less democratic
> than judges following the justice put into law (by politicians, and therefore
> by voters).
>
> > Good people in the right places promotes justice. Laws help maintain what
> > they do.
> So, if you want to change laws, don't become a judge.
>
Sort of the opposite of the current crop of US judges?
------------------------------
From: [EMAIL PROTECTED]
Subject: base #- digit #
Date: Wed, 26 Apr 2000 19:07:00 GMT
I just want to ask:
Don't we need a different number of bases
for a 250-digit number, a 100-digit number, or a 500-digit number?
What is the relation between the digit # and the # of bases?
In article <8e0uat$aoa$[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Scott Contini) wrote:
> In article <8e0qrh$mji$[EMAIL PROTECTED]>, <[EMAIL PROTECTED]> wrote:
> >
> >> >According to Robert D. Silverman (*), citing an article by
> >> >I. Damgard, P. Landrock et C. Pomerance, for numbers 512 bits
> >> >or more, 8 Miller-Rabin tests are enough for an error
> >> >probability below 2^-100.
> >>
> >> This assumes that the number to be tested for primality was
> >> selected randomly. If you don't know anything about where the
> >> number came from (for example, if somebody e-mailed you some
> >> number and asked you to test if it were prime) then you would
> >> require 50 iterations to have probability below 2^-100.
> >
> >
> >how do you calculate that? I mean, how do you find 50?
>
> probability for each one is 2^-2. For 50 randomly chosen tests,
> the probability is (2^-2)^50 = 2^-100.
>
> >
> >and I want to say that the paper written by Pinch states that there
> >should be n bases for an n-digit number. Any comments?
>
> Off of the top of my head, I'm not sure why he recommends that.
>
> >
> >And thanks for the replies ...
> >I implement basic tests - being even test - division by small primes
> >(upto 8191)
> >- prime power test - lucas - lehmann(euler)
> >
> >but Miller-Rabin - how many bases?
> >
>
> If the number is chosen randomly (assuming 512 bits), just use 8 bases
> (see the paper by Silverman). If the number is larger or smaller than
> 512 bits, you may require more or fewer bases to get a probability of
> error around 2^-100. I'm sure the paper by Silverman gives references
> to other papers that can tell you the exact number of tests required
> for various sized numbers. I know there was a paper by Pomerance and
> some crypto dudes that explained this in detail, but I don't have the
> reference right now.
>
> If you know nothing about where the number comes from, it suffices
> to use 50 tests.
>
> >and is there a formula?
> >
>
> (formula given above)
>
> >thanks ...
> >
> >ps : ok, Frobenius is good - but i don't have enough math knowledge
> >to write its code ... If someone supplies me the algorithm, why not?
> >
>
> The Frobenius test is due to Jon Grantham, and he has papers on it
> at his web site:
> http://www.clark.net/~grantham/jgpapers.html
>
> Scott
>
>
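For reference, a minimal Miller-Rabin sketch in Python; the iteration
counts discussed above (8 for random 512-bit candidates, 50 for
untrusted ones) plug in as the rounds argument. Illustrative only:

    import random

    def miller_rabin(n, rounds=8):
        # write n-1 = d * 2^s with d odd, then test random bases
        if n < 4:
            return n in (2, 3)
        if n % 2 == 0:
            return False
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # a is a witness: n is definitely composite
        return True           # probably prime

For a number handed to you by a stranger, call it with rounds=50 to
get the 2^-100 error bound discussed above.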
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: AEES 16 rounds
Date: Wed, 26 Apr 2000 12:42:20 -0700
After reading through your documentation, I found a few
problems:
1) You have no clue what a multiplication table is
2) You have not defined some of the functions you use
3) Without this (and I won't even bother trying to read your
source code, since I don't do Delphi) the algorithm will not
be analyzed by me.
Perhaps if you were to follow the guidelines used by the AES
finalists I might be inclined to give it another look.
Joe
------------------------------
From: [EMAIL PROTECTED] (Stefan Schlott)
Subject: Re: Secret splitting for kids' treasure hunt
Reply-To: [EMAIL PROTECTED] (Stefan Schlott)
Date: 26 Apr 2000 21:54:11 +0100
On 24 Apr 2000 14:14:36 -0700, Andru Luvisi <[EMAIL PROTECTED]> wrote:
>I've been mulling over a little idea for a four kid treasure hunt.
>I'm offering it here in case anyone else is interested, and also in
>the hopes that someone might be able to come up with an easier method
>for the kids to combine the parts of the secret.
A year ago, I used an optical secret sharing scheme to split a treasure
map into several parts. At first, none of the kids could imagine what
to do with those funny transparencies showing a lot of black rectangles.
But when they put them all together... (though proper positioning
was more difficult than I had thought).
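A sketch of how such transparencies can be generated, in the style of
Naor and Shamir's 2-of-2 visual cryptography; the subpixel details
here are my assumption, not necessarily what was actually used:

    import random

    secret = [1, 0, 1, 1, 0]      # 1 = a black pixel of the map
    share_a, share_b = [], []
    for bit in secret:
        pat = random.choice([(0, 1), (1, 0)])
        share_a.append(pat)
        # white pixel: same pattern on both shares; black: complementary
        share_b.append(pat if bit == 0 else (1 - pat[0], 1 - pat[1]))
    # stacking transparencies ORs the subpixels: black pixels come out
    # fully black, white pixels only half black
    stacked = [(a[0] | b[0], a[1] | b[1]) for a, b in zip(share_a, share_b)]
    print(stacked)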
Stefan.
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Magnetic Remanence on hard drives.
Date: Wed, 26 Apr 2000 20:06:56 GMT
Jonathan Thornburg <[EMAIL PROTECTED]> wrote:
[...]
>> ## http://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html
>>
>> Secure Deletion of Data from Magnetic and Solid-State Memory
[...]
> explains how/why data recovery *is* both possible and practical,
> in some detail.
While I disagree strongly with the original post's claim that a single
pass of zeros is enough, the other point is valid. The paper given above
does not contain any experimental results. In fact, the only thing even
close to experimental data is the experience of people who work for
commercial recovery firms, the FBI forensics staff, etc. This leads to
the situation where everyone agrees it's possible to recover
overwritten data, but no one knows how hard it is.
Obviously, the safest bet is destruction of the media. For simple
privacy on a networked personal computer, overwriting once is probably
enough, since that prevents remote recovery of the file. In between
the two is a slippery slope. Once you start trying to guesstimate the
cost of recovering data written over 2, 5, 500, 10000 times you're
floundering at sea without a raft.
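For the simple-privacy case, a single-pass overwrite can be as little
as this (best effort only: journaling filesystems and block remapping
can leave copies behind):

    import os

    def zero_overwrite(path):
        # overwrite the file's current contents with zeros, in place
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())  # push the zeros through to the device

    zero_overwrite("secrets.txt")  # hypothetical filename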
--
Matt Gauthier <[EMAIL PROTECTED]>
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************