Cryptography-Digest Digest #502, Volume #9 Wed, 5 May 99 08:13:06 EDT
Contents:
Re: Factoring breakthrough? (Hawkhaven)
Re: Thought question: why do public ciphers use only simple ops like shift and XOR?
([EMAIL PROTECTED])
Re: Fast random number generator (Mok-Kong Shen)
Re: BEST ADAPTIVE HUFFMAN COMPRESSION FOR CRYPTO (Mok-Kong Shen)
Re: The Factoring Breakthrough... (Mok-Kong Shen)
Re: Compact look-up table - weak? ([EMAIL PROTECTED])
Re: Fast random number generator (Mok-Kong Shen)
Re: Compact look-up table - weak? (Mok-Kong Shen)
Looking for Time Stamp Service ("Johannes Farmer")
----------------------------------------------------------------------------
From: Hawkhaven <[EMAIL PROTECTED]>
Subject: Re: Factoring breakthrough?
Date: Tue, 4 May 1999 22:20:42 +0200
> >Here are my notes from that talk:
> >
> >The idea is to break 40 bit DES using film as an analog computer.
> >Photographic film can hold 2^30 pixels per square inch, so 10 yards
> >of 70 mm movie film could hold 2^40 pixels.
> >
> >Treating pieces of such film as long, 2^40 bit "registers", certain
you assume that you will be able to have everything developed perfectly
every time... I'm something of a photographer myself, and I can tell you
that the actual grain (resolution) you get is anything but 100% predictable
> >logical operations can be performed on them using photographic techniques.
this all assumes, of course, that you could get each pixel perfectly exposed
the way you want it, and none of the other pixels around it adversely
exposed in the process... good luck with that... remember that the whole
idea behind photographic film is a chemical reaction in the light-sensitive
silver halide... I submit to you that such a procedure would be a most
elaborate waste of your time, as I am certain you could do a better job
with existing devices... and come on, could you stand the smell of fixer on
your hands every time you used this gadget of yours? If you had been forced
to smell fixer on a regular basis at one point in your life, I'm confident
this whole idea would suddenly seem a lot worse to you...
> >Negating a register can be done by taking the photographic negative of
> >the film. NOR is done by stacking multiple strips of film and exposing
> >through them. NAND is a double exposure (first expose through one,
> >then expose through the other). XOR is a combination of these.
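for what it's worth, the boolean part of the scheme does check out... here
is a quick C sketch of one combination that gives XOR from the
film-implementable ops (a machine word stands in for a "register"; the
particular decomposition is my own choice, since the notes only say "a
combination"):

#include <stdio.h>

/* The film primitives: NOT = photographic negative, NOR = expose
   through stacked strips, NAND = double exposure.  One combination
   that yields XOR (mine; the notes just say "a combination"):
   XOR(a,b) = NOR( NOR(a,b), NOT(NAND(a,b)) ).                     */

static unsigned f_not (unsigned a)             { return ~a; }
static unsigned f_nor (unsigned a, unsigned b) { return ~(a | b); }
static unsigned f_nand(unsigned a, unsigned b) { return ~(a & b); }

int main(void)
{
    unsigned a = 0x0F0Fu, b = 0x00FFu;
    unsigned x = f_nor(f_nor(a, b), f_not(f_nand(a, b)));
    printf("%04x (expect %04x)\n", x & 0xFFFFu, (a ^ b) & 0xFFFFu);
    return 0;
}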
--Hawkhaven
"Win if you can, lose if you must, but always, always cheat!"
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Thought question: why do public ciphers use only simple ops like shift
and XOR?
Date: Wed, 05 May 1999 09:01:02 GMT
Terry Ritter wrote:
> [EMAIL PROTECTED] wrote:
> > Suppose an attacker has a 5% chance
> >of breaking any of our ciphers (which may be multi-ciphers).
> >If we choose one and only one cipher, then we have a 5% chance
> >of exposing all our traffic. If we encrypt each message with a
> >random choice from 1000 ciphers, then we will almost certainly
> >lose 5% of our traffic. The problem is that a random 5% of
> >the traffic probably contains most of the useful intelligence.
>
> But that is not a good model of the real situation.
>
> In reality, an attacker has some chance of breaking a cipher *given*
> *some* *amount* *of* *time,* *effort* *and* *resources*.
Weak ciphers are weak whether someone has found the tractable
break(s) or not. Given the assumption that the attacker is
much more capable than ourselves, all he has to do is pick out
the ciphers that his methods can break.
> With only one cipher, an attacker continues to pour effort and
> resources into that one problem.
And if that one is in fact secure, he never gets anything.
> With a growing body of ciphers, there is less effort and fewer
> resources for attacking any one cipher. Ciphers might even be retired
> after a few years of service, thus limiting the effort spent attacking
> them, and also limiting the value of information they protect.
Retiring ciphers is good practice, but it's up to the attacker
to decide how much effort to spend attacking one. Protect the
design of a nuclear weapon with cipher X, and the secrecy of
the design depends on cipher X forever.
[...]
> You address a supposed "flaw" in the many-cipher system, then fail to
> address the analogous flaw in the one-cipher system you prefer.
I am considering the flaws of both sides when I say that a
five percent chance of exposing all our traffic is far better
than near certainty of exposing five percent of our traffic.
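A quick simulation makes the comparison concrete. The 5% break probability
and the 1000-cipher pool are the figures from this thread; the message
count and everything else are invented for illustration:

#include <stdio.h>
#include <stdlib.h>

#define NCIPHERS 1000       /* pool size, from the thread      */
#define NMSGS    100000     /* invented message count          */
#define PBREAK   0.05       /* assumed per-cipher break chance */

static int broken(void) { return rand() < PBREAK * RAND_MAX; }

int main(void)
{
    int weak[NCIPHERS], i, exposed = 0;
    srand(1999);

    /* One cipher: a 5% chance of losing everything. */
    printf("one cipher:   %s\n",
           broken() ? "ALL traffic exposed" : "no traffic exposed");

    /* 1000 ciphers, one picked at random per message:
       near-certainty of losing about 5% of the traffic. */
    for (i = 0; i < NCIPHERS; i++) weak[i] = broken();
    for (i = 0; i < NMSGS; i++)    exposed += weak[rand() % NCIPHERS];
    printf("many ciphers: %.1f%% of traffic exposed\n",
           100.0 * exposed / NMSGS);
    return 0;
}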
> The "flaw" here is that information flow contains redundancy which
> allows significant intelligence to be extrapolated from a small
> proportion of exposed messages. But *that* is fact; the "flaw" part
> of this is the exposure of some amount of the hidden information as a
> consequence of that fact.
You lost me. The exposure of information is not a consequence of
being able to extrapolate from a small portion of the traffic.
> Broken ciphers expose information in both systems: In the
> multi-cipher case, we can assume that having more ciphers means more
> ciphers could be broken. Fine. But if a cipher is broken, the user
> proceeds on and the next cipher will be secure again. A past exposed
> message simply does not reveal a future creative activity.
The next cipher is no more likely to be secure than the
current cipher. You'll move on to other ciphers whether
the current one is secure or not, and whether the ones you
move to are secure or not. An attacker with more powerful
techniques of cryptanalysis constantly has access to a
random portion of the messages.
[...]
> >In any case, compartmentalization by subject area makes sense,
> >while random compartmentalization does not.
>
> On the contrary, automatic random compartmentalization makes at least
> as much sense, and requires far less management.
>
> With normal compartmentalization, a single individual within the
> compartment can reveal most of that compartment *plus* what can be
> extrapolated from *that* vast quantity of divergent information.
That would seem to be a property of smaller compartments, not
random compartments.
[...]
> Oddly, I see the issue of the unknown break which continues to be
> exploited for the foreseeable future as a breathtaking flaw in
> conventional cryptography. Given that the ordinary references do not
> discuss this little detail, I suspect it will take some time for users
> to understand what this issue means to them.
I suspect the users and the authors of those references
already have a better understanding than you think.
> >[...]
> >The real problem is that we expect the number of breakable ciphers
> >in use to be roughly proportional to the number of ciphers.
>
> I agree.
>
> >It may
> >be worse, since we can't validate each of 1000 ciphers nearly as
> >well as each of a few.
>
> I agree.
>
> But the rest of the story is that attack effort is either apportioned
> among many ciphers, or focused on one. Which is better for the user?
If our theory is that ciphers are intrinsically breakable
but by definition secure unless or until someone discovers
the break, then it's a toss-up. The validation effort, like
the adversary's effort, is either focused or apportioned.
If we are looking for algorithms for which no tractable
break exists, then the one is better than the many. You
would be correct to point out that we cannot mathematically
prove the absence of a break. But claims that validation
by cryptanalysis is "unscientific" are completely false. We
have a hypothesis - that no tractable break of the cipher
exists - and we can describe the evidence that would refute
our hypothesis. If our hypothesis is false, then the
evidence to refute it necessarily exists; there's only the
question of whether we can find it. As good scientists, we
therefore actively search for the evidence that would
disprove our theory.
Using the scientific method on mathematical subject matter
puts us on shaky ground. Unfortunately, the alternatives
we've seen are worse. After all your objections based on
how we can't prove or quantify the security of our systems,
what proof or quantification do we have of the security of
your proposal?
--Bryan
============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Fast random number generator
Date: Wed, 05 May 1999 12:18:59 +0200
*** Note: There was a problem with my news server. Apologies if you
get duplicates.
John Savard wrote:
>
> My more elaborate method is for use in special circumstances:
>
> - available random numbers are short, varying from 1 to the size of
> the list (or 0 to size-1)
>
> - the random numbers are felt to be of good quality
>
> - floating-point arithmetic, or even 16-bit multiplication, is to be
> avoided.
>
> It's an attempt to produce passably random permutations using only
> byte operations.
The classical shuffling algorithm depends on a good random number
generator to obtain integer random numbers not greater than size-1. If you
already have such numbers (how do you get these in the first place?),
then you can certainly apply the classical algorithm (simply swapping
the entities with the help of the integer random numbers). So I don't
yet see the point of the two items you listed above, nor the real
advantage of your special method.
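(For reference, the classical algorithm is only a few lines of C;
rand_below() is the assumed source of good small random numbers, and the
permutation is exactly as good as that source:)

#include <stddef.h>

/* Classical (Fisher-Yates) shuffle.  rand_below(k) is assumed to
   return uniform integers in [0, k); every permutation is equally
   likely exactly when that assumption holds. */
extern unsigned rand_below(unsigned k);

void shuffle(unsigned char *a, size_t n)
{
    size_t i;
    if (n < 2) return;
    for (i = n - 1; i > 0; i--) {
        size_t j = rand_below((unsigned)(i + 1));   /* 0 <= j <= i */
        unsigned char t = a[i]; a[i] = a[j]; a[j] = t;
    }
}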
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: BEST ADAPTIVE HUFFMAN COMPRESSION FOR CRYPTO
Date: Wed, 05 May 1999 12:31:40 +0200
*** Note: There was a problem with my news server. Apologies if you
get duplicates.
SCOTT19U.ZIP_GUY wrote:
>
> In article <[EMAIL PROTECTED]>,
> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> >
> > Could we confine ourselves to concrete discussion instead of asking
> > such questions as 'Do you understand this, yes or no?' ?? If the
> > partner understands, he will either pass over the point or (more
> > correctly) acknowledge that you are right. If he doesn't, he will
> > ask for clarification or counter your arguments.
> >
>
> You, Mr. Shen, were the one to USE YES OR NO, but fine, let's drop that.
This is a straight reversal of the truth!!! I challenge you to give
the date and time of my posting to show that I first used 'YES or NO'.
I never use such a phrase in a question, because I personally don't
like it, considering it a bit impolite (but this could simply be a
misunderstanding due to my poor knowledge of the English language).
>
> > Now you said 'separate compression from encryption'. Isn't the
> > 'key' in your 'the enemy tries various keys' an issue of encryption?
> > Clearly it is. So, following your suggestion of separating compression
>
> No, it is not so CLEAR. Your adding of random bits was, to me, part
> encryption. What this whole thread is about is compression that
> would be suitable for a file to undergo before it is encrypted.
> Most compression schemes use headers of one form or another, or an
> EOF symbol. However, it is possible to compress a file to a shorter
> length if one can drop the use of an EOF symbol. If you bother
> to look at my code, or even look at test cases, you can see that
> sometimes there is an EOF symbol of sorts, sometimes there is not,
> and sometimes the last symbol that would have been written by
> the adaptive Huffman compression is truncated and not fully
> written to the file. To understand why, one must think!
> In testing the 3 alternate endings I came up with, I found that any
> random pattern of a finite number of bytes, when decompressed to
> a file and then compressed back, would be identical.
> I am not GOD (or the devil), so I may have made a mistake. If
> anyone can find a short file of a few million bytes or less that
> does not have this property when using my program, then either I
> made an error, or maybe I am trying to do something that is
> not possible. But I think it is possible, and I have tested many cases.
> Now the whole KEY POINT in my mind is that you want a compression
> routine that leaves no information for the attacker. After you
> compress a file, you then "separately encrypt the file". It would
> be nice, from a mathematical point of view, if whenever an attacker
> tries a key (or block of keys), the file resulting from the
> decryption is of a form that could have come from an "actual
> compression". Look, I am talking about all files, not just ASCII,
> though I suspect some people may only encrypt ASCII. If the file
> can be decompressed to a file and then recompressed to the same file,
> then the attacker has to dig deeper to find the message. But if
> the compression method always used an EOF or the like, then the
> attacker has to look no farther and can reject immediately files
> of certain forms. Yes, you could have a secret deal with your buddy
> that you will make the last set of symbols garbage or whatever, and
> that in special messages to him you may drop the EOF, but that
> is an encryption issue separate from this thread.
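(If I understand the point about the EOF correctly, the attack amounts to
the following sketch, where decrypt() and try_decompress() are hypothetical
stand-ins for whatever routines are actually in use:)

#include <stddef.h>

/* Hypothetical stand-ins for the cipher and compressor in use. */
struct key;
extern void decrypt(const unsigned char *ct, size_t n,
                    const struct key *k, unsigned char *pt);
extern int  try_decompress(const unsigned char *pt, size_t n);

/* Why a mandatory EOF symbol (or header) helps the attacker:
   a trial key whose decryption does not even parse as valid
   compressor output can be rejected at once.  A one-to-one
   compressor, for which every bit pattern is valid output,
   never gives the attacker this shortcut. */
int reject_key(const unsigned char *ct, size_t n, const struct key *k)
{
    unsigned char pt[65536];
    if (n > sizeof pt) return 0;        /* too big to test; keep key */
    decrypt(ct, n, k, pt);
    return try_decompress(pt, n) != 0;  /* nonzero = invalid = reject */
}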
There are different sorts of files in practice. Some files need
whole words, some need whole bytes (e.g. UNIX), and there are ones that
can have a length of any number of bits. For the third sort of files
there is no problem with Huffman compression at all. For the first two
sorts the Huffman compression has a high probability of not ending
at a word/byte boundary. (This has nothing to do with whether the
input file is ASCII or not!!) So what should an implementation do?
Put in all 0's (or all 1's)? Well, you can do that. But on decompression
the algorithm will pick these up and try to decode them, and the chance
is that some of the 0's lead to valid symbols according to the Huffman
table, and these translations will be delivered to the user, which is
annoying at the least. Filling with random bits has the same effect.
That's why I suggested employing a special EOF. (By the way, aren't
your 3 alternatives, or whatever, covered by my discussion above?
If not, please clearly state which situation is not covered.
Further, whether I bother to look at your code or not CANNOT be the
question, for the US crypto laws simply forbid me to fetch ANY
crypto code from the US. I already said that last time. Do you want
me to get into trouble with the law?) EOF is only one symbol and
it translates to only a few bits. What is your point about the
compressed file being longer due to the EOF? Simply because of an
increase of these few bits???
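To make the padding problem concrete, here is a tiny C sketch with an
invented four-symbol prefix code (A=0, B=10, C=110, EOF=111; the table is
mine, purely for illustration). Zero-filling invents symbols, while a
dedicated EOF stops the decoder cleanly:

#include <stdio.h>

/* Decode a bit string ('0'/'1' characters, byte-aligned input)
   under the toy prefix code A=0, B=10, C=110, EOF=111. */
static void decode(const char *bits)
{
    while (*bits) {
        if (bits[0] == '0')      { putchar('A'); bits += 1; }
        else if (bits[1] == '0') { putchar('B'); bits += 2; }
        else if (bits[2] == '0') { putchar('C'); bits += 3; }
        else { printf("<EOF>"); break; }    /* 111: stop cleanly */
    }
    putchar('\n');
}

int main(void)
{
    decode("11010000");  /* "CB" zero-padded: prints CBAAA, spurious A's */
    decode("11010111");  /* "CB" + EOF symbol: prints CB<EOF>            */
    return 0;
}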
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: The Factoring Breakthrough...
Date: Wed, 05 May 1999 12:42:45 +0200
John Savard wrote:
>
> As it turns out, Dr. Shamir simply gave a proposal for making an
> electronic version of the Lehmer sieve. This is a special-purpose
> hardware device that looks for numbers which are equal to (this or
> that) modulo one number, and (that or the other thing) modulo another
> number, and so on and so forth.
I think it could be valuable to consider whether there can be a
clever selection of the numbers supplied to Lehmer's sieve so
as to shorten the total processing time for getting the factors.
Does anyone know of work in that direction, or is it impossible
for good heuristics to exist?
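As an illustration of what the sieve buys (in software, and on an invented
toy example): Fermat's method factors odd N by finding x with x^2 - N a
perfect square, and for any modulus m the value x^2 - N must then be a
quadratic residue mod m, which rules out most residue classes of x in
advance. That pre-selection is essentially what Lehmer's sieve does in
hardware:

#include <stdio.h>

#define N 1640957UL   /* = 1279 * 1283, an invented toy example */

static int is_qr[64];  /* quadratic residues mod 64 */

int main(void)
{
    unsigned long x, t, y;
    for (t = 0; t < 64; t++) is_qr[(t * t) % 64] = 1;

    for (x = 1281; ; x++) {          /* start at ceil(sqrt(N)) */
        t = x * x - N;
        if (!is_qr[t % 64]) continue;    /* sieve: cheap rejection */
        for (y = 0; y * y < t; y++) ;    /* slow exact square test */
        if (y * y == t) {
            printf("%lu = %lu * %lu\n", N, x - y, x + y);
            return 0;
        }
    }
}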
M. K. Shen
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Compact look-up table - weak?
Date: Wed, 05 May 1999 10:43:01 GMT
> To classify something as weak or strong is relative; relative to what? A
> scaled-down version is going to be weaker than a scaled-up version... I
> like scalability to demonstrate flexibility in a generic algorithm.
Well, I am going from what I read in AC and the AES papers. Most people
stay away from smaller tables. The smallest I have seen are 4x32 tables...
There obviously is no proof that a small s-box is weak; it depends on the
algorithm, but in most SP-network ciphers small s-boxes are subject to
analysis and usually can be broken faster than brute force.
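To put a number on "subject to analysis": the difference distribution
table of a 4-bit s-box takes a dozen lines to compute. The table below is
an invented example, not from any published cipher; the bigger the best
entry, the better a differential attack works:

#include <stdio.h>

/* An invented 4-bit s-box (a permutation of 0..15). */
static const int S[16] = { 3, 14, 1, 10, 4, 9, 5, 6,
                           8, 11, 15, 2, 13, 12, 0, 7 };

int main(void)
{
    int ddt[16][16] = {{0}}, dx, x, best = 0;

    /* ddt[dx][dy] counts inputs x with S[x] ^ S[x^dx] == dy. */
    for (dx = 0; dx < 16; dx++)
        for (x = 0; x < 16; x++)
            ddt[dx][S[x] ^ S[x ^ dx]]++;

    for (dx = 1; dx < 16; dx++)      /* skip the trivial dx = 0 */
        for (x = 0; x < 16; x++)
            if (ddt[dx][x] > best) best = ddt[dx][x];

    /* best/16 is the probability of the best one-round
       differential; with only 16 inputs it cannot be small. */
    printf("best differential: %d/16\n", best);
    return 0;
}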
Tom
--
PGP public keys. SPARE key is for daily work, WORK key is for
published work. The spare is at
'http://members.tripod.com/~tomstdenis/key_s.pgp'. Work key is at
'http://members.tripod.com/~tomstdenis/key.pgp'. Try SPARE first!
============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Fast random number generator
Date: Wed, 05 May 1999 09:49:41 +0200
John Savard wrote:
>
> [EMAIL PROTECTED] (John Savard) wrote, in part:
>
> >If one uses a generator of good random numbers much larger than m, one can of
> >course use the classical shuffle algorithm to guarantee that all shuffles are
> >equally likely:
>
> In response to a post of Mok-Kong Shen which hasn't shown up on my
> server, the classical method is simple, and produces all possible
> permutations with equal probability given good-quality random numbers
> of sufficient precision.
>
> My more elaborate method is for use in special circumstances:
>
> - available random numbers are short, varying from 1 to the size of
> the list (or 0 to size-1)
>
> - the random numbers are felt to be of good quality
>
> - floating-point arithmetic, or even 16-bit multiplication, is to be
> avoided.
>
> It's an attempt to produce passably random permutations using only
> byte operations.
The classical shuffling algorithm depends on a good random number
generator to produce small random numbers in the range [0, size-1].
If you already have such good small random numbers (how do you get
these in the first place?) then you can just apply the classical
algorithm to swap the entities and you have the desired random
permutation. So I don't yet see the point of the two items you
listed above.
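(As for where the good small numbers come from: a stream of random bytes
yields uniform integers in [0, k) with nothing more than 8-bit comparisons
and a remainder, by rejection; random_byte() below is the assumed byte
source:)

/* Uniform integers in [0, k), 1 <= k <= 256, from random bytes by
   rejection: discard bytes at or above the largest multiple of k
   not exceeding 256, so every residue is equally likely.
   random_byte() is an assumed source of uniform random bytes. */
extern unsigned char random_byte(void);

unsigned rand_below(unsigned k)
{
    unsigned limit = 256 - (256 % k);  /* largest usable range */
    unsigned b;
    do {
        b = random_byte();
    } while (b >= limit);
    return b % k;
}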
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Compact look-up table - weak?
Date: Wed, 05 May 1999 14:04:28 +0200
[EMAIL PROTECTED] wrote:
>
> > To classify something as weak or strong is relative; relative to what?
> > A scaled-down version is going to be weaker than a scaled-up version...
> > I like scalability to demonstrate flexibility in a generic algorithm.
>
> Well, I am going from what I read in AC and the AES papers. Most people
> stay away from smaller tables. The smallest I have seen are 4x32 tables...
>
> There obviously is no proof that a small s-box is weak; it depends on the
> algorithm, but in most SP-network ciphers small s-boxes are subject to
> analysis and usually can be broken faster than brute force.
It is my understanding that Shaw's point is the general advantage of
having flexibility and genericity in the design of algorithms. As he
said, [for the same design] the small version is [almost certainly]
weaker than the larger version (brackets mine). But such flexibility
has to be built in; not all designs have it. I personally like to
call these flexible designs parametrized, since a user can choose
one or several parameters to obtain a version of the algorithm
suited to his needs.
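In code, a parametrized design can be as simple as carrying the parameter
in the state; here is an illustrative sketch (invented, not from any
particular cipher) where the s-box width is chosen by the user at setup
time:

/* A substitution step parametrized by s-box width: the same
   routine serves 4-, 8- or 12-bit tables, the user picking the
   width (and supplying a table of size 1 << width) at setup.
   Purely illustrative. */
struct sbox {
    unsigned        width;   /* bits per symbol */
    const unsigned *table;   /* permutation of 0 .. (1<<width)-1 */
};

unsigned substitute(const struct sbox *s, unsigned x)
{
    return s->table[x & ((1u << s->width) - 1u)];
}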
M. K. Shen
http://www.stud.uni-muenchen.de/~mok-kong.shen/ (Updated: 12 Apr 99)
------------------------------
From: "Johannes Farmer" <[EMAIL PROTECTED]>
Subject: Looking for Time Stamp Service
Date: Wed, 5 May 1999 14:09:55 +0200
I need a trusted time source.
Is there a free time-stamping server or service (one that implements the
PKI Time Stamp Protocol) on the Internet?
I need it for testing my software.
Thank you very much for any help,
--
Johannes Farmer
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************