Cryptography-Digest Digest #343

2000-12-16 Thread Digestifier

Cryptography-Digest Digest #343, Volume #13  Sat, 16 Dec 00 10:13:01 EST

Contents:
  Re: How does public key cryptography work, mathematically? (Simon Johnson)
  Re: Nonlinear Diffusion with S-Boxes? (Simon Johnson)
  Re: How does public key cryptography work, mathematically? (John Savard)
  Re: Securing credit card numbers (Simon Johnson)
  Re: Q: Result of an old thread? (Mok-Kong Shen)
  Re: Q: Result of an old thread? (Mok-Kong Shen)
  Encryption detail added to cipher page (Gianna Stefani)
  Nonlinearity question (Benjamin Goldberg)
  Re: NT4 Password ("Mike The Man")
  does CA need the proof of acceptance of key binding ? ([EMAIL PROTECTED])
  Re: Unguessable sequence of unique integers? (Mok-Kong Shen)
  Re: Nonlinearity question (Mok-Kong Shen)



From: Simon Johnson [EMAIL PROTECTED]
Subject: Re: How does public key cryptography work, mathematically?
Date: Sat, 16 Dec 2000 11:42:13 GMT

In article [EMAIL PROTECTED],
  [EMAIL PROTECTED] wrote:
 I purchased Applied Cryptography, and I've been reading it thoroughly,
 and while doing so, I've pondered the creation and programming/math
 involved in encryption.  Symmetrical keys make perfect sense to me.
 If f(x) = character * key, simply solve for key.  But I don't
 understand how there can be a public key for encrypting and a private
 key for decrypting that both generate non-gibberish.

 To achieve this, I am assuming that the encryption/decryption
 algorithms are different where the key is applied.  Applied
 Cryptography suggests to assume that the source of your program is
 available, so wouldn't someone simply be able to look at your two
 equations and derive one key from the other?  That can't be right
 though... How does it work?


What we do is take a problem that is fundamentally hard, say factoring.
We then publish a function F(x) and compute a function G(x) such that
G(F(x)) = x.  The trick is to base the derivation of G(x) from F(x) on
knowing the solution to the difficult problem.  Thus, an attacker would
have to solve the hard problem to work out the relationship between the
two keys.
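In RSA terms (the standard concrete example, which Applied Cryptography
covers), a toy sketch looks like this.  The primes here are absurdly
small and purely illustrative; real keys use primes hundreds of digits
long:

```python
# Toy RSA sketch: the public function is F(x) = x^e mod n; inverting it
# without d is believed to require factoring n.
p, q = 61, 53                # secret primes (illustrative only)
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # computable only if you can factor n
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent, from e and the secret phi

m = 42                       # message
c = pow(m, e, n)             # encrypt with the public key: F(m)
assert pow(c, d, n) == m     # decrypt with the private key: G(F(m)) = m
```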

Simon.
--
Hi, I'm the signature virus;
help me spread by copying me into your signature file.


Sent via Deja.com
http://www.deja.com/

--

From: Simon Johnson [EMAIL PROTECTED]
Subject: Re: Nonlinear Diffusion with S-Boxes?
Date: Sat, 16 Dec 2000 12:06:09 GMT

In article [EMAIL PROTECTED],
  [EMAIL PROTECTED] wrote:

 Hello!

 Bet this is old hat to many here, but I only had this realisation a
 few days ago (in another discussion).

 I was wondering on doing bits of diffusion using nonlinear S-boxes in
 the following way:

   a0, a1: Initial words (all words the same size)
   b0, b1: Final words
   t0, t1: Temporary words
   S-box:  A wordsize by wordsize S-box

   t0 = a0 xor a1
   b0 = a0 xor S-box[t0]
   b1 = a1 xor S-box[t0]

 Deriving the inverse:

   t1 = b0 xor b1
  = a0 xor S-box[t0] xor a1 xor S-box[t0]
  = a0 xor a1
  = t0
   That's t0 recovered...

   c0 = b0 xor S-box[t0]
  = a0 xor S-box[t0] xor S-box[t0]
  = a0
   That's a0 recovered, and a1 can be recovered in the same way.

 An inverse of the S-box isn't needed, so the S-box doesn't need to be
 bijective (though I can imagine a nonbijective (unijective?) S-box
 would reduce diffusion).

 What struck me as neat about this is that it can combine nonlinearity
 with diffusion, and it's an involution (it is its own inverse).
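A quick Python sketch of the construction (byte-sized words and a
random S-box are my assumptions; the post fixes neither), confirming
the involution property:

```python
import random

random.seed(1)
# An arbitrary 8-bit S-box; it need not be bijective for this construction.
sbox = [random.randrange(256) for _ in range(256)]

def mix(a0, a1):
    # t0 = a0 xor a1;  b0 = a0 xor S[t0];  b1 = a1 xor S[t0]
    t0 = sbox[a0 ^ a1]
    return a0 ^ t0, a1 ^ t0

b0, b1 = mix(0x3C, 0xA7)
# b0 xor b1 == a0 xor a1, so applying mix again xors the same S-box
# output back off: the transform is an involution.
assert mix(b0, b1) == (0x3C, 0xA7)
```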

 Thinking about it further, I ended up with an idea of a block cipher
 round using this.  It exclusively uses one function:

   f( a0, a1 ) = S-box[ a0 xor a1 ]

 With a round key the same size as the block, and the block arranged as
 a two-dimensional array of words with two rows (and the key likewise,
 if you like):

   w0 w1 w2 w3 w4 w5 w6 w7
   w8 w9 wA wB wC wD wE wF

 (16 words for this example.)

 1.  Apply function f to corresponding block and key words, putting the
 results in the block:

   w0' = f( w0, k0 ) = S-box[ w0 xor k0 ]

 (Nothing new there!)

 2.  Apply function f to the words of each column of the block, putting
 the results in a temporary row (the same size as a block row):

   t4 = f( w4, wC )

 3.  Apply function f to the temporary row and the block's top row,
 putting the results in the top row:

   w3 = f( w3, t3 )

 4.  And again, but with the bottom row of the block:

   wB = f( wB, t3 )

 That's key addition, nonlinear word substitution, and nonlinear word
 pair diffusion done using just one function!  (Of course, transposing
 the words between such rounds would obviously be necessary to get the
 whole block to diffuse (hypercubic?  I like hypercubes, they're easy).)
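A literal Python reading of the four steps (again with byte words and a
random S-box as assumptions); this only demonstrates the data flow, not
that the round is sound or invertible:

```python
import random

random.seed(2)
S = [random.randrange(256) for _ in range(256)]  # illustrative S-box

def f(a0, a1):
    # The single function the whole round is built from.
    return S[a0 ^ a1]

def round_once(block, key):
    # block, key: 16 byte-sized words, viewed as two rows of 8.
    w = [f(b, k) for b, k in zip(block, key)]       # 1. key mix + substitution
    t = [f(w[j], w[j + 8]) for j in range(8)]       # 2. columns -> temp row
    top = [f(w[j], t[j]) for j in range(8)]         # 3. temp row -> top row
    bot = [f(w[j + 8], t[j]) for j in range(8)]     # 4. temp row -> bottom row
    return top + bot

out = round_once(list(range(16)), list(range(100, 116)))
assert len(out) == 16 and all(0 <= x < 256 for x in out)
```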

 Anyway, any thoughts?  Comments?  Confirmation that this is indeed old
 hat?  Worth using as part of a block cipher?  Fatally f

Cryptography-Digest Digest #343

2000-08-02 Thread Digestifier

Cryptography-Digest Digest #343, Volume #12   Wed, 2 Aug 00 21:13:01 EDT

Contents:
  Re: Software package locking ("Trevor L. Jackson, III")
  Re: Software package locking (Andru Luvisi)
  Re: unbreakable code? Yes ("Douglas A. Gwyn")
  Re: Blowfish Implementation ("Douglas A. Gwyn")
  Re: unbreakable code? Yes (Eric Smith)
  Re: Skipjack ("Douglas A. Gwyn")
  Re: counter as IV? ("Douglas A. Gwyn")
  Re: What vulnerabilities do I have? ([EMAIL PROTECTED])
  Re: unbreakable code? Yes (H. Peter Anvin)



Date: Wed, 02 Aug 2000 18:57:47 -0400
From: "Trevor L. Jackson, III" [EMAIL PROTECTED]
Subject: Re: Software package locking

Jeffrey Williams wrote:

 Actually, Trevor, I would be interested to hear the details of a software protection
 scheme which is demonstrably uncrackable (either mathematically secure or too time
 consuming, or whatever).

I've been on both sides of this fence, so I have some sympathy for both camps.

It may be possible to prove that mathematically secure software is theoretically
impossible.  My interest lies in practical software security, not provable security.
Thus we're left with systems that are impractical to attack.  But the term impractical
means that some definition of the attacker is behind the "too" part of the phrase "too
time consuming".  An attacker with unlimited resources can probably overcome any
possible piece of software.

So we're left with attackers of limited resources (time, machines, money, etc.)  The
nature of the defense against attackers of limited means is based on an assessment of
the cost to disable the individual parts of the security system.  The parts are like the
transforms that make up a cipher.  Individually they are reversible.  Collectively they
may interact in ways that explode the difficulty of isolating and disabling them.

First, let's dispose of some trivial cases.  If the program has a lesser privilege level
than the attacker, the attacker is probably going to win fairly quickly.  Thus a secure
program probably needs root/ring 0/etc. capabilities.  Of course the attacker can always
use more sophisticated hardware, such as an in-circuit emulator, to gain an unmatchable
advantage, but if the attacker's testing platform matches the run-time platform then the
program has a chance.

In the case of a vanilla system without much hardware protection a cracker with a good
debugger has unlimited access to the machine, and should be able to have his way with
any program.  But this is where it gets interesting.  There's an interface between the
debugger and the program.  That interface is subject to preemptive attack by the
program.

For example, breakpoints and single steps are powerful analytic tools that can overcome
any kind of obfuscation or encryption.  But what if the program interferes with the
breakpoint handler?  Let's say it stomps on the breakpoint vector at many places within
the code.  And it stomps on the vector with a value that is critical to the operation of
the program.  Let's say it uses the breakpoint trap as a replacement for the system
services trap (int 21 in PC-DOS lingo).

Now we have a situation where the program runs fine when loaded in a debugger as long as
no breakpoints are set.  Further, there are many places where the breakpoint vector will
be overwritten if the debugger is used to adjust it, and many places the program will
fail catastrophically if the cracker manages to find and fix all of the overwrites.

The same thing is done to the single-step vector, and any other system resource that the
debugger might use.

Then we add linkage through the BIOS: calling parts of the BIOS code, especially with
an eye to using the results of the call/jmp, not just threading flow of control through
the ROM.

Then we add routines that detect the identity of the spawning process and respond to the
debugger by writing to its memory space causing faults within the debugger code (just
which program is the debugger, and which is the debuggee?)  Most of the powerful
debuggers are fairly well known.  It's not hard to fingerprint them -- easier than
fingerprinting protocol stacks.

What debugger capability is easy to disable?  The breakpoint handler!  After all, its
address is _published_ in the breakpoint vector.  Just make sure the debugger
breakpoint handler can't break the program.  More interestingly, substitute your own
breakpoint handler.  _Simulate_ debugging yourself.  Make patches but remove them
before the program runs and restore them when it halts.

Then we add routines to detect clock skew.  If the debugger soaks up an appreciable
amount of CPU time this will be detectable.  If it ever waits for user input it will be
easily detectable.  Certainly a debugger could be written to fully simulate the real
time clock, freezing it when the program was pause
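The clock-skew idea translates to any era.  A minimal modern sketch
(Python purely for illustration; the original context is DOS-era x86,
and the work loop and threshold here are arbitrary choices):

```python
import time

def looks_single_stepped(threshold=0.25):
    # Time a short burst of work.  Under a single-stepping debugger (or
    # anything that freezes the process) the wall-clock time balloons far
    # beyond what the work itself costs.
    start = time.perf_counter()
    x = 0
    for i in range(10000):
        x ^= i
    return (time.perf_counter() - start) > threshold

# A normal, undisturbed run finishes far under the threshold.
assert looks_single_stepped() is False
```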

Cryptography-Digest Digest #343

2000-03-15 Thread Digestifier

Cryptography-Digest Digest #343, Volume #11  Wed, 15 Mar 00 17:13:02 EST

Contents:
  Re: Improvement on Von Neumann compensator? (Scott Nelson)
  Re: NIST, AES at RSA conference (Tim Tyler)
  Re: new/old encryption technique (wtshaw)
  Re: Improvement on Von Neumann compensator? (Scott Nelson)
  Re: NIST, AES at RSA conference (Bryan Olson)
  Re: NIST, AES at RSA conference (Tim Tyler)
  Re: NIST, AES at RSA conference (Bryan Olson)
  Re: Crypto Patents: Us, European and International. ("Trevor L. Jackson, III")
  Re: NIST, AES at RSA conference (wtshaw)
  Re: Weaknesses in Solitaire Algorithm Found (wtshaw)
  Re: Cipher Contest ("Joseph Ashwood")
  Re: how to introduce hs students to cryptography (Doug Stell)
  Re: pedagogical provably stupid protocols ([EMAIL PROTECTED])
  Re: Improvement on Von Neumann compensator? (Bob Silverman)
  Re: Universal Language (drickel)
  Number 28 of a series (wtshaw)
  Really Interested, Where do I look for ... (jack)



From: [EMAIL PROTECTED] (Scott Nelson)
Subject: Re: Improvement on Von Neumann compensator?
Reply-To: [EMAIL PROTECTED]
Date: Wed, 15 Mar 2000 20:17:02 GMT

On 14 Mar 2000 22:59:47 EST, [EMAIL PROTECTED] (Guy Macon) wrote:

In article [EMAIL PROTECTED], [EMAIL PROTECTED] (Scott Nelson) 
wrote:

Yes, measuring the total time between events is much better
than just using the least significant bit (a.k.a. latching 
an oscillator.)

If you measure time with infinite precision, and make the same
ridiculous independence assumptions that you must for the 
Von Neumann compensator to work, 
then this will produce unbiased bits.  
If time is measured with finite precision, 
then it's necessary to reject equal length timings, 
but it will then theoretically produce unbiased bits.

Would it not also answer Mike Rosing's observation that "The Von
Neumann method will stop sending output if the system locks up
and this proposal won't."?

If you reject equal length timings, 
then the timing method can lock up too.

It's also theoretically possible for an event to never
occur, which could also be considered a lock up of sorts.

However, it's exceptionally unlikely that you are dealing 
with actual independent events.  
[snip]

Hmmm. How about a von Neumann compensator to remove such
residual bias?

Unless you can absolutely guarantee that the events are
100% independent, Von Neumann's method won't produce
perfectly unbiased bits.
In real systems, you just can't get that.
That's why it's better to use a hashing function.
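For reference, the compensator under discussion, as a minimal sketch:
it examines non-overlapping bit pairs, emits 0 for 01 and 1 for 10, and
discards 00 and 11.  This removes bias only under exactly the
independence assumption being criticised above:

```python
def von_neumann(bits):
    # Non-overlapping pairs: 01 -> 0, 10 -> 1, 00/11 -> discard.
    # Unbiased ONLY if the bits are independent with constant bias.
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# Pairs (1,1) (1,0) (1,1) (0,1) (1,1): two survive, giving 1 then 0.
assert von_neumann([1, 1, 1, 0, 1, 1, 0, 1, 1, 1]) == [1, 0]
```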

SHA1 doesn't care what kind of bias you have; 
interdependence, sequency, frequency, offset - 
it doesn't matter.  Other "cheaper" hashing functions
like CRC or even XOR will do the same, though secure 
hashing functions tend to be a little better at
retaining entropy.  Hashing functions also have
the nice properties that they finish in a deterministic
amount of time, and you can do away with almost
all the other processing.
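The hashing approach, in the same minimal style (SHA-1 because the post
names it; the `extract` helper and the 16-byte output size are my
illustrative choices): collect raw biased samples and condense them,
crediting fewer bits out than the entropy believed to have gone in.

```python
import hashlib

def extract(raw, out_bytes=16):
    # Condense biased/correlated raw samples into a short output.
    # Conservatively take fewer bytes out than the entropy that went in.
    return hashlib.sha1(raw).digest()[:out_bytes]

pool = bytes([1, 1, 0, 1, 1, 1, 0, 1] * 32)  # 256 heavily biased samples
assert len(extract(pool)) == 16
```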

Scott Nelson [EMAIL PROTECTED]
- Don't forget to vote on sci.crypt.random-numbers

--

From: Tim Tyler [EMAIL PROTECTED]
Subject: Re: NIST, AES at RSA conference
Reply-To: [EMAIL PROTECTED]
Date: Wed, 15 Mar 2000 20:04:15 GMT

John Savard [EMAIL PROTECTED] wrote:
: [EMAIL PROTECTED] (Terry Ritter) wrote, in part:

:Cipher systems which change ciphers must coordinate cipher changes on
:both ends.  It would be insane to do this as unciphered plaintext.
:The opponents have no ability to force particular ciphers by changing
:the ciphertext because changed ciphertext will be detected.

: But you see where this is heading. If all ciphers are insecure, except
: when multi-ciphering is used, then the negotiation cipher is insecure
: unless you negotiate the negotiation, and then...

: Of course, you've already addressed this point, by noting that one
: enciphers the negotiation using a longer key, and more encipherment
: steps, than are necessary when the algorithms are from a large pool,
: unknown to the attacker.

There's also the issue that the negotiation is very short and involves
very little transfer of information.

The information can undoubtedly be presented in a condensed form, so its
plaintext characteristics can be concealed effectively.

Often, the probability of a break depends on the quantity of plaintext
transmitted under a single key.  As this is small, it should be possible
to make this element of the system rather formidable to break into.

The consequences if the attacker breaks in are that he knows the order of
the cyphers.  If the message is signed - or even if the encryption key
is not recovered by the break - any active attack is still defended
against.
-- 
__
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]

If it's God's will, who gets the money?

--

From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: new/ol

Cryptography-Digest Digest #343

1999-10-01 Thread Digestifier

Cryptography-Digest Digest #343, Volume #10   Fri, 1 Oct 99 09:13:04 EDT

Contents:
  Re: Perfect Shuffle Algorithm? (Dr D F Holt)
  Re: Hardest ever ECDL solved by INRIA researcher and 195 volunteers (Paul Crowley)
  Re: Cryptic manuscript... Help ([EMAIL PROTECTED])
  Re: RSA-512: Weizmann Institute: London Times (Paul Crowley)
  Re: RSA-512: Weizmann Institute: London Times ("NP")
  Re: NEMA, Swiss cipher machine (drobick)
  security: stream cipher ([EMAIL PROTECTED])
  proof ([EMAIL PROTECTED])
  Re: SNAKE Web Page ([EMAIL PROTECTED])
  Re: Q: Burrows-Wheeler transform (Tom St Denis)
  Re: msg for Dave Scott (jerome)
  Re: Example of a one way function? ("Boris Kolar")



From: [EMAIL PROTECTED] (Dr D F Holt)
Crossposted-To: sci.stat.math,sci.math
Subject: Re: Perfect Shuffle Algorithm?
Date: 1 Oct 1999 10:06:51 GMT

In article 7t0rjk$oat$[EMAIL PROTECTED],
[EMAIL PROTECTED] (Alan Morgan) writes:
In article [EMAIL PROTECTED], Douglas A. Gwyn [EMAIL PROTECTED] wrote:
David Franklin wrote:
 Firstly, I knocked up a brute force program to do this (took
 about 5 mins to write), and got the same answer as Clive Tooth
 (97020); the running time was just under 1 second. Which leads
 me to wonder about the LCM solution being "much simpler and
 faster" as the original interviewer apparently said. When the run
 time is 1 second, it's hard to justify spending time speeding it
 up (as a one-off problem at any rate).

But brute force doesn't scale well, while finding the cycles does.
You were just lucky that the period was only 97,020; it could have
been much larger if the parameters had been slightly different.

What is the longest possible period?  How does one find the answer to
that without using brute force (nothing occurs to me, but I haven't
given it much thought).

Alan


I should probably charge a fee for this information but the answer
for 1001 points appears to be:

89*83*79*73*71*67*61*59*53*47*43*41*37*31*29*23*19*17*13*11*7*5*27*16 =

1711349416536879655486838707297798320 ~=

1.711349E+36


As to how you do it, this question gets asked so often that I wrote a
short and simple C program to calculate the answer - it is basically
brute force with a little cleverness thrown in. It works OK up to degree
about 25000.
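For the curious, the cycle/LCM idea fits in a few lines (Derek's
program was C; this Python version is my own reconstruction of the
idea, not his code).  It computes the largest order of any permutation
of n points, i.e. the largest LCM over partitions of n, by a knapsack
over prime powers, since each prime can contribute at most one power:

```python
def landau(n):
    # Largest order of a permutation of n points (Landau's function).
    sieve = [True] * (n + 1)
    primes = []
    for p in range(2, n + 1):
        if sieve[p]:
            primes.append(p)
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    g = [1] * (n + 1)          # g[m] = best LCM of a partition of m
    for p in primes:
        # Descending m: each prime is committed to a single power.
        for m in range(n, p - 1, -1):
            pk = p
            while pk <= m:
                g[m] = max(g[m], g[m - pk] * pk)
                pk *= p
    return g[n]

assert landau(10) == 30   # 2 + 3 + 5 -> lcm(2, 3, 5) = 30
```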

Derek Holt.

--

From: Paul Crowley [EMAIL PROTECTED]
Subject: Re: Hardest ever ECDL solved by INRIA researcher and 195 volunteers
Date: 1 Oct 1999 09:20:20 +0100

Medical Electronics Lab [EMAIL PROTECTED] writes:

 Paul Koning wrote:
  Actually, as of Sept 20, 2000, RSA will have a big advantage...
 
 :-)  Not to the owners of the patent.
 
 The patent status on ECC algorithms is not very clear.
 This is definitely a problem, but not an engineering one.
 I would really like to see what the MQV patent will say,
 if it ever issues.  If it doesn't issue, so much the
 better!

I had understood that the patents attached to particular optimisations 
of ECC, not all of it, and there were reasonable forms that were not
patent encumbered.  Am I wrong?
-- 
  __
\/ o\ [EMAIL PROTECTED] Got a Linux strategy? \ /
/\__/ Paul Crowley  http://www.hedonism.demon.co.uk/paul/ /~\

--

From: [EMAIL PROTECTED]
Subject: Re: Cryptic manuscript... Help
Date: Fri, 01 Oct 1999 10:33:54 GMT

In article [EMAIL PROTECTED],
  "Douglas A. Gwyn" [EMAIL PROTECTED] wrote:

 Also check out Jim Reed's Web site; he has a lot of material
 on the Voynich problem, including lots of references.

And Rene Zandbergen's http://voynich.nu
It is very rich, and has many links to serious sites
on the subject. In particular, Prof. Jorge Stolfi's
(do visit it, if anyone is ever going to crack this
code, I'd say Stolfi is the one).

This site below has about 50 high-quality jpegs (but only
black-and-white). They are each about 100k.

http://www.geocities/4oqpcc89/voygal1.htm

Ah yes, "4oqpcc89" is a  common "Voynichese word"
in a transliteration system called "frogguy".
In E.V.A. it is  "qoteedy". Weird mob, aren't they?


Sent via Deja.com http://www.deja.com/
Before you buy.

--

From: Paul Crowley [EMAIL PROTECTED]
Subject: Re: RSA-512: Weizmann Institute: London Times
Date: 1 Oct 1999 09:27:08 +0100

[EMAIL PROTECTED] writes:
   After an Israeli research institute said it could break Europe's
   banking codes in less than a second, an initiative has been launched
   that could result in unbreakable codes.

If they're named, could you tell us the journalist responsible for
this nonsense?  Britain's Murdoch newspaper readers are currently
facing an assault of crypto-nonsense from an especially ignorant and
stupid journalist named Jonathan Ungoed-Thomas, or
"Double-plus-Ungoed" as he is fondly known.  I'd be interested to know
if this was him.
-- 
  __
\/ o\ [EMAIL PROTECT

Cryptography-Digest Digest #343

1999-04-05 Thread Digestifier

Cryptography-Digest Digest #343, Volume #9Mon, 5 Apr 99 14:13:05 EDT

Contents:
  Re: True Randomness  The Law Of Large Numbers (R. Knauer)
  Re: quick RSA key generation question ([EMAIL PROTECTED])
  Re: True Randomness  The Law Of Large Numbers (R. Knauer)
  Re: SHA ("Chen Yijiang")
  att. Aman ([EMAIL PROTECTED])
  Re: PGPdisk or ScramDisk? (Nathan Kennedy)
  Re: quick RSA key generation question (Ian Goldberg)
  Re: quick RSA key generation question ([EMAIL PROTECTED])
  Re: True Randomness  The Law Of Large Numbers (Herman Rubin)
  Re: True Randomness  The Law Of Large Numbers (R. Knauer)
  Re: True Randomness  The Law Of Large Numbers (R. Knauer)



From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness  The Law Of Large Numbers
Date: Mon, 05 Apr 1999 12:53:37 GMT
Reply-To: [EMAIL PROTECTED]

On Mon, 05 Apr 1999 07:50:24 GMT, "Douglas A. Gwyn" [EMAIL PROTECTED]
wrote:

Herman Rubin wrote:
 In article [EMAIL PROTECTED],
 R. Knauer [EMAIL PROTECTED] wrote:
 On Sat, 03 Apr 1999 10:10:06 GMT, "Douglas A. Gwyn" [EMAIL PROTECTED]
 wrote:
 If you are talking about a physical device then you must treat it like
 a piece of scientific equipment and certify its performance using
 accepted scientific techniques, including a peer-reviewed design audit
 and diagnostic tests for each subsystem.

Please check your attributions more carefully.
I didn't say that, R. Knauer did.

There is nothing wrong with those attributions above. Anyone who has
been on Usenet for any length of time knows that the attributions
above clearly point to me as the author of that statement.

Bob Knauer

"People have criticized me because my security detail is larger
than the president's.  But you must ask yourself: Are there more
people who want to kill me than who want to kill the president?
I can assure you there are."
- Marion Barry, Mayor of Washington DC


--

From: [EMAIL PROTECTED]
Subject: Re: quick RSA key generation question
Date: Mon, 05 Apr 1999 14:04:19 GMT

In article [EMAIL PROTECTED],
  [EMAIL PROTECTED] (DJohn37050) wrote:
 Bob, Also mention the SQROOT(2) method for sizes of p and q in X9.31.
 Don Johnson


Sure.  Suppose one wants a 1024 bit modulus.  It is not sufficient that
p,q each be 512 bits, since their product might be either 1023 or 1024 bits.
To ensure a 1024 bit modulus, one requires that p,q be in [sqrt(2)*2^511,
2^512 - 1].  This is a simple normalization condition.
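A quick numeric check of that bound (Python sketch; primality is
ignored here, only the size normalization is shown):

```python
from math import isqrt
import random

# Smallest integer p with p*p >= 2^1023, i.e. ceil(sqrt(2) * 2^511):
lo = isqrt(2 ** 1023)
if lo * lo < 2 ** 1023:
    lo += 1
hi = 2 ** 512 - 1            # largest 512-bit number

random.seed(0)
for _ in range(100):
    p = random.randint(lo, hi)   # size-normalised candidates
    q = random.randint(lo, hi)
    assert (p * q).bit_length() == 1024

# Just below the bound the product can drop to 1023 bits:
assert ((lo - 1) ** 2).bit_length() == 1023
```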

= Posted via Deja News, The Discussion Network 
http://www.dejanews.com/   Search, Read, Discuss, or Start Your Own

--

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness  The Law Of Large Numbers
Date: Mon, 05 Apr 1999 12:45:57 GMT
Reply-To: [EMAIL PROTECTED]

On Mon, 05 Apr 1999 07:35:31 GMT, "Douglas A. Gwyn" [EMAIL PROTECTED]
wrote:

 Yes, it holds for *most* distributions. But it does not hold for
 distributions that are not square integrable.

So what?  No meaningful situation is going to have infinite energy.

QED has infinite energies. Yeah, renormalization dealt with them, but
nobody considers that scheme to be fundamentally correct.

You must have the "Junior Miss" version of his book (he wrote
several).  He dwelt on it (in the main text) in the college-
level book we were using last summer.

There are many editions of his book. The only one I could find at the
Houston Public Library was the 4th edition.

If he changes his position from one edition to the next, then he is
not a reliable author.

 All the statistical tests in Triola, both parametric and
 non-parametric, require the CLT to be of any use.

That's certainly not true.  When are you going to bother
to learn the subject before making claims about it?

You said that if I read Triola's book I would know all I need to know.
That statement above comes right out of his book.

 If it is supposed to output uniformly
 random bits, and the r.v. X is the value of a generated bit, then
 X has mean 0.5 and s.d. 0.5.
 Prove that. But be careful about your assumptions, because if you go
 off into classical statistical theory you will miss the mark.

That's an elementary exercise for the beginning statistics
student.

Yeah, one of those "back of the envelope" calculations.

I suggest *you* work out the proof; it might be an
opportunity to practice converting "word problems" into formal
specification, after which computation of the answer is easy.

Here is an envelope - you work it out. You are the one making the
assertion.
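For the record, the envelope calculation is short: for a uniform bit X,
E[X] = 1/2 and Var(X) = E[X^2] - E[X]^2 = 1/2 - 1/4 = 1/4, so the s.d.
is sqrt(1/4) = 1/2.  In exact arithmetic:

```python
from fractions import Fraction

# Uniform bit: X is 0 or 1, each with probability 1/2.
p = Fraction(1, 2)
mean = 0 * p + 1 * p                              # E[X] = 1/2
var = (0 - mean) ** 2 * p + (1 - mean) ** 2 * p   # Var(X) = 1/4
assert mean == Fraction(1, 2)
assert var == Fraction(1, 4)                      # so s.d. = 1/2
```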

That's for sure, but only because it makes no sense.
Here are some finite numbers:
   42
   0
   1234566778901033909867041675
   -72

Those are integers and are part of the ensemble of random numbers.

Here are some others:
   0.3